Hybrid Minds: Unlocking the Power of AI + IQ

Inside Walmart's AI-Driven Future for Retail Excellence

Episode Summary

Join host Vahe Andonians for this episode of Hybrid Minds as he discusses with David Glick, Senior Vice President of Enterprise Business Services at Walmart, the extensive role he plays in driving technological innovation within the company. They explore Walmart's strategic shift towards in-house system development, the integration of AI through the Element platform, and the transformation of internal processes. David shares insights on managing large engineering teams and the broader impact of AI on the retail sector, highlighting ethical considerations and the importance of responsible AI use.

Episode Notes

Join host Vahe Andonians for an engaging discussion with David Glick, Senior Vice President of Enterprise Business Services at Walmart, as we explore the multifaceted role he plays in driving technological innovation within the company. From payment systems to people technology, and even legal tech, David sheds light on the vast responsibilities that fall under his purview. We also discuss Walmart's strategic shift towards building more in-house systems to better handle its massive scale, enhancing both agility and customization. Listen in as David shares his insights on managing a large engineering team and the significance of digital transformation and generative AI under the leadership of CEO Doug McMillon.

We delve into the intricate integration of AI and human expertise, particularly through Walmart's innovative platform, Element, which optimizes the selection of AI models for specific tasks. Discover how the challenges of prompt engineering are being addressed and how AI is transforming internal knowledge management. The discussion highlights the importance of achieving accurate, deterministic answers in critical scenarios and the dynamic interplay between AI capabilities and human oversight. This chapter provides a fascinating look at how Walmart is harnessing AI to enhance efficiency and integrate new technologies seamlessly into existing workflows.

Finally, we explore the broader impact of AI on large organizations and the retail sector, focusing on Walmart's pioneering efforts. AI's transformative potential is immense, from boosting productivity and improving customer service to revolutionizing e-commerce personalization and inventory management. Hear David's optimistic perspectives on AI and its future, as well as ethical considerations and the importance of precise guidelines to ensure responsible AI use. This episode is a compelling journey through the innovative ways Walmart is leveraging AI to stay at the forefront of technological advancement.

Key Quote:

"I'm always skeptical of the term guardrails.  These guardrails [are] like we're going to make sure that you don't drive off the cliff into the, into the river or into the ocean, but can we be more precise in guardrails?  I'd rather have lane lines, guardrails are, if you missed the lane lines and you're not paying attention and this and that, we're going to save you, but could we be more precise and say, 'look, you need to be between these lane lines.' And obviously, there's philosophers and the best engineering talent in the world and governance and lawyers and everybody who's trying to weigh in on this."

-David Glick

Time Stamps:

(00:45) What is Enterprise Business Services at Walmart?

(01:55) Building vs. Buying: Walmart's In-House Strategy

(05:28) Integrating AI with Human Operations

(08:15) Challenges and Opportunities in AI Implementation

(20:38) AI's Impact on Retail and E-commerce

(33:45) Ethical Considerations with AI Implementation

Links:

Connect with David 

Visit Walmart

Connect with Vahe

Visit Cognaize

Episode Transcription


 

0:00:03.3 David: I think everybody is trying to figure this out, 'cause, you know, think about it, it was only 18 months ago that we had our ChatGPT moment. So everybody's trying to figure that out. But you know, at Walmart we describe ourselves as people led, tech powered. And so we do want our people doing a lot of the work, but having the GenAI or the assistant simplifying things for them.

0:00:29.2 Vahe: I'm excited to introduce my guest, David Glick. David is senior vice president of enterprise business services at Walmart. Welcome. And thank you for being here today.

0:00:34.8 David: Thanks for having me, Vahe. I'm looking forward to it.

0:00:37.3 Vahe: Tell us what enterprise business services at Walmart actually means.

0:00:44.5 David: Yeah, we like to say it's everything you need to run the enterprise. So when you check out at Walmart, either in the stores or on the website, you use our payment system. When we close the books, those are my FinTech systems. I own all the people technology. So understanding where people are, how much they're paid and so on, and making sure they get paid on time. And then our associate digital experience, which is how do we make a delightful experience for our associates every time they interact with our systems or open their laptops. And finally, we own philanthropy and governance and technology around legal. In addition to the technology, my team owns the operations of finance and of people. And so one of the things we've done is moved the operations close to engineering, so we can have the engineers sit with the operations folks and figure out how to streamline and optimize their processes and automate some of them, which drives a better experience for our associates as well as cost reduction.

0:01:41.8 Vahe: I think I saw that as a theme generally from your posts and your interviews that I think you and Walmart are driving the strategy of building more in-house now than you used to. Can you elaborate? 

0:01:57.3 David: One of the things that I found over the last many years is that many third-party systems don't scale to Walmart scale. They're built for thousands of people or tens of thousands of people, not millions of folks. And so every time we look at a system, we want to decide, should we build or should we buy? And obviously, everybody does that. My experience says that when we build, we can build all the different ramifications or all the different optimizations in, and it allows us to be more agile. Buying is wonderful in that if you can take something out of the box, it usually puts you ahead a few weeks, months or years. But if you wanna move this button over here or change a workflow or take into account a special use case, I love the ability that we can do that in a sprint rather than having to write business requirements and go back to the vendor and so on to get these changes put in.

0:02:53.2 David: So obviously, sometimes it's right to buy and sometimes it's right to build. But I do admit I have a bias towards building and having control.

0:03:01.7 Vahe: Yeah, especially at your scale, of course, it makes a lot of sense, right? Not many companies are at the scale that Walmart is.

0:03:10.4 David: Yeah, really zero companies. One company is at the scale of Walmart.

0:03:16.9 Vahe: Exactly, exactly. But you know, if you're managing a team and then especially with this emerging technology, I mean, AI as a word is maybe not emerging, but GenAI certainly is and it's penetrating a lot of processes and things, like how do you manage then, what projects are being done, how you do that, how you do the knowledge transfer and all those important things around it? 

0:03:40.2 David: Yeah, I mean, one of the great things about being at Walmart is Doug McMillon has been a real advocate of building our technology team. We're up to 25,000 engineers strong. And so if you think about companies who can do digital transformation, there's a very small number of those. And I believe we're one of those. Obviously, there are big tech companies who have more engineers than us, but not very many of them. And so I think you'd throw us in with the Googles and Microsofts of the world in terms of being a big technology organization. Doug came out at CES this year and has been pretty consistent that he wants us to be a leader in GenAI. And so not just let's find a few places to use it, but every single organization should think about how we can use this technology and be at the forefront of it.

0:04:25.8 David: One of the cool things we did that I hadn't seen before is what we called idea jams. And so we get the business folks in a room for an hour or an hour and a half and explain, what can GenAI do for you? Or what can GenAI do generally? And then have them come back and say, oh, here's some use cases which we think are relevant to us. And we have the engineers in the room with them categorizing: oh, this we could do, this will be harder. Then we take all these things away, come back and pick one or pick three, and go dive into those, because there's sort of infinite opportunity at this point.

0:05:00.6 Vahe: Yes, it really seems like that. And then when you focus so much on GenAI, I mean, certainly the question at least I think arises, what's the role of humans and how is the interaction with humans? Is it even necessary or is it like totally autonomous? Or how do you cooperate or incorporate humans with AI or the other way around this? 

0:05:24.7 David: Yeah, I mean, I think everybody is trying to figure this out. Think about it was only 18 months ago that we had our ChatGPT moment. So everybody's trying to figure that out. But Walmart's, we describe ourselves as people led, tech powered. And so we do want our people doing a lot of the work, but having the GenAI or the assistant simplifying things for them. And so a simple use case is, oh, we need to do a press release. So let's have the AI write a first draft of that. And then we can have the human touch it up and get it to be right. Or we started this project by writing down principles like North Star guiding principles. And so I asked one of my team members to do that. And he sent me these well written, authentic guiding principles. I'm like, Oh, this is great. He's like, yeah, I use my assistant, which is our GenAI tool, which is built into our consumer grade associate facing tool.

0:06:20.1 David: And so I thought that was great, that we're starting to use it every day. And we can look at what people are using it for. Lots of companies are saying, oh, tops down, we'll tell you what you should do with GenAI, or we'll hire a consultant to tell us what to do. But really, we've come at it from both directions: tops down, we have some things we wanna do, but also a sort of crowdsourced, bottoms-up view of what people are using this for. And then if we see what people are using it for, that will inform what's the next thing we wanna work on.

0:06:46.8 Vahe: I think that that is a really interesting approach, right? From coming from both sides, basically, because it is a very new technology, and it's hard from either direction only to approach it, right? 

0:07:00.3 David: Yeah, I think back to this book I read, like 25 years ago, Stephen Covey's book, The Seven Habits. And I think one of the seven habits was think from a position of abundance, not scarcity. So many people say, Oh, we can only do two GenAI projects, like let's choose carefully.

0:07:18.2 David: And it's like, we can do as many as we can think of and that are useful. And so one of the fun things about this is coming in and saying, oh, we could do this, or we could do that. And let's do it all. And so being at a company that has the resources, both in terms of cash flow but also in terms of engineering talent, really opens it up and makes it more fun than being at someplace where they don't have the resources.

0:07:40.1 Vahe: Makes sense. But at your size, the question, I mean, this question I usually don't ask, but at your size I almost have to ask: do you actually train language models from the ground up? Or do you fine-tune or work with some of those?

0:08:00.0 David: Yeah, another good thing, I feel like this is a good choice we made, is we have a team in the platform organization who has built a platform called Element. And below that are open source LLMs, non-open-source models, and hyperscalers. There's all these options. And then I, as an app developer, get to sit on top of that. And so I can just call Element, and it will get the right AI to answer the right questions at the right time for the right cost. And so, well, that is a good question; fortunately, I'm not the one answering it. For the enterprise, I just call our centralized platform, and they do a lot of that.
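To make the idea concrete, here is a minimal sketch of what an application team calling a centralized routing layer like Element might look like. All of the names (ModelRouter, RoutingRequest, the registry fields) are illustrative assumptions, not Walmart's actual API; the point is only that the app states its task, cost, and latency constraints and lets the platform pick the model.

```python
from dataclasses import dataclass

@dataclass
class RoutingRequest:
    task: str                # e.g. "summarize", "qa", "classify"
    prompt: str
    max_cost_per_1k: float   # dollars per 1k tokens the app is willing to pay
    max_latency_ms: int

class ModelRouter:
    """Hypothetical routing layer over open-source LLMs, proprietary models,
    and hyperscaler endpoints, in the spirit of the Element platform."""

    def __init__(self, registry: list[dict]):
        # registry entries: {"name", "cost_per_1k", "p95_ms", "call": callable(prompt) -> str}
        self.registry = registry

    def route(self, req: RoutingRequest) -> str:
        candidates = [m for m in self.registry
                      if m["cost_per_1k"] <= req.max_cost_per_1k
                      and m["p95_ms"] <= req.max_latency_ms]
        if not candidates:
            raise RuntimeError("No registered model meets the cost/latency constraints")
        # Simplest possible policy: the cheapest model that satisfies the constraints.
        best = min(candidates, key=lambda m: m["cost_per_1k"])
        return best["call"](req.prompt)
```

An app team would then ship one RoutingRequest per use case and let the platform team evolve the registry and the routing policy behind it.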

0:08:35.5 Vahe: There are difficulties though, right? Because, unfortunately, prompt engineering is still a thing. It would be great if it weren't, but it still is a thing. And I wonder how you manage it if you have several different language models, let's say open source or whatever models. Do you have like a semantic kernel that then translates a query for a certain language model? Or do you just trust that it will be good enough to understand the same prompt?

0:09:00.5 David: Yeah, I mean, I think it's early days. It's been 18 months, so it shouldn't be that early days, but if you think about it in the grand scale of things, it's still pretty early days. One of the things I always think back to is the Netflix Prize; you remember Netflix did this competition for the recommendations engine, and the team that won took a bunch of different engines, or a bunch of different algorithms, and averaged them. And that was the winning solution. Our chief economist at a previous company taught me that. And so one of the interesting things is, take a two by two, and say, I'm running two models. They both agree this is a good thing? Okay, great, we should probably go with that. They both say this is a bad thing? We should go with that. But the interesting cases are when you run two different models and get two different answers. And so you can have a human go dig into that and try to figure out why it came up with different answers, as well as what we're gonna do and how we're going to leverage that to get the right answer.

0:10:07.1 David: I think that's where the fun comes.
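A minimal sketch of that two-model agreement check, under the assumption that each model exposes a simple callable returning a yes/no judgment (the callables here are placeholders, not any specific vendor API): agreement is accepted automatically, and disagreement is routed to a human to dig into.

```python
from typing import Callable

def ensemble_decision(item: str,
                      model_a: Callable[[str], bool],
                      model_b: Callable[[str], bool]) -> dict:
    """Run two models on the same item; auto-accept agreement, escalate disagreement."""
    a, b = model_a(item), model_b(item)
    if a == b:
        return {"decision": a, "needs_human_review": False}
    # The disagreement quadrants of the two-by-two are the interesting ones.
    return {"decision": None, "needs_human_review": True,
            "votes": {"model_a": a, "model_b": b}}
```

The same pattern generalizes to averaging more than two models or algorithms, as in the Netflix Prize example.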

0:10:09.3 Vahe: True, true. Basically, you're building an ensemble of these things, almost like a decision tree, a random forest kind of thing, where you have several models, and then you choose the...

0:10:19.2 David: The algorithm of algorithms.

0:10:20.0 Vahe: Yes. Interesting. Yes. But you also have, I assume, lots of internal knowledge and things that you want to incorporate into your assistants. How do you deal with that?

0:10:33.6 David: Yeah, I mean, I'm gonna oversimplify it, but I'll give you an example. I own the benefits help desk. So when an associate starts, they get a 300 page benefits guide. And they can read it and try to figure out, do they have a 401k? Or can they get an arm surgery, elbow surgery? And it's hard to know. And so they call the benefits help desk and ask them. And there's several hundred agents who have read the guide. But the guide changes all the time. And so think about, what is better at memorizing a 300 page document, an agent or an LLM? And I think the answer over the long term is an LLM. And so putting that into a VectorDB is actually pretty easy. I had a principal engineer go do that, and it took him a weekend, and he was able to answer 90% of the questions where the agents said, these are the right answers. But you don't want to give people the wrong answer. So the hard part is not the technology.

0:11:37.7 David: It's not sticking a benefits guide into the VectorDB. It's that, given a question, how do we turn a probabilistic answer from the LLM into a deterministic answer? Because if you tell an associate, oh, you qualify for this surgery, and they go spend $20,000 on a surgery, and then it turns out that they didn't, that's a life-changing event. And so it's important to us to figure out where we need the LLM to give source material, but have a human or some other mechanism to make sure that it's right. And so that's the fun and interesting part.
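A toy sketch of that pattern: retrieval over the benefits guide plus an explicit abstain path, so low-confidence questions fall back to a human agent instead of a guess. The similarity measure here is a deliberately crude bag-of-words score and the guide snippets are invented; a real system would use embeddings, a vector database, and the actual guide text.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question: str, chunks: list[str], threshold: float = 0.2):
    """Return the best-matching guide chunk, or None to hand off to a human agent."""
    q_vec = Counter(question.lower().split())
    score, best = max((cosine(q_vec, Counter(c.lower().split())), c) for c in chunks)
    return best if score >= threshold else None

guide_chunks = [
    "Full-time associates are eligible for the 401k plan after 90 days.",
    "Vision coverage includes one eye exam per plan year.",
]
print(retrieve("When am I eligible for the 401k plan?", guide_chunks))
```

The interesting part, as David says, is not the retrieval itself but deciding where the confidence threshold sits and who reviews the answers that fall below it.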

0:12:05.4 Vahe: I feel the same. The integration of humans and AI is the most interesting part, actually. And it's also, I think, very interesting because it's very dynamic and changing, because with every added capability of AI, this needs to be rethought again. Like, how are these two actually cooperating meaningfully with each other? And I think that's very important also from an efficiency point, which is, of course, one of your hallmarks. And if you look at your fascinating CV, efficiency has been one theme across that. How do you think about that with GenAI now?

0:12:13.0 David: Yeah. I mean, it's interesting, because most of the time automation is much easier than the other pieces. The way I think about it is automation, adoption, and audit. So first we have to write software, and we all know how to do that. And it's actually pretty simple, especially with GenAI; you get a huge lever. And so, like I said, in a week you can put things in the vector database and figure it out. Then the second piece is adoption. How do you fit this into the workflow? How do you get the humans to use it? How do you make it part of everyday life? And then the third is audit. We should do quality audits of everything humans do, but the AI can screw things up much faster than humans. And so we have to have the right mechanisms to make sure it doesn't run off and do the wrong thing at scale. That's been my whole career, whether it's pricing automation, inventory ordering, or fulfillment centers; you can apply that principle of automation, adoption, and audit. The adoption piece is the thing that I really made a career of, which is, how do we have an engineer build something that's actually useful to the humans?

0:13:56.1 David: And then how do we get the humans to buy into it and say, this is actually making my life easier? And of course, in the background, it's like what is the future for me? And what we found is that the best people actually turn into like product managers. When you do a big transformation, people who are domain experts have access to all these engineers to help them do their job better. And so rather than making widgets or being an operator, they can go be a product manager and that leads them into a whole new career path.
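Going back to the audit leg of that automation, adoption, and audit framework: one simple mechanism is to sample a fraction of outputs for human quality review, with a higher sampling rate for AI-generated work than for human work, since an automated system can go wrong at scale much faster. A rough sketch; the rates and record shape are illustrative assumptions, not Walmart's actual process.

```python
import random

# Illustrative audit rates: review AI output more aggressively than human output.
AUDIT_RATES = {"human": 0.02, "ai": 0.10}

def select_for_audit(records: list[dict], seed: int = 0) -> list[dict]:
    """records look like {'id': ..., 'source': 'human' or 'ai', 'answer': ...}."""
    rng = random.Random(seed)
    return [r for r in records if rng.random() < AUDIT_RATES[r["source"]]]
```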

0:14:23.6 Vahe: It reminds me very much of Steve Jobs' story about his product managers, who chose to be that; they could actually have been something different, but they chose this because they were so interested in solving that problem. And so obviously those people very much relate to the issues, because they went through them, and they can design systems around this much better if they have the skills for doing that.

0:14:49.8 David: Yeah, it's funny sometimes we have people push back and like you have a whole engineering team who are listening to you wanting to build exactly the thing that you want. How wonderful is that? And so whether you put the job class of engineer or product manager or whatever, being a product owner and getting to design your own system that takes away all the frustration that you've had from either doing things manually or using a third-party system that doesn't quite meet your needs. Like this is an exciting time for the best of the people who are using the system and helping us drive it forward.

0:15:22.1 Vahe: That brings me to an interesting question, where you just said for the best of the people, right? You have such a large organization, so obviously there are differences in people's preferences, but also in skill levels and other things. Does everybody benefit the same way from AI? Is it enabling everybody, or do you see a huge difference in who benefits?

0:15:48.7 David: I think you probably sort of the middle 90% or the middle 80% all kind of get the same benefits. You've got some folks at the top in particular who are like, I'm gonna use this. I'm gonna figure out how to do my job and they're gonna be 10 times more productive. I remember a story and this doesn't have to do with AI, but I don't know if you remember Tim Ferriss wrote this book, The 4-Hour Workweek, and he talked about virtual assistants in India. And there was someone at a previous company who was on the seller sales team, signing up sellers from the marketplace. And she found a virtual assistant in India for whatever, four bucks an hour. And she would send them a list of leads every night before she went home and she'd come in and there would be all this data about these sellers. And so she knew who to call and what buttons to press.

0:16:35.1 David: And she was like 10 times more productive. And finally someone went to her and said, what are you doing? And then we went back and built a process and scaled it up. And so we want to give people, like the creativity is gonna come from the humans. And so we want to give them as much leeway in being creative. And then how do we harvest each of those great ideas and build it at scale so that everybody gets the benefit.

0:16:55.7 Vahe: Before we go to retail, because I'm very interested in that, just because this question popped up in my mind before: when you were building assistants, are you actually doing different modalities like voice also, or is it text-based?

0:17:11.2 David: It's primarily text. What I'm doing is primarily text-based. I mean, it's interesting because when you have customer service contacts from outside, you have probably mostly email, a little bit of phone. In our benefits help desk, it's like 99% phone, which is a much harder problem than parsing emails because you have to do it in real time.

0:17:33.6 David: But one of the exciting things is, how do we pass that into our LLM? And the LLM can tell the agent who's talking to the associate, ask them this question, ask them this question. Or it can say, oh, we know who this person is. They are salaried and they work out of this state and they've been here this long. And that will inform, do they qualify for a 401k, or do they have insurance for this surgery, and so on. Augmenting the human in real time is actually quite a hard problem, but as these LLMs get less latent and more performant, it's becoming possible for sure.
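A small sketch of that real-time agent-assist idea: combine what is already known about the caller (salaried or hourly, state, tenure) with the conversation so far, and ask the model for the next question the human agent should ask. The prompt wording and the llm callable are illustrative assumptions, not a specific product.

```python
def suggest_next_question(llm, caller_profile: dict, transcript_so_far: str) -> str:
    """llm is any callable that takes a prompt string and returns a string."""
    prompt = (
        "You are assisting a benefits help desk agent on a live call.\n"
        f"Caller profile: {caller_profile}\n"
        f"Conversation so far: {transcript_so_far}\n"
        "Suggest the single most useful next question for the agent to ask."
    )
    return llm(prompt)

# Hypothetical usage:
# suggest_next_question(my_model,
#                       {"employment": "salaried", "state": "AR", "tenure_years": 3},
#                       "Caller asks whether elbow surgery is covered.")
```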

0:18:10.4 Vahe: Basically like a just-in-time assistant that would literally follow or not follow, but immediately answer questions and then augment the data to answer in the best possible way.

0:18:24.5 David: And one of the things, the best agent is probably gonna be better than the LLM. They have been working at this for 10 years. They know what the benefits guide said, all these things. But one of the most expensive parts about running a help desk or running any operation is training. And so if you can have someone on day one, have the LLM walk them through the questions they need to ask, that's gonna take the training time way down.

0:18:48.0 David: At a previous company, we went from customer service using green screens and command line tools to a UI. And the old timers were like, oh, this is horrible, it takes me twice as long. But the folks who were coming on were like, this is much easier to use, and I don't need to set my alias files in Unix and all this stuff. And so making it simpler for the user is one of the key components to this.

0:19:11.2 Vahe: And let me switch a little bit to retail because when I did my MBA, I think probably five or six of the case studies were about Walmart, how great this company is, how well they manage certain things. So, how do you think that AI is impacting retail? 

0:19:29.7 David: That's a great question. I think Walmart has been running stores for a long time and is the best in the world at doing that. In fact, at a previous company, we hired a bunch of people from Walmart because we're like, they're the best operators in the world and we want some of that. So we're great at that. E-commerce is relatively new. I mean, we are, I think, the second biggest e-commerce company in the world, or in the country. So we're not nothing, but how do we build a better search, right? How do we build a more personalized search? How do we rank things correctly? How do we rank product detail pages? All of that on e-commerce is a wide open space we can play in. And I think there's vast opportunities to improve. If you think about the stores, how can we... And this is probably more AI than GenAI, but how can we figure out what inventory to count? Not all inventory is created equal.

0:20:26.7 David: And you can do a wall to wall count once a year, and that costs you X dollars. But if you say, this thing turns a lot faster, so we should count it more often, or we should count it less often, or we should do it with RFID. All of those things are sort of anomaly detection and sophisticated algorithms that we used to call machine learning. Now we call it AI, and maybe we'll call it GenAI in the future, but one of the cool things about this whole GenAI revolution we're in is that it's brought AI to the forefront as well. And so using, call it, sophisticated algorithms... Lots of people call lots of stuff AI, but even a decision tree is AI, right? Even less sophisticated algorithms can help you with anomaly detection and so on.

0:21:12.4 David: And when I try to explain AI to folks, I think about it like this: if I'm a human, I say, oh, if this happens and this happens, I know the root cause of this. And I can do that with maybe two, maybe three variables in my head. But if I want 100 variables or 1,000 variables, or they're training 128-billion-variable models now or whatever, a computer has to do that. And so being able to find anomalies in this multivariate space is another fun and exciting area, which is not really GenAI, but it's AI/ML.
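As a concrete illustration of that multivariate anomaly idea, here is a deliberately simple sketch that scores inventory records by their worst per-feature z-score and prioritizes the most anomalous ones for a count. The feature names and numbers are invented; a real system would use far richer features and models (isolation forests, learned baselines, and so on).

```python
import statistics

def anomaly_scores(rows: list[dict], features: list[str]) -> list[float]:
    """Score each row by its largest per-feature z-score."""
    stats = {f: (statistics.mean(r[f] for r in rows),
                 statistics.pstdev(r[f] for r in rows) or 1.0) for f in features}
    return [max(abs(r[f] - stats[f][0]) / stats[f][1] for f in features) for r in rows]

rows = [
    {"turn_rate": 4.1, "shrink_pct": 0.5, "count_variance": 1.2},
    {"turn_rate": 3.9, "shrink_pct": 0.6, "count_variance": 1.1},
    {"turn_rate": 9.8, "shrink_pct": 4.2, "count_variance": 6.3},
]
scores = anomaly_scores(rows, ["turn_rate", "shrink_pct", "count_variance"])
# Count the highest-scoring items first instead of doing a wall-to-wall count.
recount_order = sorted(range(len(rows)), key=lambda i: -scores[i])
```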

0:21:45.5 Vahe: Yeah. And a big topic not only in retail but in many other areas as well. I mean, in the area we're talking about, e-commerce, search and ranking and so on, I always argue that with AI, what you can really do now is deliver a much more intimate experience for somebody. Just imagine, I would actually expect at some point that if I open Walmart's e-commerce site, it looks entirely different than if my wife opens it, because I have entirely different preferences. If we just assume for a moment that for whatever reason Walmart knows that, it could even generate different descriptions of products based on my preferences. Do you see that happening?

0:22:32.8 David: I mean, people have been working on personalization for decades, and first it was just, people who bought this book bought that book, so let's put that at the top. And then it was, let's use some sort of simple machine learning, and then let's use AI, and now let's use generative AI. But I think generative AI was a step change in being able to understand what the attributes about you are and how we build that into our model. In my car we've got a nav system, and I pull out of my driveway and it always wants me to turn left because that's the shortest distance into Seattle. But I always wanna turn right, because I go up the straight road rather than the windy road, and I'm waiting for my car to learn that every single time it tells me to turn left, I turn right. It should personalize that for me. So even the most sophisticated automakers still struggle with this personalization. And so it'll be great for us to jump ahead and be able to build a whole different store for you. You may come to the video games tab and your wife may go to the apparel tab as the homepage, or vice versa.

0:23:38.5 Vahe: I feel that too, because ultimately sales is a lot about building intimacy, right? And understanding and then having empathy towards your potential or real customer. And with this, you can do it at a much deeper level. We can argue a lot about intelligence and how intelligent it is and so on, but I think one thing that is really settled is that these LLMs have really mastered language. The content might be wrong from time to time, but the way they present it is so convincing, because they really have mastered the language part of it, right? So we could actually tap into that.

0:24:16.9 David: That's the scary part. I think there was a description of someone: often wrong, never uncertain. They are so charismatic and so convincing that even if it's a wrong answer, you'll believe them. And I think the LLMs have some of that problem now too.

0:24:34.0 Vahe: Yeah, I wrote a blog post that was picked up about that because ultimately I was asking a question about the color of the uniform of the National Guard during the Napoleonic Wars, and I was very sure what the color is, and it answered an opposite thing, but so convincingly, and I promise you I knew that information and I don't know it now anymore because I'm now confused myself. Like I unlearned it because I read all these things about it. I was like, is this true? And then now I don't even remember which one is the true answer. So that can be dangerous to some extent. Yeah.

0:25:12.9 David: Yeah. And I mentioned automation, adoption, audit. One of the audits we can do is ask the AI questions that we know the answers to, and that will tell us if it's good or not because, and I read about some effect, I don't remember what it was called, but people will say, oh, this doesn't work because I know the answer to this. But then they'll go to something that they don't understand and they'll believe the LLM.

0:25:34.9 Vahe: Yes.

0:25:39.1 David: When they should carry over this healthy skepticism. It's a hard problem. This whole content moderation and getting the right answers. I'm glad that's not my full-time job.

[laughter]

0:25:50.0 Vahe: Yeah, I mean, it's also, I have to say, an interesting field, right? Because lots of interesting things are happening there, and one of the things that is happening is these verifier models, right, that you were also talking about. Literally training a model for verification purposes, which of course shifts computational power needs from training to inference, which is not desirable. But then again, if it's that much better, it might be the way to go, right? I think it's an interesting field, and you brought up a very important point: some answers you know already, right? There's a technique where you can actually embed automatically some questions where you know the answer, and then, based on that, judge whether the rest is correct or not.

0:26:36.7 David: And it's like writing unit tests: you wanna assert, if the input is one, test that the output is what it should be. I know the answer; if the input is one or the input is zero, it should at least give the right answers there.
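In that unit-test spirit, a sketch of the known-answer audit the two of them are describing: seed an evaluation set with questions whose answers are already known, and only trust the rest of a batch when the known ones pass. The questions, expected answers, and the ask_llm callable are all illustrative assumptions.

```python
from typing import Callable

GOLDEN_QUESTIONS = [
    ("Does the plan include a 401k for full-time associates?", "yes"),
    ("Is cosmetic surgery covered by the base medical plan?", "no"),
]

def audit_known_answers(ask_llm: Callable[[str], str]) -> float:
    """Return the fraction of known-answer questions the model gets right."""
    correct = sum(1 for q, expected in GOLDEN_QUESTIONS
                  if ask_llm(q).strip().lower() == expected)
    return correct / len(GOLDEN_QUESTIONS)

# Hypothetical usage: only accept a batch of unreviewed answers if the
# known-answer set passes completely.
# if audit_known_answers(my_model) < 1.0:
#     route_batch_to_human_review()
```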

0:26:48.1 Vahe: I also read one of your articles, a very interesting article, and you were also mentioning there that besides operational tasks, you would at some point also like to see AI in strategic decision making. Can you elaborate?

0:27:06.4 David: Obviously, we can streamline things, and I've spent a lot of time driving optimized operations over the last couple of decades. I remember reading or listening to NPR probably 15 years ago where they were talking about Auto Steve. At Apple, they were trying to capture all of the inputs that Steve Jobs had when he made a decision and then capture his decision and see if they could build a machine learning model to replicate him. They called it Auto Steve, or they called that project that. And today, I read on Google News, in my personalized feed, I get stuff about Bitcoin, right? And it says the AI has been trained to tell us what the Bitcoin value is gonna be at the end of the year. And like, how do I know if it's right or wrong?

0:27:53.4 David: But with these multi-variate models with hundreds of billions or trillions of inputs, it'll be fascinating to see if it starts getting things right. It reminds me of... There was a guy I was interviewing for an intern role, I think, and he was like, yeah, in my spare time I write code to figure out who I should bet on in the football games, and I can only beat the spread 51% of the time, so I gave up on it. Like, 51%, that's great. [laughter] Anyway, computers can make these... If they have enough data, they can make good decisions, often even about telling the future.

0:28:32.9 Vahe: Yeah. And I think it's very important because the difficulties that we have in many areas, being from environment all the way to healthcare, will require support probably of these kind of systems to actually tackle them and solve ever more complicated issues, right? 

0:28:50.8 David: I think healthcare is maybe the killer app for this thing. There's AlphaFold, which they just released, which is gonna allow us to test millions and millions of drugs, or protein foldings, in a tenth of the time it took to do one drug; today you can do millions of these. And for our economy, I don't want to go too far off on a tangent, but our economy is dependent on good healthcare. And so if we can come up with the right medicines to keep people healthy, that'll be a huge boon.

0:29:21.7 Vahe: Well, I think so too. These are really difficult topics, right? If you look at some of those things, the more we prolong our lives, the more cases of cancer we have and the more cases of things that are really difficult to solve. And I feel like without AI it's almost impossible to really have a chance of solving those things. So, as you touched on it, and from your positive attitude I'm 100% convinced of your answer, but I'm still gonna ask it: does AI bring us good, only good, or are there problems? And, if not, then tell us why.

0:30:05.5 David: There's always gonna be problems but I think it's mostly good. I think about who's going to win? Is it gonna be OpenAI? Is it gonna be DeepMind? Is it gonna be somebody else? Like, who knows? But in the end, we're all going to be... We're all gonna benefit from this. And I think about the same thing with autonomous cars. Is it Tesla? Is it BYD? Is it somebody else? That's gonna bring us autonomous cars, but in the end, we're gonna have very... We're gonna have less accidents, because these folks are spending billions of dollars on Nvidia clusters and so on, or building their own. But like, I think for the most part, this is all good. I would not want to get scared away from all the good things. Like you just have to manage problems. Be aware with 'em and don't have Pollyanna glasses, but you need to manage these things and take the good.

0:30:56.6 Vahe: Yeah, I think so too. We probably just need it for all these problems to solve, and yeah, there will be side effects, and I think there are a lot of voices out there that are very critical, and I can absolutely see that there are risks attached to it. But we can also manage those things, right? It's not like we're just on the receiving end of it. We can actually design things around how we use it. Having said that, I would assume that at Walmart you also have ethical standards and other guidelines for these AI models. We also work actively on guardrailing, on making sure that certain things are just not possible.

0:31:41.3 David: I spent a lot of time with the governance team and the legal team and the HR team, and it's super important that we do the right thing for our associates and do the right things for our customers and keep everybody safe. Because, as you know, data's the new oil. And privacy is super important. And I'm always skeptical of the term guardrails. These guardrails are like, we're gonna make sure that you don't drive off the cliff into the river or into the ocean, but can we be more precise in guardrails? I'd rather have lane lines. Guardrails are, if you miss the lane lines and you're not paying attention and this and that, we're gonna save you. But could we be more precise and say, look, you need to be between these lane lines? And obviously, there's philosophers and the best engineering talent in the world and governance and lawyers and everybody who's trying to weigh in on this. And I'm happy that that's not my full-time job again, [laughter] I'm just making widgets, trying to make them as efficiently as possible.

0:32:38.9 Vahe: But that's what I wanted to say, highly efficient ones, right? And that's the fun part of it. So what is your prediction about what AI can do in retail? I'm not talking about numbers and P&L and these things, but in terms of really advancing this level of experience: where do you think we'll land in five years in terms of what is possible with AI in retail?

0:33:07.7 David: Yeah. I think marketers, advertisers have been trying to figure out how to put the right thing in front of the right customer at the right time for millennia, really. And obviously, you go from stores, where you can change prices once a week, or it's very expensive to change prices or to put something on a different shelf, to an e-commerce site, where you can change the price with the flip of a switch, or even let, you know, the computer change the prices, and you've got automated Google sponsored links and all these things. So we've already seen a big acceleration of putting the right thing in front of customers, and this'll just supercharge it. I assume you've read Black Swan, or you know of Black Swan, and they talk about how positive feedback loops are sometimes bad, right?

0:33:57.6 David: Because it can send things off the rail, the guardrail, I guess. But I think we're in a positive feedback loop, right? The more data we have, the more precise we can give you the product that you're looking for. I was talking to a friend the other day when I was out running and he's like, you need some new shorts. You lost a bunch of weight, your shorts are falling down. And I get back and on Instagram it shows me all the advertisements I got were for running shorts. And so I'm like, how do I feel about that? And I better go check my settings and see if it's listening to me. But broadly, I guess, if I'm going to have to look at advertisements, I'd rather look at advertisements which are relevant to me than not.

0:34:32.8 Vahe: Honestly, I think that maybe advertisements are also just one step, right? Because you could imagine, if these things are so correct in their predictions, you could just wake up the next morning to a doorbell and somebody giving you your next shorts, and then you can send it back or not send it back if you like it. I mean, theoretically, I understand there are lots of legal and other implications to that, but that's certainly the next level of dreaming about how this could work out.

0:35:00.7 David: Well, there was a meme about 10 years ago called Yesterday Shipping. I don't know if you saw the YouTube video: the retailers are moving to Yesterday Shipping, where you go to the website and look at this thing and it says, oh, we already shipped it to you, it should be on your porch in a few minutes. But one step back from that is that putting inventory close to customers drives speed and drives cost reduction. And you know what company is within 10 miles of 90% of the people in the US? It's Walmart. So Walmart has a huge advantage there, because we have inventory storage places right next to customers. Ideally, Yesterday Shipping, we put something on your porch before you asked for it. But one step back from that is we put it in inventory near you so you can drive there, or we now do order pickup and delivery, where we do same day shipping from stores, or you can drive to the parking lot and we'll load your trunk. Anyway, putting inventory close to the customer is sort of the holy grail for retailers, and Walmart has more data and more inventory sources than anybody else in the world. And so we should be great at that.

0:36:13.9 Vahe: Absolutely fascinating. Thank you very much for your time and the insights that you delivered. I mean, it's very refreshing to see these positive attitudes towards AI these days, I have to say. Love it. And thank you so much for your time.

0:36:29.7 David: Yeah, thank you for having me. It was a pleasure to be here. Glad to spread some optimism. [laughter]

0:36:34.2 Announcer: Thanks for listening to this edition of Hybrid Minds. This podcast is brought to you by Cognaize, the first-of-its-kind intelligent document processing company, which automates unstructured data with hybrid intelligence and a data-centric AI platform. To learn more, visit cognaize.com, and be sure to catch the next episode of Hybrid Minds wherever you get your podcasts.