This week, join Beena Ammanath, the driving force behind the Global Deloitte AI Institute and Trustworthy AI, and Lara Abrash, Chair of Deloitte US, in a captivating dialogue exploring the evolving synergy between human intelligence and AI, along with its profound ethical and societal implications.
Beena sheds light on the boundless potential of AI as it transforms industries and simplifies everyday tasks, underscoring its emergence as a powerful tool across diverse sectors.
Meanwhile, Lara delves into the challenges organizations face amid the rapid evolution of AI, emphasizing the critical need to foster fair and inclusive opportunities in its application.
Together, they emphasize the imperative of embedding ethics into the core of AI development and more broadly throughout organizational frameworks. Their insightful conversation navigates the intricate intersection of technology, ethics, and societal impact, highlighting the pivotal role of responsible AI deployment in shaping our collective future.
Key Quotes:
Time stamps:
Lara: [00:00:00] Generative AI is not something that's going away. It's ultimately about being focused on things like reliability and trust, and also making sure that the capabilities of our humans are evolving as the technology is evolving, so we have the ability to have more of us capable of understanding, leveraging, and analyzing generative AI.
Vahe: Please join me in welcoming Beena Ammanath. Beena is an influential advocate for women and multicultural inclusion in the tech industry. Her work and contributions have been acknowledged with numerous awards and recognitions. Beena currently leads the Global Deloitte AI Institute and Trustworthy AI and ethical technology at Deloitte.
Prior to this, she was the CTO at Hewlett Packard Enterprise. Before HPE, she was Vice President of Innovation and Data Science at General Electric. [00:01:00] Welcome. And please also join me in welcoming Lara. She is the Chair of Deloitte US, the largest professional services organization, with more than 170,000 professionals. Lara stepped into this role in June 2023, after serving four years as the Chair and Chief Executive Officer of Deloitte & Touche, where she was responsible for overseeing the U.S. audit and assurance business. Lara is a member of Deloitte Global's Board of Directors and Chair of the Deloitte Foundation. Tell us about your experience of AI thus far, perhaps starting with you, Beena.
Beena: So I actually studied computer science and, you know, this was a long time ago. I studied AI in theory because at that time we couldn't really actually do anything with AI.
But, you know, looking back, it is fortunate I chose the path of, you know, focusing on data, right? After I finished my computer science degree, I started out [00:02:00] as a programmer, you know, database administrator, data developer, and saw the whole evolution of business intelligence, data warehousing, big data.
So I've seen AI evolve, moving from transactional systems to business intelligence systems and now to AI. And it is a very exciting time for us to be living through this journey, seeing this whole journey of, you know, AI going from theory to reality. Now it's everywhere, it's prevalent, right? So I feel very fortunate to be part of this journey.
Vahe: Yeah, it's true. It's, it's a very special time. You're right. Lara, what about your experience?
Lara: Well, I would say my formative years are a lot different than Beena's. Much of my career, I grew up in our audit and assurance business at Deloitte, and much of it was not a technology-based set of work, nor were our clients.
We were not exposed deeply to technology. And about eight [00:03:00] years ago, I was asked to lead the transformation of our audit business. And I started to really immerse myself in technology, in understanding the power of data, in understanding analytics. And ultimately we were just dabbling in artificial intelligence, and you could see, as we were working our way through it, that the pace of change was just unbelievable.
Eight to six to four years ago, when I think about it today, it really is one of the things that's going, I would just say, the fastest I've ever seen in my 30-plus years of working. It's unlike anything before. So when I think about, you know, my coming into this movie, it's ultimately something that I had very little involvement in.
And society, about 8 or 10 years ago, really got deep into everybody touching technology, and now everybody's touching generative AI. And in my role, I also spend a lot of time with boards and board chairs, and this is at the top of every board agenda. [00:04:00] It's a very top item. There's not a client meeting we have where there is not a question about how we approach it.
So, I see it in all facets, but it really has come like a fast movie in the last couple years.
Vahe: That's true. Beena, you recently posted a piece about the state of ethical standards for companies using AI, and I think it's a shocking number that don't know if they actually have standards. What's your take on this? Were you shocked when you did this research?
Beena: Yeah, and in a way it's not shocking, right? I think the pace of technology is growing so fast. The positive value creation, the business value creation from using AI or any technology: all the investment and research primarily goes into that. Not as much investment or research goes into identifying the ethical risks, or any kind of risks, that come with using the technology.
We've seen that happening with social media, we've seen that happening with other [00:05:00] technologies as well, right? Not as much emphasis. And trust me, I'm a computer scientist by training, and it's very easy to stay focused on the shiny value, the cool things technology can do, and not worry about the negative impacts.
Right? But now things are changing, slowly. And our, you know, our focus is really to raise awareness, so that companies can put equal effort into thinking about the ethical challenges that come with using any technology, and proactively address them. If you don't know, you're not going to be able to solve for it.
So I think that's the idea behind the report, and I'm actually optimistic that we will see those numbers change in the next annual survey.
Vahe: It's very, very on point. So people do seem to have an inherent mistrust of AI to some extent, but with such ease of access to generative AI like ChatGPT, is that public mistrust going down?
Beena: You know, I think it's a balancing act, and it [00:06:00] all depends on what's out there in the media, the level of awareness, the level of engagement with technology, and people's own experience. We are still in very nascent stages with generative AI, but what we've seen over the last nine, ten months is, you know, that the public is much more aware of AI.
Everybody can touch and feel and play with it, which was not the case before. Like Lara mentioned, AI has been around; it's been used by large enterprises for a while now. It's just now much more publicly accessible. So there are media headlines that are guiding some of those opinions. I think, you know, it will even out and balance out over, you know, the next few months.
And I don't think it's going up or down. I just see the level of awareness increasing.
Vahe: And Lara, Deloitte came out recently with a guide of use cases, I think it's called the Generative AI Dossier. [00:07:00] What are your biggest takeaways from that report?
Lara: Yeah. So, you know, generative AI has implications for every industry.
We'll just start there. And we really are at the tip of the spear when it comes to discovery. Organizations, companies, society at large are going to continue to identify new use cases, but it's really up to all of us to determine whether they're worth pursuing. And that's going to be a give and take. But ultimately, the things that I'll say I'm focused on, my biggest takeaways, are that there are a lot of things we need to really think about when we talk about generative AI.
We need to talk about them openly and transparently. It's things around: what are the implications for cost reduction and process efficiency? You know, what is the impact on our workforce? Are we thinking about reliability and trust? Are we thinking about, and injecting, emotional intelligence into the process?
So there's a lot of benefits that people are already [00:08:00] seeing from generative AI. And that seems to be where the focus is. Cost reduction, efficiencies, growth. But ultimately, we need to make sure we're understanding the implicit risks that come along and mitigating them, not avoiding them. Generative AI is not something that's going away.
It's ultimately about being focused on things like reliability and trust, and also making sure that the capabilities of our humans are evolving as the technology is evolving, so we have the ability to have more of us capable of understanding, leveraging, and analyzing generative AI. And ultimately, my biggest takeaway is we're going to use it where it makes sense.
But hopefully, as a society, we don't use it where it doesn't.
Vahe: When you say where it makes sense, I think generative AI right now seems to be very expensive. And it was also mentioned, I think in a Forbes piece, that it is wearing many hats. Is there a point where it can be wearing too many hats? Do you think there are limits to where it can be applied?
Lara: [00:09:00] Yeah, and so there's a big difference between AI supporting the work we do and AI doing it for us, being the boss, dictating the work we do. We need to be focused on being fair, responsible, and transparent in our adoption. We have to make sure we have the appropriate safeguards.
And make sure it focuses us on the ability to utilize AI in the area it's intended, really as a tool. So there are so many places we could use it, but we have to make sure there are clear guidelines that we're putting in place as we move forward. And we have to assess that there are a lot of hats AI can wear, and we may discover a new hat or a role, but we have to make sure, again, that it's less of a replacement.
It's really about opening up new doors for us. This is not about taking away things that we do, but enhancing the human experience and the experience as a society.
Vahe: Yes, you touch on a very interesting thing. So, if AI is being [00:10:00] used to help regulate industries, going back to the discussion we've been having about ethical standards, is there a potential conflict of interest in the AI itself establishing safeguards?
Lara: Well, of course, I mean, that's why we need checks and balances. This is incredibly important. And it's also incredibly important that we have a combination of AI and the human intelligence. It's not one or the other. We're going to have AI, but we always have to have human intelligence as part of it. This is really important.
We don't make our own rules today, and so we need to make sure we're accountable to ourselves. And the same should be true for AI: it can't make its own rules. There needs to be some sort of tension. So we need to have a commitment to trustworthy development and use of generative AI. It'll only become more important as the capabilities grow and governing bodies shape rules for their application.
Vahe: Hybrid Minds is about combining the power of AI and human intelligence. As we touched on at the beginning of the episode, Beena, are we just at the start of [00:11:00] combining these powers?
Beena: I don't think so. I think we've been on this journey for a bit longer. It's just that more people, you know, the awareness is high.
So more people, more companies, more industries and functions are aware of it. So I think, you know, we've been on this journey, but I do think generative AI can help us accelerate, right? It has become more accessible to the average human being, compared to, you know, just large organizations.
And I envision where AI, if we do it right, will become a very powerful tool in our, you know, in our toolkit, right? Whether it is to write code faster, whether it is to debug faster, whether it is to, you know, find directions or help explain medical terms in simple language. There are so many ways it can become a true copilot for everything that we do. [00:12:00]
You know, you remember there was an old TV show where there was a robot girl who stayed with the family and did all the, you know, manual boring work. I mean, we are not quite at that time, but I think, you know, if you look at it from a software perspective, we are at that point where things that you don't want to do, or that are just tedious work, like, you know, coordinating across calendars, or, you know, making this technology work, those are all things that, if we do it right, and if we focus on the right capabilities, AI can really help do better.
Vahe: Lara, as we unlock the power of AI and IQ convergence, what ethical considerations must be prioritized to ensure that technology serves humanity's best interests?
Lara: We need to start by making sure we're creating something that's transparent. We need to look at this collectively. We need to make sure, you know, back to Beena's point, that we really understand, one, the benefits of AI, which she shared a ton of, [00:13:00] but we also need to understand the associated risks. And by understanding those, we're going to influence how we think about ethical considerations.
So we need to constantly be asking ourselves: does this technology serve humanity's best interests? And there are a couple of lenses we can put on it. One, we can continue to ask ourselves: is it fair and impartial? Is it transparent and explainable? Is it safe and secure? Is it accountable? Is it responsible?
And is it private? From here, we can start thinking about the future of work, thinking about things like how it is going to impact data and technologies, where the future of work, relative to the ability to have critical thinking, comes in, and what technical proficiencies we need to have. All of those things are going to be really important.
We often talk about the importance of humans being able to be a significant part of augmenting the [00:14:00] technology, augmenting the generative AI. But we also need to think about that experience, and how we create people who can do that. And as I said earlier, we need to think about how we make sure those people bring, and complement, what the generative AI is not going to do.
We need to be thinking about things like professional skepticism, our experiences and how they've informed us, emotional quotient and how it influences our ability not just to interpret the outcome, but also to communicate it. So the future of work aspect is really, really important. We also need to make sure we understand what our role is as leaders of society, as leaders of the organizations we're in.
It's up to us. We need to set the responsibilities for what leaders say when they're out talking about this. It's up to us to set the tone and make sure that we are infusing ethics into all of our decision-making. And it's not just about one part of the organization; it's core to our strategy. We can't just talk [00:15:00] about ethics when it's convenient.
We need to have it as an overhang in everything we do, so that when people start to build generative AI at scale, it's really part of something that's front and center.
Vahe: And as we continue to advance in combining human intelligence with AI, do you have any foresight into what is coming? Maybe to both of you, this question.
Beena: I can go first. I think, you know, this is just the tip of the iceberg on what we're seeing with generative AI and AI, right? There, you know, we're going to see similar big technological inventions coming specifically in the field of AI. And I say that based on, you know, the amount of academic research or research groups that are focused on discovering that next big thing within AI.
We could be looking at computer vision. We could be looking at, you know, how images and videos are translated, right? And, you know, how do these technologies combine with other existing technologies [00:16:00] which are maturing, right? How do, say, the metaverse and Web3 and AI come together to create powerful new technologies?
So I think, you know, AI itself is still growing. We're going to see more evolutions there. We're also going to see it combining with other technologies to create even more powerful ways to use the technology. This is a very exciting time to be in this space.
Lara: I'll add to it. I mean, with the excitement that Beena shared, you know, most organizations around the world are trying to grapple with how they find their space in that excitement: starting to build their use cases, focusing on their data strategy, learning where and how this is going to accelerate their firm, accelerate society.
At the same time, they're looking for disruption. So as I look across the ecosystem of businesses, you know, most companies right now are grappling with pace, and moving this at the right level of pace. But ultimately, there [00:17:00] are really significant societal implications that we need to be thinking about.
We need to think about that workforce of the future, and how we create equity to make sure everybody has the ability to be in roles where they can be that human intersection with the technology. And that's a big part of what companies are focused on. Risk has been on company and board agendas for decades now, and over the last decade, the types of things that are creating risk, both known and unknown, continue to go up.
When you put generative AI in, the risk is no longer focused just on the technology itself. It really is now spread across all elements of an organization. So it's about really having a handle, as a board and a C-suite, on what your risk appetite is with generative AI. It can't be "I'm not going to use it," because, you know, generative AI is not going away.
It has to come with an acceptance of: how am I going to use it, and how am I going to make sure I'm mitigating risk? We [00:18:00] spent a lot of time talking today about trust and the importance of ethical AI. It's going to be really important that companies are thinking about how they use that as a risk mitigation tool, and how across their organization they're thinking about that.
So the future is really bright for generative AI, as Beena said, but ultimately, as I see it coming to life and making its way towards the future, it's going to be really important that companies are starting to think about the benefits, the risks, and how do they drive this systematically into their strategy.
Vahe: Very interesting. Beena, you're also the host of AI Ignition, exploring the future of AI in the enterprise. Tell us where people can find that and what it is about.
Beena: It is on all the podcast platforms, you know, Spotify, Apple Music, you'd find it. It's focused on, you know, talking to thought leaders in the AI space.
I have to, you know, address the last question you just asked us, Vahe. Where is the future heading? [00:19:00] So, you know, it's more of looking into the future with AI: where do we see some of the advances happening, and what's going to become real in the next three to five years?
Vahe: I think it's very interesting because it's somewhat a little bit unpredictable, right?
Because things are evolving very, very fast. And thank you so much for doing this. And where can people find you both on social media?
Lara: Lara Abrash, I'm on LinkedIn. Looking forward to seeing all of you.
Beena: I can say that I'm the only one with my name. So if you just search for Beena Ammanath and you find a hit, that's probably me.
So far, that's the only one that exists. Though my cousin has threatened to name his daughter after me. But, you know, just search my name and you'll find it.
Vahe: Perfect. Thank you so much. Thank you for all the insights that you gave in this interview. Thank you very much.
Lara: Thank you for having us.
Announcer: Thanks for listening to this edition of Hybrid Minds. This [00:20:00] podcast is brought to you by Cognize, the first-of-its-kind intelligent document processing company, which automates unstructured data with hybrid intelligence and a data-centric AI platform. To learn more, visit Cognize. And be sure to catch the next episode of Hybrid Minds, wherever you get your podcasts.