Join host Vahe Andonians on the latest episode of Hybrid Minds as he delves into the dynamic realm of AI's transformative impact on creativity and the workforce, featuring Joseph Drambarean, CPO and CTO at Trovata. Discover how AI is not just a tool but a catalyst for personal growth, making individuals more creative, faster, and ultimately more prolific.
As Joseph navigates the evolving landscape of AI, he challenges the common narrative about job displacement, offering a nuanced perspective on how technological adoption reshapes the landscape of human capital needs over time. Gain insights into the exciting future Joseph envisions—a seamless blend of human and AI creativity, where the lines between "us and them" blur, creating a space for unprecedented, uncharted behavior and collaboration.
Delve into the depths of AI's impact on creative exploration as Joseph shares his experiences launching a generative AI tool. Explore the joy and engagement that result from human-like interactions with AI, and how this symbiotic relationship is poised to redefine our understanding of creativity.
Join us for a captivating exploration of the intersection between AI and human ingenuity, where Joseph's expertise and forward-thinking insights shed light on a future where innovation, creativity, and technology converge to shape a new frontier of possibilities.
Joseph: [00:00:00] When you have the low-latency experience of being able to get the satisfaction of the result of any one of those dimensions or modalities, and be able to access it within seconds, this is why I'm so confident that the creative forces that are innate in all of us get extracted, and why the willingness to leap and take risks is so aggressive: we're just naturally built for it.
Vahe: Please join me in welcoming Joseph Drambarean. Working with key Fortune-level brands, including J.P. Morgan, Capital One, Marriott International, Microsoft, Harley-Davidson, and Allstate Insurance, Joseph has helped brands navigate the digital landscape by creating and executing innovative digital strategies and enterprise product integrations that incorporate cloud architecture, analytical insights, industry-leading UI/UX, and technical recommendations designed to bring [00:01:00] measurable return on investment.
Welcome, Joseph.
Joseph: Thank you so much for having me.
Vahe: Joseph, this is a tough topic to start with, but I think you are perfectly fit to answer this question: what do you think the impact of AI is going to be on jobs?
Joseph: Right out of the gate. Not holding back here.
I think the easiest way to answer this question is: in the short term, the impacts will be positive. And that's where I tend to start whenever I think about the impact of AI on productivity, because that's where the obvious benefits play out. If you think of the roles that have been using it the most, whether in copywriting or programming or other areas, the boilerplate of daily tasks can easily be automated, because the corpus of information that has been scraped [00:02:00] through, you know, the entirety of the internet is the basis for a lot of what can be generated.
Because the boilerplate is so easy to reproduce, jobs that may have had a lot of annoying parts, creative blockers and things that just need to be done, can be done more efficiently and quickly, because their starting point is generated by AI. I think that leads to more productivity, which creates more opportunity from a job perspective, because it puts you in a position where you can be more creative, more strategic, and more influential than you were in the past.
Now, the flip side of that, if you play it out over time, is that naturally some will be better than others at that process, the process being the adoption of these techniques and technologies. So if you think about it, we're in the early innings of this technology being adopted, and, you know, OpenAI had an amazing keynote [00:03:00] that just presented even more incredible innovations to the technology they bring forward.
And they're not the only ones, right? Google's been doing the same thing. Apple, you know, is rumored to be working on similar technologies. Microsoft is obviously all in with OpenAI. So imagine the permeation of this technology everywhere; it's in every facet of life.
And then similar to when mobile came out, you know, we, we don't even think about it anymore, but there was a time before having a, you know, an iPhone or an Android device. And as that technology really permeated throughout, you know, all of society, and we got used to the ideas of using apps and got used to the always on availability of information and the effects of push notifications and all of that, behaviors changed.
And the way that work is done changed. Entire economies were created, the gig economies of, you know, Uber and others, as mobile really drove [00:04:00] that revolution. Well, in a similar way, I think that is playing out currently with AI, generative AI specifically, but it'll manifest even more aggressively over time.
Where, if you adopt this technology, it makes you a better version of yourself, ultimately: a more creative, faster, more productive, more prolific version of the skill set you already had innately. If you don't adopt it, and you almost feel an unwillingness to adopt it for a variety of reasons, and, you know, folks have cited so many different issues, whether it's the copyright side of the house, the bias side of the house, all the things that introduce fear into the equation and give you a reason to think, "Oh, I'm going to stay away from it,
I'm not going to adopt this technology." Well, just like with mobile, just like with computing generally, there will be parts of the population, [00:05:00] maybe even generations, that are left behind as this technology permeates throughout society and through the daily life routines of so many individuals. And I think that's why, when you think about it from a jobs perspective, it's not as simple as just saying, "Oh, well, AI is simply going to replace this job,
period, today, because it's just better and more effective." I don't think that's how society works. I think what happens is that over time, there's adoption of technology, and that plays out into how the needs for human capital get met, whether in current job functions that exist or that are redeployed because of innovations or advancements that took place.
And this is extremely common. If you think back 20 years, we weren't talking about cloud at all. There was no such thing as, you know, sysadmins for cloud operations, right? [00:06:00] Today, folks that might have been in traditional database management jobs, or working in IT, would have had to retool their skills and level up in order to be productive workers in a cloud economy.
One where, you know, there are different sets of skills that are needed, skills that still build on those you may have had in prior careers, but nevertheless, you would have had to adapt. And I think that is happening with AI as well.
Vahe: Two interesting things that I want to elaborate a little bit on.
So the first thing is, you said that in the beginning productivity will increase, right? And I find this interesting, because when you look at the last industrial revolution, computers, what we saw there is that productivity declined for about 10 to 15 years. There's this famous saying from Robert Solow: you can see the computer age everywhere but in the productivity statistics.
Right. And it took companies, or, you know, the [00:07:00] whole society, basically 10 to 15 years to adapt to this technology and actually be able to use it in a meaningful, productivity-increasing way. So you don't see that happening here now?
Joseph: No, I think there are similar cycles playing out, which is why I believe we're in the early innings of this whole revolution taking place.
One thing to consider, especially with the onset of computing technology made available at mass scale: that's ultimately what also impacted the productivity gains with the onset of computing, because at the beginning, the types of computers, the applications, and how they would be used in enterprises and in personal computing were very limited.
And it was at the beginning of the curve of innovation. As that curve accelerated, and there were more use cases, more developers, more influence driving adoption, obviously you see [00:08:00] the cascading effects from a productivity perspective. In a similar vein, and I think on a wildly more truncated timeline, because we already have the foundations of mobile, cloud, social media, and so on,
these other revolutions that have taken place since have primed the pump, if you will, in terms of taking advantage of new technology and integrating it into your daily life. We're not starting from a standstill; we're almost going from a running start, especially considering that AI is not new, right?
We've been taking advantage of AI for over a decade at this point, right? Whether it's in the form of buying things and ad tech; we don't really consider ad tech AI, but that's what it is. The recommendations that you see, the things that you buy, and the spookiness of how accurate it is at recommending things you might like: it's all based on the same foundations. [00:09:00]
The manifestation of generative AI is simply a transformation of the use case into one targeting a new mental model, a new way of approaching the same core fundamental technologies, which is why I think it's a running start. Now, similar to that revolution, I would say that you're absolutely right.
There's this initial period where adoption really depends on the willingness to take risk, right? And we're watching it play out, watching how users first interact with these technologies. And, you know, we have a lot of experience with this at Trovata, with users using our generative AI functions.
But even outside of that, in my own personal use of AI, and in others', the first steps that you take are test-and-learn, right? You haven't made a conscious decision that, oh, you know what, AI is going to replace, quite simply, this entire task that I've been doing for [00:10:00] years, and that takes me, you know, X amount of time in productivity value.
I'm just now blindly going to trust this tool to do it for me, right? What happens is you have to go through your own personalized test-and-learn strategy, if you will, of taking risks with the technology, seeing the benefits in the way it's able to respond, and building on that. Now play that out with the 100-million weekly active user base
of OpenAI, just as an example, and think about how those test-and-learn strategies are playing out at mass scale, with the adoption happening in real time, if you will. And that's why I believe the first things we're going to see are the productivity benefits, because of the risk that folks are already taking on the shoulders of having had the experiences of prior revolutions, whether it's mobile, whether it's social media, all the different things we've gone through, [00:11:00] heck, even throw crypto in there and all the risks folks were taking over the last five years.
So I think that primes the pump in a very real way.
Vahe: Oh, that makes total sense. But the second thing that I would like to point out here, and maybe elaborate on a little further: I very much agree with you that we should look at job functions, not jobs generally. I mean, there will also be entire jobs that might be replaced. After all, my favorite example is the word "computer," which used to mean a person who does calculations, and that job is certainly gone forever.
But in most places, the computer really changed job functions. Now, the fact with AI is that there are so many job functions it can potentially touch, or already touches, that it could really be a [00:12:00] massive change never seen before. You could say, obviously, every industrial revolution was the biggest at its time, right? The third industrial revolution was bigger than the second, the second was bigger than the first; I get that. But this one seems very special, because it also touches the definition, or the identity, of humans, right? We identify ourselves as, you know, the most intelligent being on Earth. Now that is certainly in question, and some believe it's already answered that we are not anymore, but at the very least it's in question now.
Right. And that's the first time that something touches our core identity. And so how do you think we will be able to cope with that, and how will it influence our daily lives, basically? That's the question.
Joseph: It's a great question. And the reason I pause is because I think there are [00:13:00] probably going to be milestones that we experience along the way. And, you know, putting on a futurist hat, I guess, and thinking through the perspective of experiencing this now in real time, thinking about my own behaviors, my own thoughts on the topic, and how I've been coming to terms with
a reality where I genuinely enjoy talking to a computer. And what does that mean? You know, where does that line begin and end? And I say that in the context of the personification of an inanimate object, right? What does it do to enhance my life, but also, does it do anything to enhance what it means to be human altogether?
When I think about this problem, you know, I don't think I'll ever be convinced that machines are outpacing human creativity and [00:14:00] the capacity for a human to impact and create at a large scale, because ultimately we created these things that we're talking about. Now, I think what's happening, though, is that the lines are being blurred, because in our creative capacities today we have an added artificial benefit:
these new, you know, arms and legs, new rigging that lets us jump higher, punch faster, run faster, whatever it is, because of the fact that we've created them. And does that mean that every single stage, every milestone of what humanity looks like, is dependent on these training wheels, if you will?
And as we ladder up higher and higher and higher, the lines are blurred more and more between what is human and what is machine, and it ultimately is hybridizing into some new thing, right? Some new [00:15:00] declarative object that is a mixture of the two. And I know that, you know, we get a lot of this philosophical thinking from movies that we've seen, from books that we've read.
There are science fiction writers and philosophers that have been thinking about this for decades, even prior to the technology being created. And I think it's because ultimately we yearn for this type of interaction at our core: being able to create in this way, being able to create almost something analogous to us, right?
To be proud of it. And because we're striving so hard to do that, we're going to want to have a relationship with whatever it is that this thing is, right? And I think it's going to create special behavior, almost behavior that is uncharted, that we don't even understand. And I think right now, what tends to play out is this us-versus-them, when in reality I don't think [00:16:00] that's what's going to happen.
I actually think it's going to be us and them, blended in a way where you can't even differentiate. And that's what I think is most fascinating about how this plays out. Just imagine yourself, if I could put a question to you: how would you feel after 10, 20 years of being used to this technology daily, with it getting better every six months, and you relying on its access to information and its conversational, intuitive interfaces to just create?
If Steve Jobs said that the computer was the bicycle for the mind, what is this? This is like the rocket for your mind.
Vahe: I totally agree. It's even the title of our podcast here, right? Hybrid Minds. Because I absolutely agree that that's the case. It's certainly debatable, though, right? Even though this is my stance, and your stance, there are of course others that see lots of [00:17:00] dangers and a different path coming toward us.
I mean, even Elon Musk was very vocal about the dangers. So yeah, I think it is difficult to predict the future. It's always been difficult. The example I always bring is, you know, from science fiction, right? When we try to invent something in science fiction, like Star Trek, for example,
there's this famous line that says, "Scotty, beam me up." So we can imagine a world where we can defy the laws of physics, but what we can't imagine at that point is that we don't need Scotty standing there pushing a lever up and down, which makes no sense already, right? So it's hard, of course, to predict the future.
And you know, I certainly hope and think and work towards it, the same way you do: that it is going to be a melding, a hybrid kind of thing, which is manifesting in some areas already. In medicine, for example, we can see that the combination of humans and AI produces better results than just a human [00:18:00] or just an AI.
You touched on something else which I think is very interesting: the topic of creativity. Now, to confess, I never thought of it that way, but a couple of days ago I was at a conference in Boston, and while I am, or we are, very much concerned with extraction of information, which really should be absolutely correct, exactly the way it was written, right, they were looking at something very different.
They were looking at hallucination as something positive, because they are trying to find new avenues to follow. And one of the opening speeches was actually about this. They showed six or seven, I think it was six, very famous creative people: painters, writers. And then they said, do you know what all these people have in common?
They have in common that they were regularly using hallucinogenic drugs and were hallucinating all the time, basically. And at that time they were able to produce these [00:19:00] things. So the question is whether hallucination in generative AI, or large language models, is a feature or a bug. And if it is a feature, do you think that's something we should, you know, cope with, deal with?
How do you live with it? And is it really creativity, or is it something else altogether?
Joseph: I guess from a technical perspective, I don't even know if the folks at OpenAI, or any large language model engineer, would ever say that it's a feature they plan for, because in many cases you actually don't know the outcomes that will play out, given the sophistication and the sheer vastness of possible variables, right?
You could calculate it, maybe, but you can never predict the outcome, which means you didn't have a role to play in, you know, it being a feature. Now, it being an unintended consequence that may be a [00:20:00] benefit: that, I think, is a fascinating way of looking at it. And that's actually something I've thought about, and that we have thought about for a while on the product team, and in talking with friends, about the general use case of
using generative AI technology, whether in the form of text or imagery, to be a creative unblocking force. And the thing is, when you're in any industry where you're required to produce content, you're required to create something new on a regular basis.
If you're thinking of it from an engineering standpoint: writing code, writing new solutions to existing problems, or to new problems that haven't even been considered. If you're a writer or a copywriter, whether in fiction or nonfiction: creating stories and narratives that didn't exist before, and doing [00:21:00] it in a way that is monetizable, you know, in the form of books or magazines or whatever you're copywriting for. If you're in the creative arts: painting, photography, videography, all of those different things, and planning for all of those projects.
What's common across all of those jobs, if you will, those creative endeavors, is that the initial moment of creation always depends on some inspiration, some thing that can spark that train, if you will, of creative juice that produces the art and creates almost this unstoppable force of whatever it is your mind is producing.
But that creative block is also the thing that, if you talk to a programmer or a painter or a copywriter, is the most annoying, the most difficult to displace. And we've [00:22:00] spent so much time, as humanity, trying to deal with that problem, whether it's exercise, taking drugs, going out and enjoying something, being under the influence of other things, alcohol, whatever it might be, to alter the state of your mindset
enough that you can begin the creative process and allow your natural talent to take force, right? What's interesting about this technology, even with the hallucinatory tendencies we've seen, is that because we don't know what causes human creativity, but we know when it happens, the possibility, almost the luck surface area, of having something happen within a generative AI experience that triggers creative capacity is much higher than without it.
Right? [00:23:00] So if you think about the natural ways we approach being creative, one of the common ones is asking questions about the thing we're trying to solve. And it's kind of interesting that the entirety of the interfaces we're talking about, when it comes to generative AI, are based on this notion of attacking a problem by asking and prompting the right things, to see if we can find a way to a solution to that problem.
Right? The classic design-thinking model. And that's why I think, ultimately, similarly to when you create art or undertake any creative endeavor, the reason so many people talk about the journey being the whole point is that it doesn't matter if the first iterations, the first things you do, are accurate.
They might be a prototype that is solving for something other than the end accuracy of the thing you're trying to do. They might be establishing something you need [00:24:00] to see first before actually arriving at the true solution, the thing you're striving to create: the end narrative, the right piece of software, the highest-fidelity version of a piece of art you're trying to produce, right?
And that's actually often the case when you look at the notebook of an artist, or the crumpled-up papers of a copywriter who had written, you know, a piece of work, whether it was a book or whatever it might be. You'll see that the variations and the things they went through might not have any connection whatsoever with the final product, but they still played a role
in the final product. Which is why I think the hallucinatory nature ultimately can be perceived as a valuable part of generative AI: it takes your mind in directions it may not have gone of its own [00:25:00] accord, and that creates connections and solutions that might lead you to a new thought that takes you to the final solution.
So I think that's really the most beneficial way I've seen this weird part of generative AI used.
Vahe: Yes. You touched on something else very interesting in your answer: you don't call it a feature because it wasn't planned for. That part, that it is not planned for, I can't agree with more, because in reality we have no idea what we're doing at the end of the day, right?
I mean, I think I heard this quote once: we are growing aliens. So we really don't know. We're just feeding it with more and more data and seeing what happens. And I think this is best documented in the paper "Emergent Abilities of Large Language Models," where they show that it's even unpredictable when and what it will learn.
At the same time, the result of this is that we are able to produce something undoubtedly [00:26:00] intelligent without knowing how we actually achieve it, by doing very fast iterations, right? We do super-fast iterations on very large networks. These large networks, though, lead to a problem, because if you have 1.7 trillion parameters, and now maybe even more, and when they increased the context window size that much, you know, the computational power needs grow like n squared. So the question I'm asking myself is: at some point it gets to
a size, or a need for energy and computational power, that is not sustainable, right? I mean, Microsoft was already playing with the thought of building its own nuclear power plant. So, do you see a chance that we get to a stage where we have the same capabilities, right, the same general capabilities, but with much [00:27:00] smaller networks anytime soon? Or do you think that is just utopia for now?
Joseph: That's a great question. And what I think can inform the answer is the innovation that price economics has forced on OpenAI, for example, just as one vendor of this technology: how the force and pressure of market economics drove them to innovate, to make the model more efficient, to make it burn less energy in their Azure data centers, to work with Microsoft to create better compute stacks that gave them more efficiency and leverage.
And I think the easiest path to the answer to this question is: we're human, so we're probably going to find a way. And the way will be influenced by external factors, not because we want to, but because we have to, because we run out of [00:28:00] resources, or because somebody wants it so badly but is only able to pay so much, and we want that entire mass of people willing to pay.
And it's funny, you know, I hate thinking this way, but I always love banking on human greed to drive incredible innovation. And we're unfortunately still at that stage of this technology, where a lot of the influence on its use cases, and on its use altogether, is being fueled by that, right? Now, we're going to get to a point where the compute capacity of all of our data centers cannot handle the complexity of this problem.
The good news is that we'll know that well in advance. That's not the type of thing you just stumble into, right? They are already seeing the pace of growth of queries, of token utilization, everything that has played out in this mass experiment that at this [00:29:00] point has, you know, well over the size of a country's population using a single piece of technology every week.
I think that ultimately we see all the indicators that tell us how we need to scale data center capacity, at a minimum, in order to account for this technology over the next 10 years. And I think we already see that energy will be a major factor in how this plays out. So my belief is that we'll solve it, because we won't want to lose it, ultimately, and greed will be the reason.
Vahe: Yeah, I think the greed part you're touching upon is very interesting. So, circling back to the very first question: I actually also believe that, in the end, we will create more jobs than we destroy. And I come to this conclusion because I looked at the last three industrial revolutions, and two factors played a big role there.
And both of these factors are still [00:30:00] present. The first factor was human greed: we are always inventing new products. As an example, I bring the Tamagotchi, which is really a totally useless thing that you have to feed; if you don't feed it, it just dies. But it's a billion-dollar industry, right? And there's another example I bring, and, sorry, but I always bring goat yoga as an example.
Right? So there are people doing yoga with goats walking around and over them and on them. You know, I'm not arguing whether there's value to it or not, but it wasn't there 10 or 15 or 20 years ago, right? Somebody invented this, and now it has created jobs, because somebody has to feed these goats, and you have to, you know, clean them and do all these things with them.
And the second factor, of course, is, and here I borrow from Kremer, the O-ring theory. Our processes, right, the ones we use to produce almost anything, are so interconnected that if one thing gets 10 or 20 percent [00:31:00] better, or gets to machine perfection, it puts tremendous pressure on all the other processes, because the output doesn't get 10 percent better, right? The output is still very bad, because if one thing in the whole process chain doesn't work, the output is just very bad, like the O-ring on the Challenger rocket, right? So, because most of our chains are like this, it puts tremendous pressure on every other process that you haven't automated yet, where you're not using AI yet.
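Kremer's O-ring production function, which Vahe alludes to here, models output as the product of the quality of every task in the chain, so one weak step caps the whole result no matter how good the others get. A toy sketch, with made-up task qualities purely for illustration:

```python
from math import prod

# O-ring production function: output quality is the PRODUCT of the
# per-task quality levels q_i in [0, 1]. Quality is multiplicative,
# not additive, so one weak link drags down the entire chain.
def output_quality(task_qualities: list[float]) -> float:
    return prod(task_qualities)

chain = [0.95, 0.95, 0.95, 0.50]           # three strong tasks, one weak one
print(round(output_quality(chain), 3))      # ~0.429: the weak link dominates

# Perfecting an already-strong task barely moves the needle...
print(round(output_quality([1.00, 0.95, 0.95, 0.50]), 3))  # ~0.451
# ...while improving the weak link nearly doubles the output.
print(round(output_quality([0.95, 0.95, 0.95, 0.90]), 3))  # ~0.772
```

This is exactly Vahe's point: automating one step to machine perfection mainly raises the pressure on every step that has not been automated yet.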
And that creates, of course, lots of jobs, because people now have to find solutions to this, right? But the danger that I see, and this is the point I want to pick up: if we achieve what you just said, I mean, that's a typical capitalistic view of it, right?
There's so much pressure, and you innovate, innovate, innovate, and you get the cost down. But at some point you reach a level where the marginal cost is near zero. And when you reach that point, usually what happens, at least in all the places where we have achieved [00:32:00] it, is a huge centralization to just a few players.
Just look at books, right, e-books: zero marginal cost. How many players are out there? Look at music, right? Zero marginal cost now. I mean, you're younger than I am, but I remember a time when we were paying, what, 15 or 20 for a CD, right? And it had maybe 10 songs. And now you're paying, what, 5 or 6, and you have almost all the music of the whole world for nearly free, right?
Because the marginal cost is almost zero. So do you see a risk that if we achieve, when we achieve, a near-zero marginal cost for AI, it leads to a massive problem of huge centralization, because it all clusters together into a few companies that basically have access to this while others don't?
Joseph: Yeah, it's a great question. It's one that I've considered from two perspectives that come right off the top of my mind. [00:33:00] One is that we already have that centralization playing out in the technology space on even simpler things, right? Just having computers that are networked together, that can perform operations, and that are available through web services, right? The cloud, as we refer to it. You would think that would attract more competitive forces, driving more players that could take a significant market share. But really, at this point, is it even worth comparing anyone outside of the big three, right? Microsoft, Google, and Amazon with its AWS business?
No. The thing is, though, that we look at a lot of this technology through the prism of a Western mindset. But the reality is that if you look at it in a global and humanistic sense, there are other areas [00:34:00] on the planet that are also investing heavily in this technology, for different reasons, with different biases and different outcomes and endgames that they're planning for, right?
Think of China, and how they have approached investment in this technology, the role that they want it to play in their society, how it plays into their plan for the next hundred years, and how they're going to supply the compute capacity for all of that. And they have no plans whatsoever, let's just put it that way, to expand that compute capacity to Western countries and make it available as a global enterprise, right? It's interesting that the reach of this technology seems to be totally horizontal, where everyone in humanity qualifies, right? At some point, billions of people on earth will take advantage of this technology in one way or another, and they will be delivered this service at near-zero cost, as you're saying, [00:35:00] in a way where we almost assume that it is effectively a utility.
You know, the same way that we drink water, we have access to connected AI platforms and internet and all of that. The question though is, what's the source? And is the source going to be driven by technology companies, by state actors, by coalitions of state actors that have similar values, if you will?
And ultimately, does that mean that there will be different versions of AI that exist throughout humanity and throughout generations of humanity, right? Ones that have inherent biases, influenced by the political and ideological underpinnings of that society. Ones that are built for hopefully more humanistic reasons. Maybe the protection of our environment, the protection of our resources, things like that, where we can all agree that maybe this thing should not be influenced [00:36:00] by X factors, whatever they might be across different societies. And I think that the shaping of the future of this technology, if you will, is in its infancy, and we know it will have mass scale and will be horizontal across all humanity. How it manifests itself in variations, and for the different use cases that it can solve, that, I think, very much remains to be played out. Because what you're saying is true: I think the cost will ultimately be zero for this technology. But who is driving the force of that cost going down to zero? Is it the state?
Is it technology because of unit economics? I don't know.
Vahe: I agree with you. And I was struck when you said states, because that has actually happened, right? I mean, the UAE invested a lot in Falcon, the French government, the French Republic, invested in Mistral, and so on. So we actually see that really happening.
Yes. Let me, [00:37:00] let me go in another direction, because I know that you have launched a generative AI tool. So how has your experience with that been, or your customers' or users' experience with it?

Joseph: Yeah. Well, first of all, it was really thrilling to launch something where, for the first time, I wasn't certain how it would play out in terms of its adoption and in terms of how it would be used.
We had good ideas, from having played with it on the product side and, you know, internally with staff, and from seeing how it might be used. But similarly to the rollout of ChatGPT, just having it be in the wild, there is this natural progression that takes place. The risk taking that I mentioned earlier. Within the context of the product that we built, we took the intent-matching functions of generative AI, where you can ask it any question and it'll know what your intent is and a possible [00:38:00] solution for that intent. And we married that with the precision of our analytics platform, which can use the financial data that you are storing securely within our platform to find answers to questions you may have about your liquidity, your cash, ultimately your money, if you're a large corporation.
What was fascinating was that the first questions were the obvious ones. The find-me-this-thing questions. It's ultimately a lazy search: I can't be bothered to use a search bar, so I'm just going to ask it, in free language, to find me this thing. And there were literally hundreds of those questions within the first days.
And it was fascinating to see that as almost the crutch of risk taking, right? If it can get this right, maybe I can trust it with a harder question. What we didn't expect were the leaps. The leaps from what is seemingly a very simple question, you [00:39:00] know, find me this thing in this haystack, to multi-layered, complex, and specialized questions, right? Ones where you actually put in enough context through the prompting to tell it: hey, I want you to basically replace this person by producing a very specific type of report, one you would need to know about if you were a treasurer or financial analyst. And I'm not going to tell you what it is. I'm just going to tell you what the acronym is. Go figure it out and make it for me. That is where the gears started to turn in our heads. Because what we started to realize was that the going from zero to 60 miles an hour, and from 60 to 100 miles an hour, is so compressed, unexpectedly so. And I think it's just because of this same thread of conversation that we've been having: our natural tendency as creators, as a [00:40:00] species, you know, is to fiddle.
That's the easiest way to put it, I think, for me. When you have something that you know you can tinker with, you want to see: well, what if I ask it in this way, or what if I rotate it in this way? I wonder what happens if I apply this shade to it, or if I shake it and then look at it?
When you have the low-latency experience of being able to get the satisfaction of the result of any one of those dimensions or modalities, to have it at your fingertips and be able to access it within seconds, this is why I'm so confident that the creative forces that are innate in all of us get extracted, because of this pattern. And that's why the willingness to leap and take risks is so aggressive, because we're just naturally built for it. We see, and we don't need to see much, right? We'll see: oh, you were able to do this? [00:41:00] I wonder if I can stump you. Let me ask you to do this really hard thing. And then we see: oh, that wasn't really that hard for it.
Okay, let me see if I can really stump it now. And that process of iteration, we don't even realize, is a test-and-learn strategy. We are finding its limitations naturally just by tinkering with it. And that's what's been playing out. What's interesting is that not once has the problem of accuracy or hallucination, in the sense of an answer that is wildly off, come up as a detractor from using the technology. If you had asked me on day one, I genuinely worried, and I even wrote articles about it, because I thought this would be the main issue, right? That folks would be worried about: how can I trust that this is going to calculate the things that I'm asking for in the best way? Even though we [00:42:00] did everything we could to try to guarantee that calculation on our platform.
What's interesting is that that's not what it's being used for. It's being used for this creative capacity. And I just think that that's why this is so fascinating. And this is within the microcosm of corporate finance, which tends to be a hyper-underserved community when it comes to technology. And imagine if this community starts to get used to these types of tools. You know, it's similar to third-world countries that take the leap from not having internet to all of a sudden having 4G in one generation. And it just completely transforms how they do business, how they interact with each other, how they communicate, all of it.
And I see the same thing kind of playing out.

Vahe: That's wonderful. I want to touch on one last question here. You said something very interesting, which is that, you know, people are playing around with it, and I agree with you. I think it is really a perfect gamification. Why? Because the [00:43:00] anticipation of a reward is valued more highly in humans than the actual reward. That's why it's important to have a variable reward, right? That's why Facebook is so addictive: every time you open your wall, your newsfeed, there's something different in there, right? It's not always the same thing. And here you have the same thing on steroids, because it is always answering with something that you can't really anticipate, right? You don't know exactly what the reward is going to be, but you know there is a reward coming. And that's very addictive. So you literally play with it more than you work with it. And that leads to my question. I've found that even though the actual process sometimes takes longer than if you used a different technology, like even a search bar with just a normal search, this is fun, whereas the other one is not fun.
Is that something that you're experiencing too, or would subscribe to?
Joseph: Absolutely. And I think we intuitively know it when we see it. When things are delightful, when things are built to make us smile, when they're built to add a flavor of intuition to what we're doing, it's a naturally more attractive interaction in everything that we do. It doesn't have to be finance. It's in every part of life, right? And I remember the first time that I heard Jony Ive, who was, you know, the head industrial designer at Apple, describing the design process of working even on the pieces that people will never see. But for those who do see them, they will innately understand the delight and the quality, because they see the effort that went into it.
And I think that that's something we think about often with this technology: you are delighted because of how it's been set up, [00:45:00] right? Generative AI has many manifestations. It doesn't have to be a chat interaction. It doesn't have to be, you know, an upload tool for an image generator. There are so many ways that you could approach the problem. What's interesting about the conversational part of it is that it's tapping into something that, as humans, we are just so wired to do; we're wired to interact in this way. And as you said, the reward, the delight that we get, is that the result is so similar to what we would expect of a real person.
And it's almost like this, I guess, to use a bad analogy, right? I'm a dog lover. I have two dogs, actually. And one of the most fun experiences you sometimes have as a dog owner is training your dog a new trick, a trick where they're mimicking human behavior in some way: you know, shaking your hand, or turning around when you tell them to, [00:46:00] or something like that. And the delight that you're feeling is: oh my gosh, they're doing something that a human could do, and they're not a human. It's a dog, and it's able to do this thing that I taught it. If you could, you know, bottle up that feeling, it's a similar thing when you're interacting with this type of technology, because it's almost like: man, I can't believe it. It's literally using the exact type of language that I would expect somebody to use if I were chatting with them on Slack. How is this possible? And it's that, I think, the reward of getting that feeling every time you use this technology, that far outweighs the time, right? Because of course, if you use a search bar, it's probably way faster.
It might take more steps in some cases, but I think the delight factor is really a strong component.
Vahe: Joseph, this was so refreshing. Thank you very much. Thank you for your insights, and thank you for this fun, enlightening talk. And [00:47:00] I wish you continued luck on your path with generative AI, for your company, of course, and for yourself.
Thank you for this.
Joseph: Thank you so much for having me.
ANNOUNCER:
Thanks for listening to this edition of Hybrid Minds. This podcast is brought to you by Cognize, the first of its kind intelligent document processing company, which automates unstructured data with hybrid intelligence and a data-centric AI platform. To learn more, visit cognize.com. And be sure to catch the next episode of Hybrid Minds wherever you get your podcasts.