The Real Danger of AI in Finance - Bias, Monitoring & Compliance Cracks

· 47:14

Guest: Dr. Edgar Lopez Rojas & Valery Chua, Founders of Revizor

AI risks, artificial intelligence threats, and risks of AI in finance are evolving faster than regulation can catch up. In this episode of The Curiosity Code, we explore the hidden flaws in AI monitoring, the systemic cracks in artificial intelligence risk & governance, and the fast-rising use of AI in financial crime networks. Joining us are Dr. Edgar Lopez Rojas, founder of Revizor, and Valery Chua, an advocate of ethical AI, who expose the blind spots.

Transcript

Alex: Hi everybody. We're recording another episode of The Curiosity Code from London, and we're getting into the hard truths about AI and finance: the hidden risks no one is talking about, the traps that blow up during POCs, and why real monitoring and validation are now survival tools. Joining us are Dr. Edgar Lopez Rojas, founder of Revizor, creator of the FinCrime vaccines model for AI-driven financial crime prevention, and a global voice of ethical AI; and Valery Chua, head of business development at Revizor, helping firms stay ahead of tough new standards like the EU AI Act. Welcome to the show.

Edgar: Thank you, Alex.

Valery: Thank you.

Alex: Let's begin the conversation with a bit of background story. Edgar, share how you came to the point where you're driving Revizor.

Edgar: Yeah, I mean the story is long, so let me try to summarize it. I'm originally from Colombia, so I grew up in the Colombian public school system. You know, there was a lot of fraud and money from drugs, so I saw basically how dirty money changed our society, and I wanted to do something about it. So I did my research in the area of financial crime analytics in Sweden. And a few years after, you know, the UK has very similar problems in terms of money laundering and other crime, but at a much higher magnitude. So I came to work on a project with the FCA, and I just realized, you know, this is something that people needed. So I founded my first company, FinCrime Dynamics, on that, and two years ago I exited that company and was thinking about what's next. Obviously, with ChatGPT coming in and all the penetration of AI into financial services, my first question was: can we trust this? And that's when the idea of Revizor came in. And lucky for us, we got support from Innovate UK, and that actually made things a lot easier, and that's why we're here talking to you.

Alex: Awesome. Let's dive into the FinCrime Dynamics research. You exposed systematic flaws in how AI flags financial crime. From your point of view, what invisible biases in transaction data are most dangerous at the moment, and why?

Edgar: Yeah, I mean, in the area of financial crime, it's been several years that we've been making big efforts. There's a lot of money being spent on it, and we're still not getting it right. So my previous company started to use the power of synthetic data, and I'm talking about synthetic data as in computer-generated data that we can tailor to our needs. And if we move very quickly to the problem of bias, I think the most dangerous bias is the one that we don't know about.

Alex: All right?

Edgar: And you know, this unconscious bias is something that is around us. It's something that people are not aware of. But sometimes when we realize, let's say, that we have a gender bias, or a bias around ethnicities or something, then we can practically start doing something about it. And that's why synthetic data plays a role: it can highlight this, say, hidden problem and make us aware, so we can take steps towards fixing it.
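To make Edgar's point concrete, here is a minimal sketch of how synthetic data can surface a hidden bias. Everything in it is illustrative: the demographic attribute, the amount distributions, and the fixed-threshold "model" are invented for the example, not taken from Revizor's or FinCrime Dynamics' actual tooling.

```python
import random

random.seed(42)

# Hypothetical synthetic transactions: an amount plus a demographic
# attribute the detection rule is *not* supposed to depend on.
def make_synthetic(n=10_000):
    txns = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        # Deliberately give group B slightly larger typical amounts,
        # mimicking a structural pattern hiding in historical data.
        base = 120 if group == "B" else 100
        amount = random.expovariate(1 / base)
        txns.append({"group": group, "amount": amount})
    return txns

# A naive "model": flag any transaction above a fixed amount threshold.
def is_flagged(txn, threshold=300):
    return txn["amount"] > threshold

txns = make_synthetic()
rates = {}
for g in ("A", "B"):
    subset = [t for t in txns if t["group"] == g]
    rates[g] = sum(is_flagged(t) for t in subset) / len(subset)

# Disparate flag rates reveal the bias the rule inherits from the data,
# even though the rule never looks at the group attribute directly.
print(rates)
```

Because the data is generated, the "ground truth" is known, so a disparity in flag rates is unambiguous evidence of inherited bias rather than a quirk of a real-world sample.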

Alex: So why, in your opinion, are current models and the market still blind to that? I mean, as you said, bias that we don't know about is a very interesting problem to work with. So in your opinion, is the root cause the synthetic data, the quality of data, or is there something else?

Edgar: Obviously synthetic data doesn't solve every single problem we're talking about with bias. But it's good that you asked why we still have this, why we still live with this. And I think that, among many possible answers, the one I think about is that there is a huge disconnect between the data scientists and the domain experts. The domain experts know pretty well how to address certain problems in financial crime and how to detect financial crime. They have some internal programming in their brain that carries inherent bias from their experience. And some of this bias, don't get me wrong, is not all negative. But not knowing about those biases is what makes them dangerous. And the people that apply the controls, the data scientists, get a little bit blind on the data, and then, without a deep understanding of the domain, it's a dangerous tool. They can just place blind trust in AI.

Alex: All right, Valery, I'd like to switch over to you for a bit. You're on the business side of things at Revizor, and you talk a lot with the firms who are adopting AI compliance tools. How do you frame the risks of bias amplification in fraud detection? And do you believe that most firms are genuinely prepared to tackle this, or are they still underestimating these systematic risks?

Valery: I think they're still underestimating this risk, even though the financial institutions especially are very well established in, you know, risk mitigation and all the GRC standards and compliance. But with regards to AI, it's still at an early stage. Even though a lot of firms are implementing AI systems, we find that they are still keeping a wait-and-see mindset about whether they are meeting these AI risks and putting mitigations in place to ensure that they are compliant. So the EU AI Act will be one of the enforcement mechanisms that pushes firms to build more awareness and to implement the AI compliance side. Edgar, what do you think?

Edgar: Yeah, I mean, totally agree with you.

Valery: That's why you came up with Revizor.

Edgar: Totally agree with you.

Valery: We see that there is, you know, a need for that.

Edgar: And I have to say, we're still making big efforts towards getting a solution that works, but I would say the perfect solution has to come from a community. That's probably the bit I wanted to share there.

Alex: Yeah, I think anything AI-related these days is evolving so fast that it's not a single-entity problem, in a way; it's more of a community effort to come up with a way to govern it, how to work with it, how to make it work for everybody. Speaking about that, we've seen many new solutions in the market, and large firms usually address them from the proof-of-concept point of view, which is a great way to introduce something novel into the process and test it out. What ethical shortcuts have you seen in those POCs that look harmless at the beginning but could explode in a full-scale deployment?

Edgar: Yeah, that's a really good point, Alex. So I'm an entrepreneur myself, a serial entrepreneur; I'm on my second company and I have been in several accelerator programs. And the first thing they tell you is: craft your value proposition, what problem are you going to solve. But they never tell you what the ethical considerations are; those can come a little later along the journey. So most entrepreneurs are focusing on solving the problem that brings the value, but not on the side effects of solving the problem. And I think, when we start thinking about AI-first solutions, if we put the ethical concerns in before we actually put this in front of a customer, we are being proactive about it. That's probably one of the biggest issues with the POCs right now: we don't have enough time to actually prove or intensively test these ethical AI implications. Even big organizations still aren't clear on what ethical AI means, and if we go even further, third-party providers each have their own view of what ethical AI is. So the effort we're making with Revizor is to try to bring the industry towards a unified effort, towards making AI a force for good for people and businesses.

Alex: I think it's also a cultural sort of problem in the startup community, because if we look at how the startup community has evolved over the past, call it, 20 years, we've observed concepts like the lean startup: testing, validating the ideas, making sure that you get early clients to help with the cash flow, and all of that before you deploy a full-scale product. So it was really focused on the business aspect of starting a new venture and running and testing the idea. While here we're dealing with a completely different beast. In the previous episode that we recorded here, we were talking about financial modeling and how it's really difficult, challenging, to even calculate ROI for these AI-powered solutions, because there is not enough data to even convince investors. And I think what we're talking about here is another aspect entirely, which in my opinion seems to be undervalued when we look at AI-powered solutions, because we as entrepreneurs want to go faster, launch faster. And yeah, I find it really interesting what is happening in this direction.

Edgar: I totally agree with you, Alex, and I'm a true believer that market forces will help us create a differentiator, a winning criterion for those companies that are using ethical AI as one of their selling points. But not only as a selling point: also actively engaging with communities like the one Revizor is building, which will help them get a cultural overview, an organizational overview, regulatory frameworks and best practices. Now we have standards and regulations that can help us to basically get it right. I don't think anybody wants to be the ethics police, but we can make a lot of effort towards creating a better society because, as I mentioned before, we want AI to be a force for good.

Alex: So we're sort of getting into this subject, but let's talk specifics. If you were advising a new AI-powered startup, what would be your advice? What should they keep an eye on from the ethical point of view before the actual launch of a POC?

Edgar: In my early days I was an engineer, and I just remember this from back in the classroom: errors that are caught at the design phase cost little; when you start implementing, they cost more; and when you deploy and put them in front of the customers, the magnitude can be a lot higher. I think it's exactly the same with what's happening with AI. If we're waiting until we put our tools in front of the customers to try to understand whether they like the ethical AI or not, I think we're doing it wrong. So that's the key point: start addressing this early on. And I know there are a lot of tools in the market, including the one we have, to help startups with a cost-effective proposition.

Alex: I'd like to chat about monitoring systems that are in use at the moment in banks or financial firms. What frameworks or tools would you recommend and what's still missing in the current tech stack that these firms are using?

Edgar: Yeah, well, you know, there's no standard solution, but many organizations follow some GRC, I would say, philosophies or ways of working. The G stands for governance, the R for risk and the C for compliance. Everyone interprets this in a very different way, so every vendor has their own flavor. But one thing I have seen happen very commonly is that compliance plays a big role in financial services and sometimes it is not addressed properly within the systems, because many, many people still use compliance as a checklist. And then the effort that you need to make to actually tick this box is not properly addressed. That's one of the things I think we're missing. And it's really good that you touched on the word monitoring; I would add ongoing monitoring, because it's something that we are probably missing in the tech stack. AI brings new challenges. Traditional software systems were an algorithm: you have an input, you have an output, and, going back to testing, you try to test the space of inputs to understand the behavior of the outputs. But with AI, the behavior of the same input today is not the same behavior tomorrow, because the data is changing. AI feeds on data, and there are many models that keep learning over time as they ingest more data. That's why we need to keep up online monitoring, because some systems carry high risks like, for instance, data poisoning. What if a bad actor trains the system along the way to behave in a different way?
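As one concrete example of the ongoing monitoring Edgar describes, drift in a model's score distribution can be tracked with the population stability index (PSI), a metric widely used in credit and fraud model monitoring. This is a sketch, not Revizor's method; the 0.1/0.25 thresholds are a common industry rule of thumb, not regulatory values, and the score distributions are simulated.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Common rule of thumb (illustrative, not a standard): < 0.1 stable,
    0.1 to 0.25 moderate shift, > 0.25 significant drift, so investigate.
    """
    # Bin edges come from the baseline (validation-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) in empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at validation time
stable   = rng.normal(0.0, 1.0, 10_000)  # live scores, same population
drifted  = rng.normal(0.8, 1.3, 10_000)  # live scores after the data shifts

print(psi(baseline, stable), psi(baseline, drifted))
```

Run periodically against live scores, a check like this catches the "same input, different behavior" problem Edgar raises: the model has not changed, but the data feeding it has.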

Alex: So it's quite a complex governance solution, in a way, because you need to monitor so many different points throughout the system. Is there a solution in the market at the moment that could do that, that you've seen at production scale, enterprise level?

Edgar: I have to say nobody cared about that two years ago. After ChatGPT started to come into play, obviously regulation started to play a big role; the EU AI Act is forcing firms to think about this. Very recently I spent a couple of days at the FCA. They launched an initiative called AI Labs in December, and we were actually part of that program; we had the opportunity to present the Revizor solution in front of more than 300 attendees at the FCA. What I find from the regulatory bodies is that they are taking this seriously, and when they start to take this seriously is when the market starts to get ready for these solutions. That's why solutions like Revizor, and a lot of other tools in the ecosystem that can be competitors or just complementary solutions, are emerging.

Alex: A question to both of you. In your opinion, will the regulation that is about to roll out in the EU and UK help AI innovation, or will it actually stall it? Because there is a perception in the tech world, when comparing the US and EU markets, that Europe is regulating while the US is innovating. So it's interesting to hear your opinion about the regulations.

Valery: Yeah, I have actually attended a lot of events about AI regulations, and there are two sides to the debate: is it affecting innovation, or will over-regulating keep the tech economy from expanding? So I see that we should strike a balance. When the compliance is there, we have to follow it. But when there is innovation, we need to work out how innovation can work together with compliance as well as with governance. Governance actually helps to ensure that we are going the right way and not the wrong way. And with compliance, and with support from all the communities, I think we will be able to enter that era where AI is going to be part of everything while we also ensure that we are in compliance.

Alex: So.

Valery: The EU AI Act is there and it's already in force in the EU, so we have to follow it. And in the UK market, we see that there are actually a lot of clusters everywhere, and the UK will also follow suit, because the UK also trades with the EU. So the EU AI Act will affect all these institutions. Yeah. Anything to add?

Edgar: Yeah. I mean, I'm a little bit biased here because, as an entrepreneur, regulation makes my life easier: it creates some sense of need for action in companies, demand from business. So there is a lot of demand for these solutions. But I never started Revizor as a way to only fulfill compliance; I think regulation is never enough, and especially in the early days regulation is very flexible. There are very interesting points in the regulation that help us understand, for instance, prohibited systems, systems that should just not be in the market. Unregulated jurisdictions can have those, and that's a high potential risk for society. So that's probably one of the biggest risks that less-regulated jurisdictions like the UK and the United States should start to address. On the other hand, I think innovation should never be stopped by compliance; it should be empowered by compliance. Compliance always brings barriers to the market. Imagine you're a financial institution and you have 100 companies knocking on your door: when you have compliance in place, you at least have some way to filter out the companies that are not trying to do their best. But when the system is unregulated, like in the United States market, you need to spend a lot of effort trying to understand what the key differentiators are. And we believe, also for those jurisdictions, that what we offer is not compliance but proactive adoption of best practices of ethical AI, responsible AI, trustworthy AI. Because the biggest problem we have in the system is not whether you're compliant or not; the biggest problem is whether you trust the AI system for your organization. Right?

Alex: What I find interesting from your points is: who would be given the power to define what's right and what's wrong? And another thing: it sounds to me that, yes, regulation creates structure and a framework and sometimes, as you said, demand for business, so for entrepreneurs it kind of gives you a spotlight of opportunities to create products. But at the same time, in my opinion, it also creates an unbalanced market, in the sense that one geography can actually innovate without looking over its shoulder at the regulators, while the EU market, for example, will be really restrained from innovating because they have to follow the regulations. So it's an interesting balance, and I'm curious to see how it will work out.

Edgar: I think you probably met one of my colleagues, Denise, who mentioned that more AI is not better AI. I will adhere to exactly that: it's probably better to have a curated list of companies that are actually following good AI practices than just a magnitude, tons of companies that you just don't trust. So that's probably my opinion on that. And I believe market forces will filter out these companies, but in regulated environments they can be filtered out faster.

Alex: Let's continue the discussion on AI monitoring. Valery, you're talking to lots of businesses from the business perspective. In your opinion, what's the biggest cultural or operational obstacle when you're trying to convince financial firms to adopt real-time AI monitoring?

Valery: Yeah, I just want to echo what Edgar has shared: AI is not just fixed, it is actually an ongoing process. So if a firm thinks that AI is "set it and forget it" at the first phase of implementation, then as time goes on, AI actually needs to be reiterated, and if they are not aware of that, that's a blocker. They need to be aware that they have to constantly monitor AI; it's an ongoing process. So I feel that this is one of the blockers. And in terms of culture, I think it's also the mindset of the environment and of the ecosystem: their readiness to ensure that they are aware that the AI has to be in compliance. Yes.

Alex: Do you have anything to add about AI monitoring?

Edgar: I mean, if you don't have it, you're at risk, basically. That's the only thing I would add on that. One of the reasons why we are focusing on financial services is that it's a highly regulated environment where the appetite for risk is very low. They will probably avoid working with companies that are not compliant, or that don't have everything in place, rather than give the opportunity to companies that are just emerging in the market without any, say, proper ethical framework behind them.

Valery: And also, a lot of companies think, oh, we are small, so we are not affected. Actually, to me, big or small, if you have to be regulated, it's just like accounting, it's just like GDPR: whether you are big or you're small, you have to follow the regulations.

Alex: Do you see a high rate of adoption by financial institutions implementing AI monitoring systems, or not really?

Edgar: Say when getting there, I had been in many events at the SCA and they have a auditorium that has a capacity for like 300 people. And I never seen that auditorium full until they started to cover the topic of AI. When they covered the topic of AI, 400 people register. They had to open another room and stream the what was happening. So I had to say there is a lot of interest in the industry to see what the regulation posture is around AI. They want to have some questions of how do we ensure that we're doing our best efforts not only in terms of compliance, because as I mentioned before, compliance is just a minimum that you can do, but also thinking about reputational damage. Imagine that you're using AI to take fully automatic decisions and you don't understand how it works. That's a very high risk for institutions. So some survey from the bank of England and the FCA from November 2024 shows that 47% of the companies have a partial understanding of what AI is doing. There is an 18% of the industry that says we don't understand very well what AI is doing. And what I'm more afraid than the rest, which is like 30 something percent thinks that they completely understand AI. That's probably, I think I go back to the question originally about the most dangerous bias is the one that you don't know and that's if they think they're doing the best. I think that's probably some of these companies will suffer the repercussions later.

Valery: Yeah. And I see that awareness is rising, but adoption is still not there yet. So that is where we are: in the process of educating the community.

Alex: I think it also correlates with how innovative firms are in general, because in my opinion the financial services industry is quite conservative from the technology point of view, so it takes a while to get into technology trends. Yes, fintechs and small startups are there, that's why they're startups, but for large banks and large institutions it will take a while. And what I find interesting with AI in particular is that it's such a dynamic technology. I'm in the technical world, I work there every day, and I find it really difficult even for myself to keep up with what's going on; it's something new every day. And I'm still struggling to understand how to implement an AI monitoring system that is as dynamic as the AI technology it has to track. I find it quite challenging even from the technology point of view, from the conceptual point of view. If we as technologists cannot catch up, how can you design a monitoring system that keeps pace?

Edgar: Let me try to answer that by telling you: we will never do it, because AI is growing at such speed and human capabilities are very limited. What we are trying to do is create tools that enhance human capabilities, to try to catch up or at least address the most serious problems. And that's what the regulation is trying to address too, for instance systemic risk: systemic risk is when things are connected and you change something that creates a chain effect that could be very big at the end. So we cannot catch up with the speed at which AI is moving, but we're trying to do our best. And if you think about the Terminator movies: in the first movie they send a human to stop a Terminator, and the poor guy died. In the second movie they were probably a little bit cleverer; they sent another Terminator. But that Terminator was not, I would say, the perfect tool for the humans either, so John Connor had to instruct it: you should have these guardrails, don't kill people, follow these instructions. Basically, you need to train another AI to help fight the evil AI, if...

Alex: ...you want to call it that way. Right. Is this the fundamental concept behind Revizor?

Edgar: I would say I'm a big fan of James Cameron's franchise, and it inspires me, basically, to stop Skynet from happening.

Alex: That's quite interesting. So what I'm getting from this discussion is that we always have to accept that there is a risk, because technology evolves faster than we can control it. So from the risk management point of view, we always have to accept the risk and take every precautionary measure possible to minimize it. But it's always there.

Edgar: Yeah, I have to agree with that. But I think the beauty of this is that there are a lot of unknown risks that we haven't seen yet. So what I will add is that we need to learn from what is happening, learn fast, and adapt. That's basically what we need to do. And we have to assume bad things will happen in the future with AI, and I hope it's not the Skynet effect, but we need to learn fast from what goes wrong, fix it, and move on.

Alex: Edgar, I'd like to talk a bit about fraud systems. And when you look at traditional AI validation processes used by banks, what do you find most dangerously outdated?

Edgar: Well, I think one of the biggest frustrations I have seen in the finance industry for several years is that they're trying to run a race with the criminals, with the fraudsters, and they are losing, miserably. The technology the industry was using 10 years ago, let's say rule-based systems, was effective and efficient; then they started to use AI to stop fraud, and there were some peaks where we were seeing progress, and then the criminals adapted and we were down again. And now what we have seen with AI is that it's creating a whole new vector of attack for criminals to abuse financial services, and unfortunately we're not ready. I have seen, for instance, colleagues and friends in my LinkedIn network posting how easy it is now to get a picture of a fake passport that passes the KYC checks of many banks, to create a completely anonymous identity. That was not happening 10 years ago. So what I'm afraid of is that we will have to catch up somehow, try to learn and adapt, and see how we can mitigate these evil uses of AI, if you want to call them that.

Alex: Yeah, I've also seen lots of posts on social media about how real the receipts generated by AI look. And that's basic stuff, right? If this is really easy to do, then I can't imagine how much is really going on in the fraud environment at the moment. And as we speak, the systems that used to detect fraud based on machine learning technology probably cannot keep up with the pace of AI development.

Edgar: Yeah, exactly. And technologies that we used to trust, like audio and video, which were actually legal evidence in court: nowadays we are doubting them, we don't trust them fully, because there is a lot of fake information out there.

Alex: We talk a lot about AI, right? And fundamentally, behind each AI solution there is an AI model. I would address this question to both of you. What cracks are you seeing in how firms are deploying these models and validating their performance? We talked about the ethical aspect, we talked about the risk aspect; let's try to summarize it and see if, in this conversation, we can come up with some sort of guidance for listeners who are implementing AI in their firms and launching their own models.

Edgar: I mean, we could probably do a full podcast on that. But just to summarize it: AI is only as good as its data. Many of the biggest problems come from companies not having a proper data strategy. And now, with the GDPR regulations, there are a lot of issues with personal information. There are a lot of third-party providers whose training data is just not enough, and then they carry systemic bias based on what they trained the models on. So those are probably the ones I can summarize: not having the right data is a big issue.

Valery: And also, I see a lot of companies are actually appointing a specialist for AI adoption, and I think that will also help to focus on ensuring that the whole company's AI systems are implemented appropriately and that people are educated appropriately.

Alex: I totally agree with you, Edgar, about the data and the quality of data. Though I have a question: do you think there could be cases where there is too much data? What is enough? Or is it never enough?

Edgar: Yeah, I mean, the dream is to have all the data in the world, basically, and to have access to it. But I think the challenge of finding the relevant data matters a lot. We have all seen that hallucinations happen because unstructured data creates ambiguity: one document can tell you this is true, another document can tell you this is false, and then the system just chooses. Generative AI is basically a statistical system, and it chooses whatever comes up at the time with a pseudo-random model. So it's probably not about having too much data, but about having the right kind of data. And there are actually metrics to measure that; one of them is called entropy. For instance, if you have a dictionary with only one word, the only thing you can say is that one word. But if you have a lot of variety, the entropy is a lot higher, and then you have a lot more information; valuing information is very related to entropy. Low entropy means that everything is the same; high entropy means that you have a lot of variety. Those are the kinds of things we can start using to assess the quality of the data we have. For instance, if the entropy doesn't go up as we add more data, it means we have enough data, or at least that the extra data is not adding information. So we can keep adding data and check whether it changes the entropy, whether the variety changes, and that helps us understand whether we are getting better data and not just more data.
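Edgar's entropy heuristic can be sketched in a few lines. This uses Shannon entropy over a single categorical field; the toy datasets are illustrative, and a real pipeline would estimate entropy per field (or jointly) over actual records.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (in bits) of the empirical distribution of `items`."""
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A "dictionary" with a single word carries no information: 0 bits.
one_word = shannon_entropy(["one"] * 100)

# Duplicating a dataset adds volume but no variety: entropy is unchanged,
# which is the signal that "more data" is not adding information.
small = shannon_entropy(["a", "a", "a", "b"])
duplicated = shannon_entropy(["a", "a", "a", "b"] * 10)

# Genuinely varied data raises entropy: 4 equally likely values give 2 bits.
varied = shannon_entropy(["a", "b", "c", "d"])

print(one_word, small, duplicated, varied)
```

The practical loop Edgar describes is then: add a batch of candidate data, recompute the entropy, and keep the batch only if the entropy (information content) actually moves.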

Alex: Have you faced any challenges at Revizor with quality of data or amount of data?

Edgar: Not yet. We're still at an early stage, but we're learning day by day. I think one of the biggest issues that startups suffer is that we don't have enough data, so I wish that were my problem. But I have heard a lot from financial institutions, especially when we raise the topic of synthetic data, that they think they have too much data. And I just go back to that cartoon of the caveman trying to push a rock, and then someone comes along with a wheel saying, use this, and they say: no, no, I'm too busy to think about using a wheel. I think that's where synthetic data has such power: it can help you see what you currently have, but also enhance and augment what you don't have.

Alex: I'd like to wrap it up with a discussion about ethical AI in finance. You've been in the industry and in the community for a long time, so you've heard many conversations. What is the biggest myth in ethical AI that drives you crazy?

Edgar: Oh, I'm not sure if it's a myth that everybody in the community shares, but I have heard many people say that AI is fair and unbiased because it's basically using the data that it sees and then giving you an unbiased opinion on that data. But I have to say: the data was put there by someone, and that's the myth. AI doesn't choose the data; we do. We are choosing the sources of data, and that's where we're creating these biases in AI.

Valery: Yeah, we live in a very biased society, and I see that this biasedness will always be there. That's why we need this: even AI is biased, so building that awareness along the way is very important.

Alex: So how do you solve this bias aspect of ethical AI, if the data the model was initially trained on skews it to be biased?

Edgar: I've mentioned a couple of times today that it's not a one-person job; it's more of a community effort. We need to start bringing into organizations the discussion about what we stand for, what our values are, what our AI governance policies say about how we should work with AI. Once we start making this community effort, we will learn from each other. There are a lot of organizations bringing in people like you and me to have a say on what ethical AI is. One of those is, for instance, ForHumanity. I've been a fellow of that organization for a while, even before I founded Revizor; I think it was one of my inspirations, because the founder thought: my kids are growing up fast in a world surrounded by AI, so how do I make sure that AI is actually helping them to be better individuals in society and not just making them follow whoever is manipulating the data? Going back to that: there should be community efforts to create these criteria, and there should always be a human in the loop who can address these issues. And there are some, say, simple measures, like having a kill switch for AI; those kinds of things need to be in place.

Alex: Have you seen big development in the communities that are working on this actively, or is it just at the very beginning?

Edgar: So, coming back to ForHumanity: it started with a couple of people who were just concerned about this during the pandemic, and it grew exponentially to more than 10,000 members; I think nowadays it's more than 50,000 people. There are established chapters in the EU, in India, and in other jurisdictions, and it was originated by someone in New York who just thought: I need to do something for my kids.

Valery: And also, I think here in London the London AI Hub is also building the community. So a lot of companies, and even communities, will be creating more events, more kinds of courses on AI, for people to be able to pick up that knowledge of AI.

Alex: Yeah, I've seen many courses and training materials. I still think the ethical aspect is not covered enough, because, as we said at the start of the conversation, the startup business framework is still: hey, I have this technology, how do I make more profit with it? And the ethical aspect, I think, is a little bit undervalued. But I'm happy to hear that the community is growing, and I encourage the listeners and subscribers of the podcast to join the community and contribute to the movement, in a sense.

Edgar: Yeah, I mean, I appreciate that. We're building a vibrant community around AI testing, around having the proper evidence. Not just talking about ethics for the sake of talking about ethics, but having proper oversight of AI is what we want to promote. And we want a lot more companies to use our tools, so reach out to us if you want to be part of this community.

Alex: Excellent. Valery, Edgar, thank you very much for being on the podcast; it's been a pleasure talking to you. And to the listeners: thanks for staying with us, and don't forget to hit the like button, subscribe to the podcast, leave reviews and comments, you know what to do. Thanks a lot again, and see you in the next episodes.