Are you confident your organization’s data is secure as employees rapidly adopt new AI tools? Are current security gateways truly enough to protect sensitive information in the age of AI? How can visibility into AI app usage transform your go-to-market efforts in cybersecurity sales? If these questions hit home, this episode is for you.
In this conversation we discuss:
👉 The growing gap between traditional security measures and the reality of employee AI tool adoption
👉 Aurascape’s innovative approach to “AI security” using intelligent agent-based technology
👉 How showing true AI usage visibility creates ‘aha’ moments for CISOs and jumpstarts the sales process
About our guest:
Moinul Khan is the CEO and co-founder of Aurascape, a fast-growing cybersecurity startup. With over 25 years of experience, including roles at Netskope and a visionary background in AI-native defense, Moinul has led Aurascape to be a 2025 RSA Innovation Sandbox finalist.
Summary:
Tune in to hear Moinul Khan share how Aurascape delivers the visibility, control, and speed enterprises need to stay ahead of the AI security curve—and why sales teams should rethink the conversation about AI risk. Learn what triggers CISOs to act and how to leverage these insights to create faster sales momentum. Don’t miss this chance to hear firsthand what innovation looks like for security go-to-market.
Links:
- Connect with Moinul Khan on LinkedIn
- Explore Aurascape: Company Website
- Interested in discussing your go-to-market strategy? Book a meeting with Andrew Monaghan
Follow me on LinkedIn for regular posts about growing your cybersecurity startup
Want to grow your revenue faster? Check out my consulting and training
Need ideas about how to grow your pipeline? Sign up for my newsletter.
[00:00:00] Hey, it's Andrew here. Just quickly before we start the episode, I want to tell you about one of my favorite newsletters. It's called Strategy of Security. If you want to understand the companies, ideas, and trends shaping cybersecurity and its submarkets, you should take a look. Cole Grolmus runs the newsletter, and he has spent the last 20 years in cybersecurity, including stints at PwC and Momentum Cyber, the investment bank dedicated to cybersecurity.
[00:00:27] Recent articles I've liked include "How Could Platformization Work in Cybersecurity," where he talks about there being lots of single-vendor platforms but not a multi-estate platform, and one called "Demystifying Cybersecurity's Public Companies," where he explores the pure-play ones and also the hybrid companies that are in cyber. He lists all of them and then breaks down the numbers in all sorts of different ways.
[00:00:52] Now, this is not a paid promotion. I just simply enjoy what Cole is publishing. Check it out at strategyofsecurity.com. Now on with this episode.
[00:01:11] Welcome to the Cybersecurity Go-to-Market Podcast for an episode where we're talking to leaders of the companies selected for the 2025 RSA Conference Innovation Sandbox. These are the very, very few, in fact, just 10 companies that judges have selected out of hundreds as the most innovative startups in cybersecurity today.
[00:01:38] I am your host, Andrew Monaghan, and we're talking with Moinul Khan, CEO and co-founder at Aurascape. Moinul, welcome to the podcast. Andrew, thanks for having me. I appreciate it. Yeah, I'm looking forward to our conversation, Moinul. You guys are at an exciting time in your journey. I know a lot of work goes into getting into the whole process, and finally being selected is a big deal.
[00:02:03] Before we get into the business side, though, Moinul, where in the world did you have your first sandbox? Yeah, that's an interesting question. So I am originally from Bangladesh, a small country in South Asia. I was born and raised there, up until my high school graduation. And then the year I graduated from high school, I came to the United States for higher education.
[00:02:31] I had an interesting childhood. I actually grew up in a boarding school run by the military. My parents shipped me off when I was a six-year-old first grader, and I was there for 12 years. Yeah, even though it was tough at the beginning, not being with family and parents, that school really taught me a lot of things, mainly how to be disciplined and how to be self-sufficient.
[00:02:58] So when I came to this country, I was 17 years old, but I was able to manage myself. And, you know, it's sort of been history since then. Yeah. You were taught independence and resilience in a boarding school environment. And that boarding school was in Bangladesh, is that right? That is correct. Yeah, it was one of those elite schools, again, run by the army.
[00:03:22] It was in the capital. I remember when I got selected as a first grader, my parents threw a party, sort of like as big as a wedding party. So everyone was proud. The downside was that you had to be a boarding student. You did not have any options. So that's how I grew up. But I loved it. Well, at 17, you came over to this country, and you've been on a journey. And now you're the CEO and co-founder of Aurascape.
[00:03:51] What's the problem that Aurascape is solving? We are specializing in AI security. And what that really means is, today, everyone is crazy about AI applications, right? Your users, business users, are using hundreds of generative AI tools. Hundreds of embedded AI tools are connected to legacy applications. People love agentic AI.
[00:04:17] Users love agentic AI because it can take actions on their behalf, right? So, as the AI tsunami has already come in, there has to be some specialized security for AI. And that's exactly what we are doing: all focused on AI security and AI tools. At the end of the day, what we are really trying to do is, when your users are using these applications, how do you keep the bad guys out, from an adversary perspective?
[00:04:46] And then, how do you protect your organization's intellectual property and sensitive data? That is exactly our focus. So in that model, then, you're looking both at attacks on the AI infrastructure usage internally, but also data going places it shouldn't. Is that right? That is correct. So think of it like this: these AI applications are sort of like one new channel.
[00:05:15] Your users have been using the Internet and the web for a long time. They're using hundreds of SaaS-based services. But interestingly, AI tools are slowly replacing all your productivity tools, right? So our focus is, when your users are using all kinds of new shiny toys, the AI tools, you need to make sure that you have a great threat prevention stack so that the bad guys cannot come in.
[00:05:40] And then you need to have a very strong data security stack so that your intellectual property, your sensitive data is always protected while your users are using all these AI tools. Our mission is not to be a sledgehammer. Our mission is to enable your users to use more and more AI tools because these AI tools are powerful. They're making your employees productive. We want to enable you to innovate, innovate fearlessly. That's exactly what we are trying to do.
[00:06:10] And as you said, the tsunami of AI apps and AI models is alive right now, and huge. What are people doing right now to protect themselves, if anything? Yeah. So it depends on what kind of organization and enterprise customer we are talking to, but by and large, we actually see a false sense of security.
[00:06:35] And what I mean by that is, if you talk to any enterprise customer, they're using some type of secure web gateway, or a security service edge (SSE) product. They're using an old-school data loss prevention solution. And using all of these Internet egress firewalls and proxies, they think that they are blocking all kinds of AI tools and only sanctioning Microsoft Copilot, right?
[00:07:03] So when I talk to a lot of CISOs, it's like, huh, we don't really have any problem, because we are blocking all the AI tools. All unsanctioned AI tools are blocked through URL filtering. And then we tell our users that if you need an AI tool, go use Microsoft Copilot. And that is a false sense of security, because you have to understand that your users are looking for an AI tool that is very specific to their intention and their use case.
[00:07:32] They're not just limited to using a corporate-sanctioned Microsoft Copilot, because there are hundreds of other AI tools that make them productive. So that false sense of security is creating a huge risk for any organization, because you are not able to block all the applications; your existing technology is not capable of supporting it.
[00:07:58] And that's why you need an AI-native security platform like Aurascape. It kind of reminds me of the CASB space 10 to 15 years ago, right? And CASB did very well just turning on visibility, just to show that, oh, you thought you were using 15 or 50 apps? Actually, look, here are the 500 that are actually in your environment. I would imagine it's an even bigger scale in AI, given how easy it is now to just pull down little agents here, there, and everywhere and start using them individually.
[00:08:27] And you are right. By the way, more than a decade ago, I actually worked at Netskope. I was there very early on, when the company was in stealth mode, when we tried to create that CASB market. It was very interesting, because at the time, every organization was using a SWG, a secure web gateway. And the approach was, you know, be a sledgehammer, right? If people are using an unsanctioned app, just block it.
[00:08:57] But you can't really do that. You cannot be a sledgehammer when people are trying to use these productivity tools. So the CASB's focus was, how do you block an upload versus a download, right? That's really what we focused on. AI applications change everything, because a single application has many different intentions. If you are using Copilot, a lazy user will post an 80-page PDF document and say, I don't have time to read it.
[00:09:26] Give me five bullet points. Now, when users are doing that, it's an HTTP transaction, and the existing CASB, proxy, and firewall will be able to crack open that SSL connection and inspect the content. But within that same application, if I'm constantly sending prompts and responses are coming back, I'm basically doing high-frequency chat messaging.
[00:09:47] The existing technology is completely blind, because behind the scenes, that very application, for that specific intention, is using a different technology. They're using WebSockets, and existing technologies are not doing content inspection on WebSockets. So these are some of the things that we knew. Being in this space for 25-plus years now, all our founding engineers and myself, we knew that you have to build something from scratch.
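The blind spot Moinul describes can be sketched in a few lines. This is a toy illustration under stated assumptions, not Aurascape's or any vendor's actual implementation: a naive gateway that scans HTTP request bodies catches the one-shot upload, but once a connection is upgraded to a WebSocket, later prompts travel as frames on the open socket, and a body-only inspector never sees them.

```python
import re

# Toy DLP rule: flag anything that looks like a US SSN.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def inspect(transaction: dict) -> str:
    """Naive body-based inspection, the way a legacy gateway works.

    `transaction` is a simplified record with method, headers, and body.
    """
    # A classic proxy decrypts TLS and scans the HTTP request body...
    if SENSITIVE.search(transaction.get("body", "")):
        return "BLOCKED"
    # ...but after an Upgrade handshake, subsequent prompts travel as
    # WebSocket frames on the open socket, not as new HTTP bodies,
    # so a body-only inspector never sees them.
    if transaction.get("headers", {}).get("Upgrade") == "websocket":
        return "UPGRADED (frames not inspected)"
    return "ALLOWED"

# One-shot upload: the PDF-summary case Moinul describes. Caught.
upload = {"method": "POST", "headers": {},
          "body": "Summarize this payslip, SSN 123-45-6789"}
# High-frequency chat over a WebSocket: the handshake looks clean,
# and everything after it is invisible to this inspector.
chat = {"method": "GET", "headers": {"Upgrade": "websocket"}, "body": ""}

print(inspect(upload))  # BLOCKED
print(inspect(chat))    # UPGRADED (frames not inspected)
```

The point of the sketch is the asymmetry: the same application looks inspectable or opaque depending on which transport a given intention uses.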
[00:10:14] Your foundational technology has to change when you are serious about AI security. And that's exactly why we founded Aurascape. And Moinul, there are a lot of companies right now who believe, as you do, that you need specialist technology in the AI space to actually do AI security. So what's the big innovation? What are you doing at Aurascape that you think the judges latched onto and said, that's worthy of showcasing at the conference?
[00:10:40] At a 10,000-foot level, our foundational technology is absolutely using AI to fight AI. And what I mean by that is, in the age of AI, we realized that if we use AI, ML, and small language models behind the scenes, we can innovate. We can put together a cybersecurity stack that will be relevant for the next two decades.
[00:11:03] We are not limited to a firewall and proxy that were built 20 years ago with old-school traditional technology. So at the end of the day, what we built is three robots. We call them robots because it's easy for people to understand, but essentially, we built three AI agents, what we call NeuroPlane, NeuroLock, and NeuroOps.
[00:11:31] And we filed 14 patents on this, right? So essentially, we have a robot that crawls through the internet. It finds all kinds of newly released applications, AI tools. And that robot automatically builds signatures and deep decoders, so that we understand what that AI tool is that your users are using. And we capture the conversation, word by word, in clear text.
[00:12:01] And that provides very deep, granular visibility with full context, so that the organization can think about, what is it that I need to do when I think about policy enforcement, right? And when they move into the policy enforcement phase, our second robot kicks in, which is NeuroLock. It identifies your sensitive data. It identifies what kinds of threats are coming in.
[00:12:28] And based on that information, it takes automatic actions, right? I will give you a simple analogy you will probably understand. Behind the scenes, all these robots are using different types of AI models that are out there. If I take GPT-4o, the entire internet has been dumped into GPT-4o, right? So if I go to GPT-4o and ask, why did Angelina Jolie and Brad Pitt get a divorce?
[00:12:57] It will have an answer. Most likely a very accurate answer, right? But as a security vendor, I don't need that entire internet. So what we did is we took all kinds of LLMs that are out there, we tweaked them, we made them into small language models, and we built our technology on top of them, right? So what that allows us to do is, if I see a malicious link, or a potentially malicious link,
[00:13:23] and I put it into a large language model and ask, is this something that I need to worry about? The answer will be yes, right? If I see a PDF document and I leverage the LLM and ask, what are some of the characteristics of this PDF file, it will be able to tell me, hey, we are looking at some financial conversation. Within the financial conversation, we are looking at a payslip.
[00:13:49] Within the payslip, we are looking at someone's personal PII data, right? So that is our foundational technology. We heavily utilize ML, small language models, and large language models; we tweak them and specialize them for our own use case. And then the third robot is even more interesting, what we call NeuroOps. It fully automates your SOC operations and incident management workflow.
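The payslip example describes a cascade of narrow classifiers: run a broad check first, and descend into more specific checks only when the broader one fires. Here is a minimal sketch of that pattern, with trivial keyword matchers standing in for the specialized small language models; the function names and rules are illustrative, not Aurascape's.

```python
import re

# Trivial matchers standing in for narrow, specialized models.
def looks_financial(text: str) -> bool:
    return any(w in text.lower() for w in ("salary", "invoice", "payslip", "net pay"))

def looks_like_payslip(text: str) -> bool:
    return "payslip" in text.lower() or "net pay" in text.lower()

def contains_pii(text: str) -> bool:
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))  # SSN-shaped

def classify(text: str) -> list:
    """Cascade: descend into a narrower check only when the broader one fires."""
    labels = []
    if looks_financial(text):
        labels.append("financial")
        if looks_like_payslip(text):
            labels.append("payslip")
            if contains_pii(text):
                labels.append("pii")
    return labels

doc = "Payslip for March: net pay $4,200, SSN 123-45-6789"
print(classify(doc))  # ['financial', 'payslip', 'pii']
print(classify("lunch menu for Friday"))  # []
```

The design point is the one Moinul makes: a security vendor doesn't need a model that knows the whole internet, only a stack of small models that each answer one narrow question cheaply.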
[00:14:18] Again, it's an AI agent itself. That means first it mediates between your end user and the admin user. And then it enables you to take automated actions, a fully automated workflow for the thousands of alerts that you may potentially get out of all these violations from your users. So as you were talking, I was thinking: that's three different elements of your product, and you've only been in existence since last year.
[00:14:48] That seems like a heck of a rate of innovation and development, to deliver all that so quickly. Absolutely, and I believe that this is why, 10 months later, we have been recognized by RSA. Andrew, this is all about the team, okay? If you look at our engineering team, the entire engineering team comes from Palo Alto Networks, Zscaler, Netskope, Google, and Amazon.
[00:15:16] The engineers from Palo Alto Networks, Netskope, and Zscaler have years of experience building distributed firewall and proxy architectures. They have years of experience on the endpoint, so we're not only doing content inspection in the cloud, we are actually doing a lot on the endpoint itself. Our CTO is writing code every single day and driving innovation, right? And then, we are an AI company.
[00:15:46] So obviously, we needed some really great engineers from Google and Amazon who have years of experience with these models, right? So the team, you know, we took the harder route. If you look at the startup industry, the natural inclination is, hey, we need some architects here, but let's go hire in India, right? We didn't do that, because we knew that this company was going to be all about innovation.
[00:16:13] And we put together a full engineering team, all in Santa Clara, all in Silicon Valley. Everyone has 15, 20 years of experience. And that's where the agility came from. That's where the speed of innovation came from, right? So that's why, 10 months later, we launched our product, we already have customers, we filed 14 patents, and we got the recognition from RSA. Well, I love that pace of innovation.
[00:16:41] I mean, in this day and age, it seems to be the way things are going. The world of AI is changing so much. I'm trying to keep up with what's happening just in the sales AI world, never mind the security AI world, and it seems like every week there's some new thing that comes out. So kudos to you guys for delivering so fast.
[00:16:59] I'm wondering, though, as you're showing potential customers what you're doing, what are the aha moments they get, as you're doing the demo or talking about what you do, when they sit up and go, oh, now I really, truly do get it? You know, their aha moment is when we start the POC and give them visibility. Remember, I told you that there was a false sense of security. The CISOs of the world, the CIOs of the world today, they're worried.
[00:17:28] They know that everybody is using some type of AI tool, right? A software developer is getting a code assistant tool, and they want to make sure that they're doing things right, you know, how to optimize the code, how to correct the code that they're writing. But they don't realize that this is a huge compliance violation and a risk to their company's IP, right?
[00:17:51] People are going to ChatGPT, and they're putting in all kinds of PII and generating content, right? So the CIOs and CISOs recognize that all their users are using some type of AI tool. But like I said, they thought that everything was blocked, that employees could only use it when they were home, on their personal devices. So when we start the POC, the first thing we do is the data discovery, right?
[00:18:20] We show them that hundreds of AI tools are there. And not only do we tell them, hey, these are the applications, but for each and every single application, we have a risk score with different attributes. At the same time, we give them the full context, because we are capturing the conversation. Our technology is not about firewall logs and proxy logs, right? A TCP or UDP connection doesn't have any context. What we capture is a conversation log.
[00:18:49] So if a software developer out there goes to Cursor and says, hey, I'm cutting and pasting my company source code, tell me what it does, a response comes back. It says, well, this code is calling AWS SQS FIFO, or whatever. How can I optimize it for batch processing? And then the new code comes in. We are capturing that step-by-step conversation.
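The contrast Moinul draws between a connection log and a conversation log can be made concrete with two toy records. The field names below are hypothetical, chosen for illustration, and are not Aurascape's actual schema.

```python
# A bare connection log: enough to know a connection happened, nothing more.
connection_log = {
    "src": "10.0.4.17",
    "dst": "104.18.32.7",
    "proto": "TCP",
    "dst_port": 443,
    "bytes": 18342,
}

# A conversation log: who said what, to which AI tool, turn by turn.
conversation_log = {
    "user": "dev-4412",
    "app": "Cursor",
    "turns": [
        {"role": "user", "text": "Here is my company source code. What does it do?"},
        {"role": "assistant", "text": "This code reads from an SQS FIFO queue..."},
        {"role": "user", "text": "How can I optimize it for batch processing?"},
    ],
}

def has_policy_context(record: dict) -> bool:
    """A policy engine needs the turns, not just the 5-tuple."""
    return bool(record.get("turns"))

print(has_policy_context(connection_log))    # False
print(has_policy_context(conversation_log))  # True
```

A firewall record can only answer "did a connection happen"; a turn-by-turn record is what lets a reviewer decide whether a given exchange was acceptable.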
[00:19:15] And when the customer sees it, they're like, oh my God, is this what's going on in my organization? I'm not ready for my next board meeting. I need to take some action. And that's exactly how we got the initial traction. Once that visibility was delivered, because of that rich context, the CISOs and InfoSec team can say, this is what we are going to allow, and this is what we are not going to allow.
[00:19:44] But again, they're not trying to block everything. They're not trying to be a sledgehammer. They want their employees to innovate, and we are now helping them enforce granular policies so that their intellectual property and data are always secure. Now, Moinul, I want you to take us back to a special moment, when you got your first real live order from a customer who wasn't a friendly.
[00:20:07] It wasn't a friend of the VC or a friend of you guys as the founding team; someone came in relatively cold, looked at what you did, loved it so much they gave you a big PO. What was that moment like, and when was it? Yeah, I remember that date. That was January 16th this year. And by the way, the case study is actually available on our website, aurascape.ai. But this company, WinWire, they're a technology company.
[00:20:34] And they are absolutely OK with their users, their software developers, using all kinds of AI tools, right? But at the same time, they were worried. They wanted to make sure that the software engineers were not violating company policy. They also do a lot of work for Microsoft, and they're a technology provider. So we started with the visibility, and the POC ran for about three weeks.
[00:21:01] We came in and started steering all their AI traffic to our man-in-the-middle proxy. And they looked at all kinds of prompts, queries, and responses coming out. They identified some of the areas that could be very, very risky. And right away, they didn't even try to enforce any policy; the entire Aurascape technology was running in monitor mode. They loved it. And they actually said, I have been using a firewall. I also have a proxy.
[00:21:30] And we don't have what you are showing us today. So the visibility was, right away, very, very eye-opening for them. And then we were able to close the deal. And the reason I remember the date is because I had a board meeting on January 17th. And I had said, man, there are a bunch of POCs going on; it would be great to have a deal.
[00:22:00] And January 16th is when we got the PO. I went to the board meeting, and the board said, great job. And from there, we moved on. And then they asked you to do it again the next day and the next day and the next day, right? Of course, you know the board, right? Why did you not get two? Yeah, they will never stop. It's all about how much value we are adding for our investors. That's what it is. How did you and the team celebrate that day? You know, it's funny.
[00:22:28] In our office, we actually have a bell. Every time we close a deal, it doesn't matter if it's a small or big deal, someone rings it, and it doesn't always have to be the sales guy, right? It can be a product manager who was involved from day one, or a tech marketing engineer, even some of the engineers. So when we close a deal, the first thing we do is bring the team together.
[00:22:53] We bring some good snacks, we ring the bell, and then we tell the story, so that every single person in the company knows. Sometimes the engineers don't see the output of their work, right? So when they hear a customer story, what problems the customer was running into, what the selection process was, and how we solved that problem, it really motivates them. So we built that culture. And as of today, for every single deal we close, that's exactly what we do. We ring the bell.
[00:23:23] We celebrate with some good snacks and lunch, and we tell our stories. I love that so much. You're so right, though. I mean, you can get lost a little bit in the code, or whatever your role is, and not always realize the impact it's having. And when you hear it firsthand from someone who's involved, it must be very inspiring for them. This is a challenge of being an engineer. If you are a doctor doing surgery on a patient, right away you're going to see that you just saved that patient's life.
[00:23:50] Being an engineer, you know, it's like, I don't know, are we changing the world? Are we solving the world's hunger problems? So the more you can feed back to the engineering team, the more motivated they get. So this is a culture that we have in the company, and we will continue it. But let's wrap up with a slightly different question, Moinul. I'm really interested in the process of the Innovation Sandbox. We recorded this on Tuesday, and the announcement came out publicly, I think, last Thursday.
[00:24:19] So just take us through it at a high level. When did you find out, and what support do you get behind the scenes to get ready for the actual big day? Yeah. So we knew about the Innovation Sandbox, right? We knew that startup companies strive to get selected for the Innovation Sandbox, because it gives them a lot of visibility. As a startup company, we don't have a big marketing machine behind our back, right?
[00:24:48] So we knew this very early on. We said, hey, we're going to go for this year's Innovation Sandbox. We knew that 250 other startups were applying. So from a process perspective, we applied for it. First, you have to meet the selection criteria, right? There are some terms and conditions: you need to be a startup, not too old, and you need to have less than a certain amount of ARR.
[00:25:18] So we met all of those criteria. Right after that, we put together a demo of our solution. The demo talks about our team and the problems that we are trying to solve, but at the same time, you are doing an actual product demo. So this is not a deck of PowerPoint slides, right? We put together a demo, and we highlighted our technology.
[00:25:44] We talked about all the differentiation, how we built our technology stack. It was sort of like a package that we submitted. And then RSA goes through their own selection process. They look at the technology, they look at the problem statement, they look at the potential for this company to be a big cybersecurity company. Who are the investors behind us? Who are the board of advisors? All of that counts. And then we got a note from RSA: hey, you are one of the top 10 finalists.
[00:26:14] And we were so happy, because this year, as you can imagine, how many cybersecurity startups are going to talk about AI, AI, AI? I cannot think of one that isn't. I would love to see a company on the show floor with a big banner that says, we are the only ones who do not use AI. That will not happen, right? So out of all of these companies,
[00:26:42] when we looked at the list, we were like, wow, we are one of the top 10. And interestingly, there are only two companies on that list that say they do AI security. And the other vendor is solving a different problem; their primary focus is the application side, kind of like an LLM proxy. Our primary focus is user-to-AI security. So we saw ourselves as the only company in that space.
[00:27:10] And we are absolutely humbled and happy. We are going to go out there, and we're going to try to win it. And how far ahead of the public announcement did you know? I think probably about two weeks before the press release came out. I don't have the exact date, but it was sort of like two to three weeks. And what support do you get from the conference to be ready for the big day? Yeah, absolutely, they're super helpful.
[00:27:39] We have a regular cadence of catching up with them. They have a template for the three-minute pitch deck. In fact, yesterday we had to do our first dry run of our stage presentation. And we are getting some great support from RSA. Not only are they taking us through the process, they're actually helping us. They're like, hey, your story is great, this is really good.
[00:28:07] This is something that you might want to consider tweaking. So they're holding our hands, and they're doing the same thing with the nine other finalists. Great support from RSA; we are very grateful. Yeah. Well, congrats on getting into the finalists. Just making the top 10 is such an achievement in itself. We're really supporting you and rooting for you for the big day in two weeks' time. So, Moinul, to you and the team, all the best. Thank you so much, Andrew. Thanks for having me.
[00:28:37] It was a great talk, and I loved it. It would mean a lot to me and to the continued growth of the show if you'd help get the word out. So how do you do that easily? There are two ways. Firstly, just simply send a link to a friend. Send a link to the show, to this episode.
[00:29:06] You can email it, text it, Slack it, whatever works for you and is easy for you. The second way is to leave a super quick rating. And sometimes that can seem complicated. So I've made it as easy for you as I can. You simply have to go to ratethispodcast.com slash cyber. That's ratethispodcast.com slash cyber. And it explains exactly how to do it. Either of these ways will take you less than 30 seconds to do. And it will mean the world to me. So thank you. Thank you.