The UAE vs India cybersecurity buyer — same problem, very different sale.
Wattlecorp does work on both sides of the Gulf. We sell pentests, red team engagements, compliance work, and training to enterprises in the UAE and to enterprises in India. The underlying technical work — running an engagement, finding the same classes of bug, writing the same kind of report — is largely identical between the two markets. Almost everything else is different. The buyer is different, the procurement cycle is different, the pricing logic is different, the way you close is different, and the way you keep the relationship after you close is very different.
I'm writing this for two audiences. One, peers in the region who keep asking why their UAE pipeline converts so much slower than their India pipeline (or vice versa). Two, founders thinking about expanding from one to the other and assuming it'll be a transplant. It won't.
The UAE buyer is compliance-driven and committee-led. The India buyer is ROI-driven and champion-led.
That's the headline. Everything else flows from it.
In the UAE — and this generalises across most GCC enterprises, with shades for KSA and Qatar — the cybersecurity purchase is almost always triggered by a regulatory or audit obligation. NESA, ADHICS, ISR, SAMA, sectoral regulators, internal audit committees with real authority. Buyers have a deadline that came from a regulator, a budget that's already been ringfenced, and a procurement process that involves three or four people who have to all agree before you sign. The economic buyer is rarely the same person as the technical buyer, who is rarely the same person as the procurement gatekeeper. You will pitch the same engagement four times to four different audiences, and the contract terms will get walked back in the last week by a legal review you didn't know about. The successful sale here is the one that survives that gauntlet — clear scope, clear deliverables, clear mapping to whatever regulation triggered the spend, and pricing that fits inside the pre-approved budget envelope. Going under the envelope to win on price actively hurts you in this market — the buyer assumes you cut corners. Going over forces a re-approval the buyer doesn't want to do. Land inside.
In India, the same pentest engagement is sold completely differently. The technical decision-maker, very often, is also the budget holder, and they want to talk to your senior engineer for forty-five minutes about methodology before they care about commercials. They're benchmarking you against two or three other vendors, sometimes in real time on the call. They're making a calculated bet on quality-per-rupee. The procurement step exists but it's not the gate; the technical buyer's say-so largely closes the deal. The cycle is faster — three to six weeks from first call to signed SOW is common — but the price elasticity is brutal. There is always a smaller competitor willing to undercut you by 30%. The way you win is by being so obviously better on substance that the buyer can defend the higher number internally as the right call.
Pricing logic flips between the two. In the UAE, fixed-price scopes win because they fit procurement. Time-and-materials makes the procurement officer nervous. The premium for being a known regional name is real and bookable. In India, time-and-materials is fine and often preferred — the technical buyer wants flexibility. Discounts are expected; not offering one signals you don't take the buyer seriously. The premium for being a known name is much smaller; the buyer is looking at the senior engineer's CV, not the company logo on the proposal.
The relationship after the sale is also opposite. UAE clients, once you're in, tend to stay in. The renewal happens because the procurement cost of switching is painful and because your contact got promoted into the role that signs off on you. The compounding from a single relationship over five years is enormous. India clients are loyal to senior engineers, not to firms. If your senior moves, the client may move with them. The retention work is at the engineer level — give your seniors visible client relationships, not generic account managers — or you'll lose work you thought was sticky.
A few practical implications I've watched founders get wrong.
Sending the same proposal deck to both markets is a mistake. The UAE proposal needs the regulatory mapping section as the first substantive content; the India proposal needs the senior engineer profiles as the first substantive content. Same engagement, two completely different documents.
Hiring the same kind of seller for both markets is a mistake. The UAE sale wants somebody who can navigate procurement, government interfaces, and Arabic-language relationships at the executive layer. That's a different person than the India seller, who needs to be a working technical mind willing to scope on a call and defend methodology against three competitors live.
Pricing the same engagement at the same number in both markets is a mistake. UAE rates can be roughly 1.5–2× India rates for the same scope, and the UAE buyer will accept that price more easily than the India buyer will accept a 30% discount off it. Pricing is a strategic signal, not just a cost-plus number, and the signal it sends is read very differently in each market.
I should also say: this is not a value judgement. Both markets are excellent for this business in different ways. UAE has higher absolute revenue per client, longer cycles, stickier retention. India has more deal volume, faster cycles, deeper technical buyers, and a labour market that lets you build the engineering team that delivers everywhere. We've decided we want both, but they're being run as effectively two different go-to-market motions inside one company. I think anyone trying to scale across the region needs to do something similar, or accept they're going to be average in both.
—
Zuhair runs Wattlecorp Cybersecurity Labs. Kerala-headquartered, UAE-active, occasionally KSA. Happy to compare notes if you're sizing the same expansion.
What I look for when hiring junior pentesters that the industry doesn't.
The cybersecurity industry has converged on a hiring screen for junior pentesters that's loud, well-marketed, and only partially useful. OSCP. CTF wins. HackTheBox rank. Maybe a CEH for the procurement-driven shops. These are all fine signals for one thing — that the candidate has invested time and can do the technical work. They are weak signals, in my experience over a decade and a half of hiring, for whether the person will be a good consultant who clients want to keep paying for.
Here's what I actually look for now, in roughly the order I weight them. Almost none of this is on the standard rubric.
Stamina with frustration. Pentesting is hours of mostly-failing followed by a brief, punctuated success. The candidate I want has done something genuinely hard for a long time without a clear external reward — written a long technical blog series nobody read, completed a difficult build over months, debugged something nobody else cared about until it worked. CTF wins are okay for this but honestly they're too gamified — the dopamine arrives every few hours. I want to see proof the person can stay engaged when nothing is rewarding them for six weeks. That's pentest week three.
Written communication, judged on a real artefact. I ask candidates to send me a writeup of their best technical finding from the last year. Not a CV bullet. The actual writeup. I read it carefully. Is the argument structured? Do they explain why the vulnerability matters before they explain how it works? Is the remediation specific enough that the developer reading it could actually act on it without coming back to ask three questions? Most candidates fail this. The ones who pass are people who, ten engagements in, are going to be writing client-ready findings without me having to rewrite them. That's the unit economics of consulting. I cannot stress how much it matters.
Curiosity about the business, not just the bug. During interviews I describe a fictional client — a regional bank, say — and ask the candidate what they'd want to know about that bank before starting an engagement. The bad answer is "the scope, the assets, the rules of engagement." Those are table stakes. The good answer starts with questions like "what's their busiest week of the year — I don't want to test then," or "who's the actual decision-maker — the CISO or someone else?" or "what are their last three audit findings, are we expected to find those again or is the brief broader?" That's a candidate who's thinking about being useful to the client, not just being clever in front of them.
Ethical clarity under pressure. Every interview I run includes one scenario question. "You're three days into an engagement. You find a vulnerability that's clearly out of scope, but it's serious — say, exposed customer PII. The client said they don't want findings outside the agreed scope. What do you do?" There's no right answer in the abstract; what I'm listening for is whether the candidate has actually thought about it, or is improvising. The good ones articulate trade-offs, mention they'd raise it with the engagement lead, talk about disclosure obligations, and don't pretend it's simple. The bad ones either say "well, I'd just report it everywhere" (career-ending if they actually did it) or "I'd ignore it because the client said so" (also career-ending, just slower). Junior testers will face this kind of moment. I'd rather know now whether they have the muscle for it.
Willingness to be wrong in front of someone. I do a live technical exercise where I deliberately let the candidate go down a wrong path, watch them, and then ask "what made you think that was the right approach?" The candidates who can say "I assumed X, looks like X is wrong, here's what I'd try instead" are gold. The candidates who reflexively defend the wrong path are not, no matter how many CTF flags they've captured. Pentesting is, in practice, a lot of being wrong fast and updating. People who can't do that in a 45-minute interview can't do it in a 4-week engagement either.
A hint of being a real person. This is the most subjective one and the one I've come to value most. I want to see that the candidate has interests, opinions, friction with the world, that aren't all about computers. People who only do security all day, in my experience, plateau early as consultants because they cannot connect with clients who don't share that obsession. The seniors who do best at our firm are people who can talk about something other than the engagement for fifteen minutes at a client dinner. I'm now actively screening for this. It's hard to put on a rubric, but it's not subtle once you're looking.
What I've stopped weighting heavily. Certifications, in 2026, are a noisy signal. OSCP is fine; CEH is borderline negative because it correlates with people who've optimised for certificates over substance. CTF rank is fine for entry-level, but I've watched too many top-100 CTF players turn out to be middling consultants because the skills don't transfer. University name doesn't matter. Tier-1 college candidates and self-taught candidates, in our intake data, end up at roughly the same place after two years.
If you're a junior reading this and worried that none of the above is on your CV — write the technical blog series. Send your best finding to people whose work you respect and ask for feedback. Get past the pure-technical phase of your career and start practising the consulting half early. That's the gap nobody is teaching, and it's the one I'm hiring for.
—
Zuhair runs Wattlecorp Cybersecurity Labs. We hire two or three junior pentesters a year. The bar is honest work and honest writing.
The agentic SOC is coming. Most of what's being sold isn't it.
Every vendor with a SIEM has rebranded their chatbot as an "AI agent" this year. I've sat through enough of these demos to get specific about what bothers me.
A real agentic SOC, the way that phrase ought to be used, is a system that can take an alert, decide whether it matters, gather what it needs to be sure, take a containment action, and write up what it did — without a human in the loop for the routine 80% of cases. That's what agentic means in the rest of the AI world. An agent has a goal, a memory, tools, and the authority to act.
What's being demoed in most security keynotes is none of those things. It's a chat interface that summarises an alert when you ask it to. The agent is you. The security analyst is still doing every step of the workflow; the AI is just narrating it back to them in cleaner English. That's a co-pilot, and it's fine. But it's not agentic, and it doesn't deliver the economics that the agentic framing implies.
The economics matter, because that's what's being sold. When a CISO buys an "agentic SOC platform," they're not buying better dashboards. They're buying the implicit promise that they can run their L1 with three people instead of twelve. That promise is the entire reason these products command the prices they do. And almost none of the products on the market today can actually deliver it, because almost none of them have the authority to act, the containment surface to act on, or the judgment loop to know when to escalate to a human.
Let me unpack each.
Authority. Most "agentic" tools today live in a role with read-only access to your stack. They can summarise the alert; they cannot disable the user, isolate the host, revoke the session, or block the IP. Which means at the moment a decision needs to be made, control hands back to the analyst. That's not an agent. That's a smart inbox.
Containment surface. The few tools that do have write authority discover quickly that real-world IT environments are messy. Half the laptops are unmanaged, half the cloud accounts are owned by ex-employees, and the only person with the password to the legacy ERP is on holiday. An agent that can isolate a host in your EDR is useful for the 40% of your fleet that has EDR. For the rest, it's still humans running playbooks. Vendors don't say this part out loud.
Judgment loop. This is the hardest one. A real agent has to know when it's wrong, when to ask for help, and when to stop. SOCs are a particularly nasty environment for this because the cost of a false negative (a missed real attack) is enormous and the cost of a false positive (auto-isolating the CFO's laptop on a Friday) is career-ending. The current generation of LLM agents is genuinely bad at calibrated uncertainty. They will confidently triage a phishing alert as benign because the email sender is in the address book. Real attacks look exactly like that. We've already seen this in lab tests.
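The asymmetry in that judgment loop can be made concrete. Below is a minimal sketch, not any vendor's API — all names, fields, and thresholds are hypothetical — of the escalation policy a real agent would need: near-certainty before auto-closing anything as benign, a lower bar for reversible containment, and an unconditional hand-off to a human for irreversible actions.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    alert_id: str
    classification: str   # "benign" or "malicious"
    confidence: float     # model's calibrated probability, 0..1
    action: str           # proposed containment action
    reversible: bool      # can the action be undone cheaply?

def decide(v: Verdict, auto_threshold: float = 0.95) -> str:
    """Return 'auto_close', 'auto_contain', or 'escalate'.

    The asymmetry is deliberate: closing an alert as benign (a
    potential missed attack) demands more confidence than taking a
    reversible containment action (a recoverable false positive).
    """
    if v.classification == "benign":
        # False negatives are the expensive failure mode: never
        # auto-close on anything short of near-certainty.
        return "auto_close" if v.confidence >= 0.99 else "escalate"
    if v.reversible and v.confidence >= auto_threshold:
        return "auto_contain"
    # Irreversible actions (wiping a host, disabling an executive's
    # account) always go to a human, whatever the confidence.
    return "escalate"

print(decide(Verdict("a1", "malicious", 0.97, "isolate_host", True)))  # auto_contain
print(decide(Verdict("a2", "benign", 0.97, "close", True)))            # escalate
```

The point of the sketch is the shape, not the numbers: the 0.99 bar on benign verdicts is exactly the calibrated-uncertainty property current LLM agents lack.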
So when does a real agentic SOC arrive? I'd guess the first products that genuinely deserve the label show up in the next 18–24 months, and they'll come from one of two places. Either an EDR vendor extends authority into a model with proper guardrails, because they already own the containment surface and have clean telemetry. Or a cloud-native security platform builds it for cloud workloads only, where the environment is uniform enough that the agent can actually act with confidence. Both are tractable. Neither will be the "platform-agnostic agentic SOC" that's being sold today.
What I'd tell a CISO buying right now: don't pay agentic SOC prices for co-pilot value. Buy the co-pilot, get real productivity from it (because you genuinely will save L1 analyst time), but keep the headcount you have, because you still need them. When somebody shows you an agent that can actually close out a Sev-3 phishing alert end to end without a human touching it, and shows you the audit log of the last 1,000 times it did so with the false-positive and false-negative rates documented — then you're looking at the real thing.
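That audit-log ask is easy to specify precisely. A minimal sketch, with a hypothetical log format and made-up demo data, of the two numbers to demand: the agent's verdicts scored against later human ground truth.

```python
# Each record: (agent_verdict, ground_truth), both "malicious" or "benign".
# Toy data standing in for 1,000 real triage decisions.
log = [
    ("malicious", "malicious"),  # correct containment
    ("benign", "benign"),        # correct close
    ("benign", "malicious"),     # missed attack: false negative
    ("malicious", "benign"),     # contained a clean host: false positive
] * 250

fp = sum(1 for v, t in log if v == "malicious" and t == "benign")
fn = sum(1 for v, t in log if v == "benign" and t == "malicious")
neg = sum(1 for _, t in log if t == "benign")      # truly benign alerts
pos = sum(1 for _, t in log if t == "malicious")   # truly malicious alerts

print(f"false-positive rate: {fp / neg:.1%}")  # clean hosts it contained
print(f"false-negative rate: {fn / pos:.1%}")  # real attacks it closed
```

If a vendor can't produce these two rates from their own logs, they don't have an agent; they have a demo.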
We'll get there. The technology underneath is real. I just don't want any of my peers cutting their L1 team in 2026 on the strength of a demo, then watching a real attack walk past a chatbot that summarised it in cleaner English.
—
Zuhair runs Wattlecorp Cybersecurity Labs. Still hiring L1s.
AI pentest tools: I tried six in our pipeline. Two stayed.
Every other vendor pitch I get this year opens the same way. "Our platform uses AI agents to autonomously discover vulnerabilities…" I started keeping a folder. It's full now.
So we ran an experiment at Wattlecorp. We took six AI-driven pentest or vulnerability-discovery tools — a mix of well-funded startups and big-name additions to existing platforms — and put them through real engagements over a quarter. Not benchmarks. Real client work, alongside our human team. The brief to my testers was simple: pretend the AI tool is a junior consultant who joined this week. Use it. Tell me what you actually used it for, and what you stopped using it for.
Two stayed in the workflow. Four didn't.
I'm not naming the four. Some of them will be fine in a year, some won't. Naming them now mostly serves nobody. But here's the pattern.
The four that didn't survive had the same failure mode. They were excellent at the part of pentesting that wasn't actually the bottleneck. Generating payloads, fuzzing inputs, summarising tool output, drafting findings narrative — yes, all faster than a human. But none of that is what slows a pentest down. What slows a pentest down is judgment: which of the 200 things the scanner just flagged is actually exploitable in this specific environment, against this specific business logic, with this specific compensating control already in place. That's the bit nobody had figured out, and the four tools we shelved were essentially highly automated junior testers — productive at the easy parts, useless at the parts that matter, and confidently wrong often enough that a senior had to re-verify everything anyway. Net cost: positive. Net value: zero or negative.
The two that stayed are doing one of two things really well.
The first is scoped reconnaissance and asset discovery. We're a services firm; every engagement starts with mapping what the client actually has. AI tools that ingest a domain, a subnet, a code repo, or a cloud account and surface a clean attack surface map — including things the client themselves had forgotten about — save us hours per engagement. Not because the underlying capability is new (most of this is just orchestration over Amass, Subfinder, Nuclei, and friends), but because the tool gets a junior tester to a useful starting picture in fifteen minutes instead of half a day. We pay for that.
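Since the capability is mostly orchestration, it's worth showing what the boring core of that layer looks like. A minimal sketch, assuming per-tool output files already on disk (the tool names are real, the file layout and function are hypothetical): merge one-hostname-per-line outputs into a deduplicated attack-surface map.

```python
from pathlib import Path

def merge_surface(output_dir: str) -> dict[str, set[str]]:
    """Merge one-hostname-per-line output files from recon tools
    (e.g. amass.txt, subfinder.txt) into a deduplicated map of
    apex domain -> discovered subdomains."""
    surface: dict[str, set[str]] = {}
    for f in Path(output_dir).glob("*.txt"):
        for line in f.read_text().splitlines():
            host = line.strip().lower().rstrip(".")
            if not host or " " in host:
                continue  # skip blank lines and tool chatter
            # Naive apex extraction: last two labels. Real use would
            # consult the Public Suffix List (this mishandles co.uk).
            apex = ".".join(host.split(".")[-2:])
            surface.setdefault(apex, set()).add(host)
    return surface
```

In practice the merged list would feed straight into resolution and scanning (httpx, Nuclei); the value of the commercial tools is wrapping this plumbing plus the scheduling and diffing around it.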
The second is reporting. Not "AI-generated findings" — please, no. But once a senior tester writes the technical body of a finding, the AI tool is genuinely good at producing the executive summary, the impact paragraph the CISO will actually read, and the remediation language that fits the client's documentation style. It's also fast at translating into Arabic for our UAE clients without us paying a translator. We caught two hallucinated severity ratings in the first month and put guardrails around it; since then, it's been a quiet productivity win. Boring use case. Real value.
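The guardrail we ended up with is boring, which is the point. Here's a minimal sketch of the idea, not our actual implementation — function and field names are hypothetical: refuse any AI-drafted summary whose stated severity disagrees with the CVSS band the human tester assigned, and route mismatches back to a human.

```python
def cvss_band(score: float) -> str:
    """Map a CVSS v3 base score to its standard severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "informational"

def draft_severity_ok(draft_severity: str, human_cvss: float) -> bool:
    """Accept the AI-drafted summary only if its severity matches the
    band implied by the tester's CVSS score; mismatches get re-reviewed."""
    return draft_severity.lower() == cvss_band(human_cvss)

print(draft_severity_ok("High", 7.5))    # True
print(draft_severity_ok("Medium", 9.1))  # False: hallucinated downgrade
```

A one-line consistency check like this is what turned the reporting tool from "re-verify everything" into a quiet productivity win.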
What I'd watch out for if you're being sold one of these. Three honest questions.
The first is whether the tool is autonomous or assistive. Most useful AI in security work is assistive — it makes a senior faster. The "autonomous" framing exists because that's what raises money, not because it's what works in 2026. Ask the vendor what the human is doing during the autonomous run. If the answer is "supervising," it's assistive. Price it accordingly.
The second is whether they can show you a finding the AI surfaced that a human would have missed. Not "found faster" — missed. In our six-tool experiment, exactly zero of them produced a finding our humans wouldn't have eventually reached. They saved time on the path to known classes of bugs. They didn't expand the bug surface.
The third is what happens when the model is wrong. Vendors love showing the demo where it works. Ask for the loss curve. How often does it produce a confident finding that's not real? In our trial, false-positive rates ranged from "annoying" to "completely unusable." This number gets buried in vendor decks for a reason.
So where does that leave us? Cautiously optimistic, with two AI tools as billed line items and four pilot agreements quietly not renewed. The thesis I'm running on is that AI is genuinely changing the speed of pentesting and the floor of what a junior can produce — both real, both economically meaningful — but it has not yet changed what makes pentesting valuable. The valuable part is still a human who's seen this kind of system fail before, sitting with the data, asking what's the worst thing that could happen, and not being satisfied with the obvious answer.
That's the part I'm still happy to pay senior money for. We'll revisit the experiment in Q3.
—
Zuhair runs Wattlecorp Cybersecurity Labs. We do offensive security across the Gulf and India.