
Last week, we announced EFGH’s partnership with QuikBot.
A few people asked me the same question: Why would a financial infrastructure company care so much about robots?
Because this is not really a robotics story.
It is a trust story.
Scale reality
More than 205,000 professional service robots were sold globally in 2023. Nearly 113,000 of them were in transportation and logistics alone.
The machines are arriving.
The harder question is whether the systems around them are ready.
Real question
That is where most conversations still go wrong.
People talk about sensors, software and autonomy.
Fair enough.
But in the rooms that matter, the first serious question is usually much simpler:
👉 If something goes wrong, who pays?
If a robot damages property, injures someone, or disrupts operations, you do not have a technology problem anymore.
You have a liability problem.
What changed
That is why we partnered with QuikBot.
QuikBot’s robots operate in real buildings and logistics environments.
They interact with lifts, access points and shared spaces.
What interested us was not just the machine.
It was the decision layer behind the machine.
New approach
Instead of treating insurance as something that sits outside the system, we asked a different question.
What if protection moved with the action itself?
So when a robot moves, enters a building, calls a lift or completes a task, that action can be:
- recorded
- contextualised
- insured in real time
Not reconstructed days later.
Captured in the moment, with a clear record of what happened and under what conditions.
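A minimal sketch of what "protection moving with the action" could look like as a data record. Everything here is hypothetical for illustration — `InsuredAction`, `record_action`, and the policy reference format are my placeholders, not QuikBot's or EFGH's actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class InsuredAction:
    """Hypothetical record of a single robot action, captured at the moment it happens."""
    robot_id: str
    action: str            # e.g. "call_lift", "enter_building", "complete_task"
    site: str
    timestamp: str         # UTC, recorded when the action occurs
    conditions: dict = field(default_factory=dict)  # context at the moment of action
    policy_ref: str = ""   # cover bound to this specific action, not a blanket policy

def record_action(robot_id: str, action: str, site: str, conditions: dict) -> InsuredAction:
    """Capture an action with its context and bind cover in real time,
    instead of reconstructing what happened days later."""
    ts = datetime.now(timezone.utc).isoformat()
    policy_ref = f"POL-{robot_id}-{ts}"  # placeholder binding reference
    return InsuredAction(robot_id, action, site, ts, conditions, policy_ref)

event = record_action("qb-042", "call_lift", "South Beach", {"floor": 12, "payload_kg": 3.5})
print(event.policy_ref)
```

The point of the sketch: the record, the context, and the cover are created in the same step, so there is never a gap between "the robot acted" and "someone is accountable for it".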
QuikBot already has deployments in Singapore sites including South Beach and Mapletree Business City, with Punggol Digital District next.
Why it matters
This matters more than it may seem.
Munich Re has said businesses hesitate to adopt AI when performance risk is unclear and not covered by traditional policies.
Swiss Re has made a similar point about automation: insurance is shifting from the human operator to the machine and its data.
In other words, the next barrier to scale is not intelligence.
It is accountability.
Hard lesson
I have come to believe that many AI strategies are too focused on capability and not enough on consequence.
We obsess over what the system can do.
We spend less time on what happens when it gets it wrong.
That is backwards.
What to do
If you are building or deploying AI, three questions matter:
- Who is accountable when the system acts?
- Can you prove what happened, in real time?
- Is protection built into the system, or added after the fact?
If you cannot answer these clearly, adoption will slow. Not because the tech fails, but because trust does.
Bigger idea
That is how we think about the Finternet at EFGH.
Not as finance layered on top, but as protection, payments and accountability embedded into how systems operate.
Doing good
For me, “doing good” is not a slogan.
It means making sure ordinary people are not left carrying the risk when technology starts acting on their behalf.
Final thought
If you cannot explain who pays, your AI will not scale.


