AI just broke your trust flow: humans are back in the loop

Written by Peter Wahlgren and Jens Eriksvik

You were promised automation. Slick digital workflows that handle expense claims, insurance reports, onboarding, whatever, all without human friction. Upload a photo. Scan a receipt. Auto-approve. Done. But then AI happened. Not the helpful kind that suggests headlines or organizes your calendar. No, the kind that forges receipts so well your finance system says “looks legit.” The kind that adds fake dents to cars, generates X-rays of non-existent fractures, and drafts medical notes that never came from a doctor.

AI is so good at faking reality, we’re about to reintroduce reality checks. And that means putting humans back where we just spent a decade removing them - in the loop.

“We assumed digital meant trustworthy. AI has changed that. When anyone can fabricate evidence with a prompt, verification becomes the new foundation for doing business.”

- Peter Wahlgren, Partner at Algorithma

Our systems are set up to approve digital evidence

The problem isn’t that AI can generate content. The problem is that AI can now generate evidence, and our systems were never designed to question it.

Take expenses. A growing number of people have started using image-generating AI to create completely fake receipts. The result is flawless forgeries that pass OCR checks, policy rules, and even seasoned human eyes. A startup founder recently demonstrated how easy it was to fabricate a restaurant bill using OpenAI’s image tools: coffee stains, correct tax math, merchant logo and all. Ramp, a corporate card platform, saw the trend and rushed to build a detector just to keep fake receipts out of their workflow, not because they think it’ll catch everything, but because the current system has no defense.

Same thing in insurance. Allianz UK saw a 300% increase in manipulated-image fraud between 2022 and 2023. People take photos from social media, AI-edit them to add a cracked bumper or flooded basement, then file a claim - and in many cases, it works. Even Zurich has flagged this as a top risk area: AI-assisted shallowfakes that tweak reality just enough to pass automated reviews.

In financial services, the same story plays out in identity fraud. AI-generated driver’s licenses, synthetic ID photos, even fake medical certificates, all used to open accounts or dodge compliance. In 2024, for the first time ever, digitally forged documents overtook physical ones as the most common method of ID fraud globally.

The pattern is obvious. When systems are built to trust digital inputs (photos, PDFs, screenshots, scans), they assume those inputs reflect something real. But AI breaks that assumption. It allows anyone with a browser to fabricate inputs at scale. No design review, no back-office process, no chatbot workflow was built for that threat model.

This isn't a “bad actor” problem. It’s a system design flaw. And that flaw is now baked into how many companies operate.

Fake receipts, real reimbursements

Corporate expense systems are built to process structured data: merchant name, date, total, tax. If those fields match a policy and the receipt layout looks standard, the claim gets approved. There’s no forensic analysis, no source verification, no embedded doubt, because historically, there was no reason to doubt.
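
To make the gap concrete, here is a minimal sketch (in Python, with hypothetical field names and policy thresholds) of the kind of rule-based check an expense system runs. Every test is about the document itself; nothing asks whether a purchase ever happened.

```python
# Minimal sketch of a rule-based expense check (illustrative only, not any
# vendor's actual logic). It validates structure, not provenance.
from dataclasses import dataclass

@dataclass
class ReceiptData:
    merchant: str       # fields extracted by OCR
    total: float
    tax: float
    category: str

# Hypothetical policy: per-category spending limits and a flat tax rate.
POLICY_LIMITS = {"meals": 75.00, "travel": 500.00, "office": 200.00}
EXPECTED_TAX_RATE = 0.12

def approve(receipt: ReceiptData) -> bool:
    # 1. Category must exist and the total must sit within the policy limit.
    limit = POLICY_LIMITS.get(receipt.category)
    if limit is None or receipt.total > limit:
        return False
    # 2. The tax math has to add up (within rounding).
    if abs(receipt.tax - receipt.total * EXPECTED_TAX_RATE) > 0.05:
        return False
    # Every check above looks at the document, not the purchase.
    # A well-formed synthetic receipt passes exactly like a real one.
    return True
```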

That baseline no longer holds. Generative tools have collapsed the cost and skill barrier for fakes. A few lines of prompt now produce synthetic receipts that are indistinguishable from legitimate ones, not just visually, but structurally. The formatting is right. The math adds up. The visual noise (folds, stains, smudges) looks accidental, not designed.

This is where the system design flaw becomes operational. Expense automation assumes documentation equals evidence. It doesn’t anymore.

In early 2025, expense companies started to respond to this shift by building tools to detect AI-generated receipts. Not to solve the problem, but to acknowledge it. Even advanced workflows had no mechanism for questioning input provenance, because the premise of trustworthy visual evidence was never questioned in the first place.

What this reveals is not an arms race between AI-generated fakes and AI-powered filters, but a deeper imbalance: companies have optimized for processing, not for verification. The result is scale without scrutiny. And expense fraud is just the low-hanging fruit. This means humans need to be brought back into the loop, and the future workplace will be a hybrid one, with humans and AIs taking different roles. 

Insurance fraud from the comfort of your keyboard

The same structural vulnerability that enables expense fraud is now surfacing in insurance. In an effort to reduce friction and improve margins, many insurers have pushed claim processing into self-service workflows. A typical model: policyholders upload a few images of a damaged car or waterlogged basement, and the system calculates a payout based on visual evidence. No adjuster, no phone call, no delay.

That model worked. Until generative AI arrived.

Image editing used to require skill and time. Now it requires ChatGPT and a prompt. Fraudsters are taking photos from social media or salvage marketplaces, using AI to modify the visuals, and filing claims for accidents that never happened. Some alter existing images by adding damage overlays. Others use AI to generate fake damage scenes entirely. As noted, Allianz UK reported a 300% year-on-year increase in claims involving manipulated images between 2022 and 2023. Zurich has listed shallowfake claims as a top emerging threat, flagging doctored visuals that pass automated checks and get paid.

These aren’t theoretical risks. In one UK case, scammers submitted a repair bill alongside a photo of a van that appeared to have rear-end damage. Investigators later discovered the image was taken from the claimant’s own Instagram, with the dent digitally inserted after the fact. The claim had already been approved by the automated process.

The mechanics behind this are simple. Insurance systems are optimized to validate claims at scale, based on visual input. That input is now compromised. The model assumes that a photo uploaded by a user reflects a real-world event. In the age of synthetic images, that assumption fails.

The outcome is predictable. Claims get paid based on fake data, and premiums rise for everyone else. According to industry analysts, fraudulent photo-based claims are already contributing to higher costs across several product lines, especially in personal auto and home insurance.

This is a downstream effect of the same system design flaw: automation was built to optimize for speed and efficiency, not to interrogate the authenticity of visual content. AI has now flipped that tradeoff. If a generated image can pass for real, then any decision made by a system that doesn’t check provenance is up for grabs.

Digital automation was built on trust. AI just broke it.

Modern automation assumes that digital inputs reflect real-world events. Expense systems trust that a receipt image corresponds to an actual purchase. Insurance platforms assume that a photo of damage shows something that actually happened. Identity verification flows rely on the belief that a selfie and an ID belong to a real, present person.

This assumption was never validated, it was inherited. As long as fabricating realistic input was costly, manual, and rare, the system worked well enough. Fraud happened, but at a manageable scale. That equilibrium is gone.

“AI didn’t just change how we work - it broke the trust we built our systems on. Rebuilding that trust isn’t about slowing down automation, it’s about designing for doubt.”

- Jens Eriksvik, CEO at Algorithma

With Gen AI, anyone can fabricate digital inputs at speed and with precision. The barriers that once limited fraud (access to tools, time, technical skill) have been removed. AI enables the creation of synthetic documentation, images, and identities with the same speed and consistency as legitimate ones. In 2024, digitally forged documents overtook physical ones as the dominant method of identity fraud globally. The same year, insurance providers like Allianz and Zurich flagged AI-manipulated images as a primary source of claim abuse, driven by tools accessible to the public.

Automation pipelines are not equipped to interrogate the origin or integrity of what they process. They optimize for structure, not truth. If a PDF parses correctly, if a photo uploads in the right format, if the totals balance - the claim proceeds.

The result is systemic: when trust in input fails, the entire automation stack becomes unreliable. Workflows that were built to remove friction now become vectors for abuse. Processes that once scaled operations now scale fraud.

We reintroduce humans. We verify the world again.

Companies are starting to realize that existing automation flows are built on a broken assumption: that digital inputs are real. Rebuilding trust in these systems doesn’t mean abandoning automation, it means redesigning it to account for synthetic content.

Here’s what that looks like in practice:

  • Require human sign-off where trust is brittle. Reintroduce manual review steps for high-risk scenarios: large expense claims, high-value insurance payouts, or onboarding flows with mismatched identity data. These are not edge cases, they’re failure points that cost real money when left unchecked.

  • Redesign input validation for provenance, not format. A receipt that “looks right” is no longer enough. Look for data cross-checks: card transaction logs, POS system data, or metadata embedded in files. If the system can’t verify where the input came from, route it for manual review (see the sketch after this list).

  • Don’t just detect, dissuade. Let users know you’re watching for AI-generated fraud. Not to catch everything, but to raise the perceived cost of gaming the system. Friction isn’t always bad if it creates a deterrent effect.

  • Bring physical inspection back where it matters. Insurers are already doing this. After a 300% spike in fake damage claims, companies like Allianz and Zurich are sending more adjusters out again. If the claim value justifies it, verify the damage in person.

  • Layer in “liveness” where identity matters. For onboarding flows, move beyond ID upload and selfie comparison. Use video-based identity checks, biometric signals, or multi-step verification that’s harder to spoof with static generative content.
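
The provenance cross-check in the second bullet can be as simple as corroborating the uploaded receipt against an independent record before anything is auto-approved. A minimal sketch, assuming the expense platform can query card transaction logs (names and tolerances are illustrative, not a specific vendor’s API):

```python
# Sketch: approve only when a second system corroborates the receipt;
# otherwise the claim goes to a human reviewer.
from dataclasses import dataclass
from datetime import date

@dataclass
class Receipt:
    merchant: str
    total: float
    purchase_date: date

@dataclass
class CardTransaction:
    merchant: str
    amount: float
    posted: date

def route_claim(receipt: Receipt, transactions: list[CardTransaction]) -> str:
    for tx in transactions:
        same_merchant = tx.merchant.lower() in receipt.merchant.lower()
        same_amount = abs(tx.amount - receipt.total) < 0.01
        close_in_time = abs((tx.posted - receipt.purchase_date).days) <= 3
        if same_merchant and same_amount and close_in_time:
            return "auto-approve"   # corroborated by an independent record
    # No matching transaction: the image alone is not treated as evidence.
    return "manual-review"
```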

This is the new normal: hybrid workflows where automation handles the routine, and people are inserted where trust breaks down. Not for every case, but for the ones where getting it wrong is too expensive.

AI hasn’t just changed the cost structure of fraud, it’s changed the structure of trust. And that requires systems that do more than process data. They need to question it.

On a path to a new division of labor: AI does the desk work, humans verify the world

The future of work isn’t about replacing humans with AI. It’s about restructuring who does what, and why. AI agents are already taking over tasks that used to sit squarely on the desks of analysts, coordinators, and back-office staff. They extract data, generate summaries, file reports, fill out forms, match numbers, and move tickets. They’re fast, cheap, and available 24/7.

This is the part of work that scales well: structured, repeatable, and largely transactional. It doesn’t require presence. It doesn’t require trust. It just needs data.

But as AI expands into that territory, something else might be happening: humans are being pulled in where the system hits uncertainty. Where proof matters. Where physical validation is the only way to know what’s real: field inspectors, manual reviewers, identity validators, compliance officers, and risk analysts. Their job isn’t to process, it’s to question and validate.

As organizations embrace a hybrid trust model, AI and machine learning have a complementary role to play: not verifying truth, but triaging uncertainty. By learning from past fraud patterns and false positives, AI can help focus human review on the most ambiguous or high-risk cases.
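
In practice, that triage can be a simple routing rule on top of a fraud-scoring model: the model never decides whether a claim is genuine, it only decides which claims deserve a human’s attention. A sketch, with illustrative signals, weights, and thresholds:

```python
# Sketch of AI-assisted triage: route ambiguous or high-risk cases to people,
# let routine ones flow straight through. All numbers here are assumptions.

def triage(claim_value: float,
           model_fraud_score: float,      # 0..1 from a classifier trained on past fraud
           has_verifiable_metadata: bool) -> str:
    risk = model_fraud_score
    if claim_value > 5_000:               # high payouts raise the stakes
        risk += 0.2
    if not has_verifiable_metadata:       # no provenance signal on the uploaded evidence
        risk += 0.2

    if risk >= 0.6:
        return "human review"             # a person verifies before anything is paid
    return "straight-through"             # automation handles the routine case
```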

This doesn't change the fundamental shift: humans are back in the loop to verify reality. But it ensures their time is spent where it counts. In this way, AI helps restore trust - not by automating decisions, but by helping decide where automation should stop.

This isn’t a step backward. It’s a recalibration of trust. When reality can be faked at scale, organizations need people not just to manage systems, but to anchor them. In a world flooded with synthetic inputs, presence becomes a feature.

We’re entering a phase where automation handles the volume, and humans protect the integrity. That means workflows that aren’t just efficient - they’re verifiable. It means putting boots back on the ground, eyes back on evidence, and friction back into the places where certainty matters.

It will call for a redesign of organizations, a rethink of processes, and new talent (and educational) strategies.

The irony: AI is making office work more human

Roles that were once on the automation chopping block (inspectors, auditors, compliance officers, claims adjusters) might regain relevance. Not because the work can’t be digitized, but because the cost of blindly trusting the digital has gone up.

As AI agents take over forms, emails, expense flows, and triage queues, the humans who remain are being repositioned. They're no longer there to complete the task, they're there to validate the premise.

The patterns described here are concentrated in domains where digital evidence is a core part of operational truth, e.g. insurance, finance, procurement, onboarding, and compliance-heavy industries. In these environments, trust in inputs is not just a workflow detail, it's the system’s foundation. And that foundation is now unstable.

In insurance, investigators now supplement automated claims with on-site inspections, not because it’s efficient, but because it's conclusive. In finance, liveness checks and manual reviews are standard in high-risk onboarding. In expense auditing, red-flagged claims are rerouted for human judgment, not rule-based rejection.

These shifts are reframing the role of the knowledge worker. Less data entry, more discernment. Less form-filling, more anomaly detection. Less process, more presence.

The irony is clear. AI is absorbing the routine and transactional parts of office work. What’s left for humans is the part AI still can't handle: ambiguity, ethics, context, and the ability to say “this doesn’t add up.” In that sense, AI isn’t making work disappear. It’s pushing it closer to reality, at least in the places where reality still matters.

Trust will need architecture, not optimism

AI won't be “solved” with policies or disclaimers. If your workflow still assumes digital inputs are real, you're running on legacy trust. It’s not enough anymore. This isn’t about panicking. It’s about redesigning. Rebuilding flows that can catch AI-generated noise. Combining cryptographic proof, cross-system validation, and yes - actual human review.

Not every step needs a person. But every system now needs a point where someone checks the mirror. Because in the age of synthetic everything, reality is now a competitive advantage.
