building ai for an industry that doesn't trust it

march 2026

freight is a trillion-dollar industry that runs on phone calls, spreadsheets, and relationships. when you walk into a brokerage and say "we're building ai agents," the response is usually polite skepticism at best.

and honestly, they're right to be skeptical. most ai products in logistics are demos dressed up as solutions. they work on clean data, in controlled environments, with patient users. the real world has none of those things.

[image: freight trucks at a distribution center]

why freight is different

a dispatcher managing 200 loads doesn't have time to babysit an ai. if the system makes a bad recommendation — wrong carrier, wrong rate, wrong lane — it doesn't just cost money. it costs trust. and in freight, trust is everything.

this means your ai can't just be accurate. it has to be explainable, fast, and gracefully wrong. when it doesn't know something, it needs to say so clearly instead of hallucinating a confident answer.

here's the basic structure of our confidence scoring system:

interface CarrierMatch {
  carrier_id: string;
  lane: string;
  rate_estimate: number;
  confidence: number; // 0-1
  explanation: string; // human-readable reasoning
}

// hasLaneHistory is defined elsewhere in our codebase; it checks whether
// this carrier has completed loads on this lane before
declare function hasLaneHistory(carrierId: string, lane: string): boolean;

function shouldAutoAssign(match: CarrierMatch): boolean {
  // only auto-assign if confidence is above threshold
  // AND the carrier has a history on this lane
  return match.confidence > 0.92 && hasLaneHistory(match.carrier_id, match.lane);
}
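and the "gracefully wrong" path matters as much as the happy path. here's a sketch of how a low-confidence match might be surfaced — the CarrierMatch shape is restated so the snippet stands alone, and renderRecommendation is an illustrative name, not our actual function:

```typescript
// sketch: below the lowest trust threshold, admit uncertainty
// instead of presenting a confident guess
type CarrierMatch = {
  carrier_id: string;
  lane: string;
  rate_estimate: number;
  confidence: number; // 0-1
  explanation: string; // human-readable reasoning
};

function renderRecommendation(match: CarrierMatch): string {
  if (match.confidence < 0.7) {
    // not confident enough to recommend anything — say so, with the reasoning
    return `not confident enough to recommend a carrier for ${match.lane} — ${match.explanation}`;
  }
  return `suggest ${match.carrier_id} at ~$${match.rate_estimate} (${Math.round(match.confidence * 100)}% confident): ${match.explanation}`;
}
```

the point is that the explanation travels with the recommendation either way — the dispatcher always sees why, whether the system is acting or abstaining.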

the key insight: we don't just score accuracy. we score trustworthiness. a 95% accurate system that can't explain itself is worse than an 85% accurate system that tells you exactly why it made each decision.

the trust ladder

we think about dispatcher trust in stages. each stage requires a different level of system reliability:

stage        system role                           required confidence
1. observe   show recommendations, human decides   > 70%
2. suggest   pre-fill forms, human confirms        > 85%
3. act       auto-assign routine loads             > 92%
4. own       handle end-to-end with exceptions     > 97%
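the ladder maps cleanly to code. a minimal sketch, using the thresholds from the table above (trustStage and the stage names are illustrative, not our actual API):

```typescript
// sketch: map a confidence score to a trust-ladder stage.
// thresholds mirror the table above; anything at or below 0.85
// stays in the lower stages, where a human makes the call.
type TrustStage = "observe" | "suggest" | "act" | "own";

function trustStage(confidence: number): TrustStage {
  if (confidence > 0.97) return "own";      // handle end-to-end with exceptions
  if (confidence > 0.92) return "act";      // auto-assign routine loads
  if (confidence > 0.85) return "suggest";  // pre-fill forms, human confirms
  return "observe";                         // show recommendations, human decides
}
```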

most ai companies try to jump straight to stage 4. we learned the hard way that you have to earn each stage.

what we're learning

the biggest lesson so far: the product is not the model. the product is the workflow around the model. how do you surface recommendations without interrupting someone's flow? how do you build confidence gradually, load by load, until the dispatcher trusts the system enough to let it handle the easy stuff on its own?

"the best ai feature we shipped wasn't a prediction — it was a button that said 'i disagree' and actually learned from it."
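the mechanics of that button are simple — the value is in what you keep. a hypothetical sketch (DisagreementLog and its names are illustrative): record the model's pick next to the dispatcher's override, so every disagreement becomes a labeled training pair and a running trust signal.

```typescript
// sketch: capture each "i disagree" as a (suggested, chosen) pair
interface Disagreement {
  load_id: string;
  suggested_carrier: string; // what the model picked
  chosen_carrier: string;    // what the dispatcher picked instead
  reason?: string;           // optional free-text from the dispatcher
}

class DisagreementLog {
  private entries: Disagreement[] = [];

  record(d: Disagreement): void {
    this.entries.push(d);
  }

  // fraction of suggestions the dispatcher overrode — a cheap,
  // honest signal of how much the system is actually trusted
  overrideRate(totalSuggestions: number): number {
    return totalSuggestions === 0 ? 0 : this.entries.length / totalSuggestions;
  }
}
```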

these are design problems as much as they are engineering problems. and they're the kind of problems i find most interesting — where the technical challenge is inseparable from the human one.