Evidence decomposed into verifiable claims.

The reasoning behind high-stakes decisions disappears. Bernard keeps it.

Bernard turns fragmented evidence into a structured decision record. One record holds the evidence, the claims, the contradictions, and the confidence together — readable by your team and by the agents that work alongside them.

Built for art, real estate, and complex asset markets where the hard part is not finding documents — it is deciding what to trust.

The Problem

Between discovery and conviction, expertise disappears.

Marie spent three hours verifying a Dubuffet consignment — cross-referencing provenance across two languages, flagging a dimension discrepancy, weighing what to trust. Six months later, the documents were still in the file. The reasoning she built from them was gone.

Marie’s problem is not unique. Between finding an asset worth committing to and having enough confidence to act sits days of manual research, scattered documents, and judgment trapped in people’s heads. Derek runs the same pattern on a coastal listing. Victoria runs it on a collector car. Different markets, same gap.

When Marie’s junior conservator left Maison Chauvet last spring, a decade of attribution notes left with her. The vast majority of that institutional memory is never captured — it simply disappears.

The existing infrastructure — virtual data rooms, shared folders, emailed PDFs — stores documents. It does not understand them. It cannot tell you when two provenance certificates contradict each other, score how confident you should be in a title claim, or remember what it learned when the deal closes. Copy, paste, cross-reference, document. Copy, paste, cross-reference, document. The deal closes. The memo travels. The reasoning behind it does not.

Bernard keeps that reasoning. Every claim linked to evidence. Every contradiction visible. Every confidence score explicit — and still there six months later.

What Bernard Is

Not a summary. A structured record.

Bernard is not a prettier data room or a research chatbot with better prompts. It is the structured middle state between raw evidence and a final decision — a record of what the evidence says, where it conflicts, and how confident you should be before acting. From that record, Bernard renders the exact view the workflow needs without rebuilding the reasoning.

01 Evidence linked to source

Every document stays attached to its origin — certificates of authenticity, title commitments, inspection reports, condition assessments. Nothing is summarized away.

02 Typed claims built from evidence

Provenance claims, valuation assessments, title opinions — important assertions become explicit, traceable claims, not buried conclusions in a paragraph.

03 Contradictions tracked as records

Conflicts are first-class objects, not footnotes. A dimension mismatch, a date discrepancy, a competing attribution — each gets surfaced and tracked.

Marie’s Dubuffet: 65×81 in the certificate, 81×65 in the catalogue. Bernard flagged it before she accepted.

04 Confidence made visible

Scored per-claim and calibrated for the audience. Never implied by the tone of the text.

Derek’s Olney Street analysis: 87% — high enough to move, flagged enough that the undisclosed HOA assessment didn’t become a surprise at closing.

05 Outputs rendered as views

Consignment memo, underwriting packet, compliance audit, agent payload — all from one underlying record. Never rewritten from scratch.

Victoria pulled a risk summary for Robert and an audit trail for compliance — both from the same record, in under a minute.

The room is the surface. The structured record is the product. Once evidence is broken into traceable claims, it stays useful for purposes the original analyst never anticipated.

Structured means auditable, not autonomous. Every extraction, verification, and resolution is logged and timestamped. Your data stays yours. It is not used to train our models, shared with other accounts, or made accessible outside your organization. Confidence is scored per-claim and calibrated against consequences, not hidden behind plausible prose.

Bernard does not replace judgment. It eliminates the grind so your people can focus on what actually requires expertise — with a complete record of how every conclusion was reached.

In Practice

What Marie sees when she opens the record.

Marie opens the consignment file. Five documents, three formats, two languages. She drags them into Bernard. Ninety seconds later, she’s looking at a structured record — not a summary, a record. Every provenance claim traced to its source page. Every date cross-checked. And there it is: the sale record says 65 × 81 cm, the catalogue says 81 × 65 cm. A transposition — or a different painting. That’s the kind of thing she’d catch on hour three of a manual review. Maybe. Bernard caught it before she set her phone down.

Here is what that record looks like under the hood:

Marie’s Dubuffet enters consignment review. Bernard ingests the certificate of authenticity, exhibition catalogue, and prior sale record. Within minutes:

Claim “Exhibited at Galerie Maeght, Paris, 1962” — Source: exhibition catalogue p.47 — Confidence: 8.2/10

Conflict The sale record lists dimensions as 65 × 81 cm. The catalogue lists 81 × 65 cm. Bernard flags this as a contradiction requiring human review — it does not silently pick one.

Result Three verified claims, one conflict record with resolution path, confidence scored and decomposable. Marie reviews in minutes what would have taken hours of manual cross-referencing.

This is what Bernard produces — not a summary paragraph, but a structured body of evidence where every assertion traces to its source and every disagreement is visible. Marie still makes the call. Bernard ensures she makes it with full information.
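To make the shape of that record concrete, here is a minimal sketch in Python. The class and field names (`Claim`, `Conflict`, `DecisionRecord`, `open_conflicts`) are illustrative assumptions for this page, not Bernard's actual schema — the point is only that claims, sources, confidence, and contradictions are discrete, inspectable objects rather than prose.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str          # the assertion, e.g. a provenance claim
    source: str        # where in the evidence it comes from
    confidence: float  # scored per claim, 0-10

@dataclass
class Conflict:
    description: str     # what disagrees with what
    sources: tuple       # the documents that conflict
    status: str = "open" # stays "open" until a human resolves it

@dataclass
class DecisionRecord:
    claims: list = field(default_factory=list)
    conflicts: list = field(default_factory=list)

    def open_conflicts(self):
        # Contradictions are first-class records, never silently dropped
        return [c for c in self.conflicts if c.status == "open"]

# Marie's Dubuffet, as in the example above
record = DecisionRecord(
    claims=[Claim("Exhibited at Galerie Maeght, Paris, 1962",
                  "exhibition catalogue p.47", 8.2)],
    conflicts=[Conflict("Dimensions: 65 x 81 cm vs 81 x 65 cm",
                        ("prior sale record", "exhibition catalogue"))],
)
```

Rendering a memo, packet, or audit trail is then a read over this one object — the reasoning is never rebuilt, only re-presented.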

The Difference

An LLM gives you text. Bernard gives you a decision record.

Research is a feature. Structured trust is the product.

| Dimension | Current Workflow | Standalone LLM | Bernard |
|---|---|---|---|
| Output | Folders of documents | Paragraph with footnotes | Structured decision record with linked evidence |
| Contradictions | Whoever noticed last | Smoothed into prose | Explicit conflict record (Marie’s dimension mismatch, tracked until resolved) |
| Confidence | Gut feeling | Implied by tone | Scored per claim, decomposable (Derek’s 87% told him exactly how far to trust the number) |
| Six months later | Lost in a shared drive | Session expired | Full record, reasoning and confidence intact (Derek can reopen the Olney analysis a year from now and see exactly why he recommended $1.63M) |
| Audit trail | Reconstruct when asked | “Source: document.pdf” | Automatic, exportable. You don’t reconstruct — you export. |
| Automation ceiling | One task at a time, one person at a time | 1–2 steps, then a human absorbs the risk | Structured gates enable multi-step agent workflows |

The Missing Layer

AI that can’t verify can’t decide.

Enterprise records are evidence-rich but not decision-ready. One agent summarizes a document. The next summarizes that summary. Each handoff compounds interpretation and erodes provenance — abstraction drift. Structured output alone doesn’t fix it. JSON enforces syntax, not truth. Without provenance, conflict tracking, and calibrated uncertainty, structure is just well-formatted guesswork.

An agent that cannot verify its inputs cannot be trusted with the next step.

Bernard is the decomposition layer. Documents become typed claims, each linked to source evidence. Contradictions become explicit records, not gaps smoothed into prose. Confidence becomes a calibrated score, not a tone. That structured layer lets agents operate on claims with provenance and uncertainty — not narratives about narratives. A risk agent reads the conflict log. A compliance agent checks gate status. A pricing agent consumes the confidence decomposition.
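The "structured gate" idea can be sketched in a few lines. This is a hypothetical check an agent workflow could run before taking its next step — the field names and the 7.0 threshold are illustrative assumptions, not Bernard's API; consequence-calibrated thresholds would come from the domain playbook.

```python
def gate_clear(record, min_confidence=7.0):
    """Proceed only if every conflict is resolved and every claim
    clears the confidence threshold for this consequence level."""
    open_conflicts = [c for c in record["conflicts"] if c["status"] == "open"]
    weak_claims = [c for c in record["claims"] if c["confidence"] < min_confidence]
    if open_conflicts or weak_claims:
        # Escalate with context instead of proceeding on unverified input
        return False, {"open_conflicts": open_conflicts, "weak_claims": weak_claims}
    return True, {}

record = {
    "claims": [{"text": "Exhibited at Galerie Maeght, 1962", "confidence": 8.2}],
    "conflicts": [{"description": "65 x 81 cm vs 81 x 65 cm", "status": "open"}],
}
ok, context = gate_clear(record)
# ok is False: the open dimension conflict blocks the next agent step
```

A downstream agent never has to infer risk from prose — it reads the gate, and the gate reads the record.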

Your specialists shift from re-checking every output to reviewing exceptions. Your enterprise gets AI infrastructure it can actually underwrite.

How It Works

Evidence in. Structure first. Output later.

01 Preserve evidence

Documents are uploaded, parsed, and stored alongside originals. PDFs, images, deeds, certificates — normalized and preserved. Source material stays traceable at every stage.

02 Apply the playbook

Domain-specific rules determine what matters. Assets are categorized against the vertical's own framework, and research runs against curated authoritative sources — targeted queries, not broad web search. Assertions become typed claims. Contradictions and unknowns are surfaced, not smoothed over. When findings are clear, Bernard proceeds. When something needs human judgment, it escalates with context.

03 Render the record

One structured decision record, rendered for any audience — memo, packet, audit trail. Every finding decomposed into individual claims, each linked to source evidence, confidence scored and auditable. Never rebuilt, never duplicated.

The reasoning kernel stays stable across every domain. What changes is the playbook: which taxonomy applies, which sources are authoritative, what contradictions look like in that domain, what confidence thresholds matter, and what the output needs to say.
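The kernel/playbook split can be expressed as configuration. The sketch below is illustrative only — the field names and values are assumptions made for this page, not Bernard's actual playbook format — but it shows why adding a vertical is configuration, not a rebuild.

```python
# Hypothetical playbook configs: the reasoning kernel stays the same,
# only this per-vertical configuration changes.
FINE_ART = {
    "taxonomy": ["provenance", "attribution", "condition", "exhibition_history"],
    "authoritative_sources": ["catalogue raisonné", "foundation archives",
                              "auction records"],
    "conflict_checks": ["dimension_mismatch", "date_discrepancy",
                        "competing_attribution"],
    "confidence_threshold": 8.0,  # high stakes, sparse evidence
    "views": ["consignment_memo", "audit_trail"],
}

REAL_ESTATE = {
    "taxonomy": ["title", "survey", "encumbrances", "valuation"],
    "authoritative_sources": ["county recorder", "title commitment",
                              "ALTA survey"],
    "conflict_checks": ["legal_description_mismatch", "undisclosed_easement"],
    "confidence_threshold": 6.5,  # conditional closings are routine
    "views": ["underwriting_packet", "compliance_audit", "agent_payload"],
}

def load_playbook(vertical):
    """Adding a vertical means adding a config like the two above."""
    return {"fine_art": FINE_ART, "real_estate": REAL_ESTATE}[vertical]
```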

Proof

One object. Four views. One underlying record.

The same structured record rendered for four different workflows. Decision-facing, evidence-facing, audit-facing, agent-facing — nothing rewritten.

Memo View
Packet View
Audit View
Agent View

Two domains. Same architecture.

Art proves the kernel under the hardest evidence conditions. Real estate proves it scales to commercially legible workflows. Same architecture, different playbooks.

The Proving Ground
Fine Art BRD-2048-Z
Claim

“Exhibited at Galerie Maeght, Paris, 1962 — attributed to the artist”

Evidence

Exhibition catalogue p.47 — cross-verified against Maeght Foundation archive

Contradiction

Sale record lists dimensions as 65 × 81 cm. Exhibition catalogue lists 81 × 65 cm. Orientation transposition or different work. Flagged for human review.

Confidence 8.2 / 10 — high, pending dimension resolution

Sparse provenance, competing attributions, high stakes — if the architecture handles this, it handles anything. And what starts as verification becomes the foundation for licensing, cultural production, and new work the original record was never built for.

The First Commercial Market
Real Estate BRD-3091-F
Claim

“Seller can convey marketable title at closing — 1247 Ocean Drive, Miami Beach”

Evidence

Title commitment Schedule A/B-II, recorded warranty deed #2019-042871, ALTA survey

Contradiction

Survey legal description does not align with Schedule B-II. Utility easement encroachment not reflected in title exceptions. Flagged for curative endorsement.

Confidence 6.4 / 10 — conditional, pending curative endorsement

Structured inputs, high volume, institutional stakes — the same architecture, scaled to commercial workflows. Issues are identified once and tracked through closing — not rediscovered by every new stakeholder who enters the deal.

Request Early Access
See a live Bernard record.
The Markets

The models moved first. Decision infrastructure has not.

AI has made extraction and synthesis dramatically cheaper. That sharpens the value of structured reasoning — it does not diminish it. The new bottleneck is not reading documents. It is deciding what to trust.

Bernard is built on a transferable architecture: one reasoning kernel, different playbooks per domain.

The same architecture. Next markets taking shape.

The same reasoning kernel is designed to extend to other opaque markets through domain-specific playbooks. Adding a vertical is a configuration exercise — taxonomy, source priorities, validation rules — not an engineering rebuild. The architecture is multi-vertical. The go-to-market is focused.

See the record.

Bernard is raising a seed round and selectively onboarding early partners. Whether you’re reviewing a consignment like Marie, pricing a coastal listing like Derek, or vetting a collector car like Victoria — we’ll walk you through a live decision record built from your market.

Request Early Access