The Challenge
A large specialty reinsurance firm managed 10,000 complex contracts — each worth $100M–$1B — covering sports stadiums, office buildings, and major infrastructure. Contract language evolved constantly through negotiations and legal amendments, making it nearly impossible for humans to assess risk exposure consistently. Ad-hoc spot analysis took weeks of manual effort, leaving underwriting, claims, and balance sheet positioning exposed to significant uncertainty.
What They Built
Mike Mayes' team built a GenAI prototype on a RAG-based architecture with semantic chunking, hybrid SQL/document store retrieval, and a custom reinsurance legal dictionary, enabling both structured data queries and open-ended risk questions across 10,000 complex contracts with 97%+ accuracy.
Mike's team began by mapping the contract value chain to identify where human data entry introduced errors and where risk queries consumed the most analyst time. The architecture challenge was significant: reinsurance contracts contained both precisely structured data — dollar amounts, dates, policy regions — and highly subjective legal language around inclusions, exclusions, and cyber clauses that resisted standard pattern matching.
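That mix of rigid fields and free-form clause language is exactly what clause-aware chunking has to respect. As a minimal sketch (the heading pattern and size limit here are illustrative assumptions, not the firm's actual implementation), a chunker that never splits across a clause boundary might look like:

```python
import re

# Hypothetical clause-heading pattern; real reinsurance contracts would need
# a richer grammar (ARTICLE / Section / numbered sub-clauses, schedules).
CLAUSE_RE = re.compile(r"(?m)^(?=(?:ARTICLE|Section)\s+\d+)")

def chunk_by_clause(text: str, max_chars: int = 2000) -> list[str]:
    """Split contract text at clause headings so no chunk crosses a clause
    boundary; oversized clauses fall back to paragraph-level splits."""
    clauses = [c.strip() for c in CLAUSE_RE.split(text) if c.strip()]
    chunks = []
    for clause in clauses:
        if len(clause) <= max_chars:
            chunks.append(clause)
        else:
            buf = ""
            for para in clause.split("\n\n"):
                if len(buf) + len(para) > max_chars and buf:
                    chunks.append(buf.strip())
                    buf = ""
                buf += para + "\n\n"
            if buf.strip():
                chunks.append(buf.strip())
    return chunks

contract = (
    "ARTICLE 1 Definitions\nReinsurer means...\n\n"
    "ARTICLE 2 Exclusions\nCyber events are excluded unless...\n"
)
print(chunk_by_clause(contract))
```

Keeping each chunk inside a single clause matters because an exclusion read without its parent clause heading can invert the apparent meaning of coverage language.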
To handle both query types, the team designed a hybrid retrieval architecture: SQL-style structured queries for precise data extraction, and semantic document-store retrieval for open-ended risk questions. Semantic chunking was tuned to preserve legal clause boundaries, and a custom LLM dictionary of reinsurance-specific legal terminology ensured the model interpreted domain language consistently and accurately. QA proceeded iteratively, first on 10 contracts, then 100, then the full 10,000, letting the team surface edge cases and tune accuracy before scaling. The system reached 97%+ accuracy at production scale, running make.com and relevance.ai alongside the core RAG layer.
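The routing idea can be sketched in a few lines. Everything below is a toy stand-in under stated assumptions: an in-memory SQLite table plays the structured store, a keyword-overlap lookup plays the vector store, and the routing heuristic is illustrative rather than the firm's actual classifier.

```python
import re
import sqlite3

# Stand-in for extracted structured fields (amounts, regions).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contracts (id TEXT, limit_usd INTEGER, region TEXT)")
db.executemany("INSERT INTO contracts VALUES (?, ?, ?)",
               [("C-001", 250_000_000, "EMEA"), ("C-002", 900_000_000, "APAC")])

CLAUSE_INDEX = {  # stand-in for a vector store of clause chunks
    "C-001": "Cyber events are excluded unless endorsed in writing.",
    "C-002": "War and terrorism exclusions apply to stadium perils.",
}

# Naive heuristic: field-like vocabulary suggests a structured query.
STRUCTURED_HINTS = re.compile(r"\b(limit|amount|date|region|sum insured)\b", re.I)

def answer(query: str):
    """Route precise field lookups to SQL, open-ended risk questions to
    document retrieval (keyword overlap here; embeddings in a real system)."""
    if STRUCTURED_HINTS.search(query):
        rows = db.execute(
            "SELECT id, limit_usd, region FROM contracts WHERE region = ?",
            ("EMEA",),  # a real router would parse filters from the query
        ).fetchall()
        return ("sql", rows)
    terms = set(query.lower().split())
    best = max(CLAUSE_INDEX,
               key=lambda c: len(terms & set(CLAUSE_INDEX[c].lower().split())))
    return ("semantic", best)

print(answer("What is the limit for contracts in region EMEA?"))
print(answer("Which contracts have cyber exclusions?"))
```

The design point is that each store answers only the question type it is good at: exact numbers never pass through fuzzy retrieval, and ambiguous clause questions never get forced into a rigid schema.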
AI Role
The system achieved and maintained 97%+ accuracy across 10,000 complex, 100+ page reinsurance contracts containing both highly specific structured data (dollar amounts, dates, regions) and ambiguous legal language around inclusions, exclusions, and cyber clauses.
AI Model
Custom / proprietary
Infrastructure
• RAG architecture (hybrid SQL + document store)
• make.com (workflow automation)
• relevance.ai (AI orchestration layer)
• Custom reinsurance legal LLM dictionary
Integration Points
• Vector store connected to semantic chunking pipeline for 10,000+ contract documents
• SQL layer integrated with structured contract data fields (amounts, dates, regions)
• make.com automating query routing and output formatting
• relevance.ai orchestrating LLM calls with domain-specific dictionary context
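Injecting dictionary context into each LLM call can be sketched as prompt assembly. The terms, definitions, and function names below are hypothetical illustrations of the pattern, not the firm's actual dictionary or the relevance.ai API:

```python
# Illustrative fragment of a reinsurance legal dictionary.
LEGAL_DICTIONARY = {
    "cedent": "the insurer transferring risk to the reinsurer",
    "retrocession": "reinsurance purchased by a reinsurer",
    "occurrence limit": "maximum recoverable per loss event",
}

def build_prompt(question: str, retrieved_clauses: list[str]) -> str:
    """Prepend only the dictionary entries whose terms actually appear in
    the question or retrieved clauses, keeping the context window small."""
    text = (question + " " + " ".join(retrieved_clauses)).lower()
    glossary = [f"- {t}: {d}" for t, d in LEGAL_DICTIONARY.items() if t in text]
    return "\n".join(
        ["Glossary of reinsurance terms:"] + glossary
        + ["", "Clauses:"] + retrieved_clauses
        + ["", f"Question: {question}"]
    )

prompt = build_prompt(
    "Does the occurrence limit apply to the cedent's stadium exposure?",
    ["The occurrence limit for the cedent is USD 250M per event."],
)
print(prompt)
```

Filtering the glossary to terms present in the inputs is one plausible way to keep domain grounding without paying for the full dictionary on every call.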
Who It's For
Mid-market to F100 companies in financial services, insurance (wealth management, underwriting, claims), and human capital management whose C-suite wants to connect AI investment directly to business strategy outcomes. Best suited for leaders with defined strategic priorities — growth or efficiency — who need a structured approach to identifying high-value AI use cases and managing the people/process changes required to realize ROI.