There is a line in every investment process where automation should stop and human judgment should begin. Most firms draw that line in the wrong place — either too conservatively, leaving enormous amounts of capacity on the table, or too aggressively, letting AI make calls that require experience, relationships, and pattern recognition to get right.
I think about this boundary every day. After two years of building and deploying production AI systems on live CRE transactions, I've developed a practical framework for where the line belongs. The short version: AI handles the eighty percent that consumes your team's capacity. Humans handle the twenty percent that actually creates value.
Getting this boundary right is the difference between firms that use AI well and firms that use it dangerously.
What belongs in the eighty percent
Data assembly. The vast majority of time in a typical underwriting process isn't spent analyzing; it's spent gathering, organizing, formatting, and cross-referencing information. Pulling comps. Assembling rent rolls. Formatting financial statements into your model template. Reconciling data across multiple sources. This is high-volume, repetitive, accuracy-dependent work that AI handles faster and more consistently than any analyst.
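To make that concrete, here's a minimal sketch of one reconciliation step: cross-referencing rent roll data from two sources and flagging mismatches for review. The RentRollRow schema, field names, and dollar tolerance are illustrative assumptions, not any vendor's actual format.

```python
from dataclasses import dataclass

# Illustrative schema -- field names are assumptions, not a vendor standard.
@dataclass(frozen=True)
class RentRollRow:
    unit: str
    tenant: str
    monthly_rent: float

def reconcile(source_a: list[RentRollRow], source_b: list[RentRollRow],
              tolerance: float = 1.0) -> list[str]:
    """Cross-reference two rent rolls and return human-readable discrepancies."""
    by_unit_b = {row.unit: row for row in source_b}
    issues = []
    for row in source_a:
        other = by_unit_b.get(row.unit)
        if other is None:
            issues.append(f"Unit {row.unit} missing from source B")
        elif abs(row.monthly_rent - other.monthly_rent) > tolerance:
            issues.append(
                f"Unit {row.unit}: rent mismatch "
                f"({row.monthly_rent:,.2f} vs {other.monthly_rent:,.2f})"
            )
    return issues
```

The point isn't the code; it's that every discrepancy gets surfaced the same way every time, which is exactly what tired analysts at 11pm do not reliably deliver.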
First-pass screening. When an offering memorandum hits your desk, there's a set of threshold questions that determine whether it's worth spending time on: Does it meet your return criteria? Is it in your target geography? Does the basis make sense relative to comps? Does the tenancy profile match your strategy? These are algorithmic questions — they have right answers that can be computed from data.
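Because the answers are computable, a first-pass screen reduces to a handful of threshold checks. Here's a minimal sketch; every threshold, market, and field name in it is an illustrative assumption, not anyone's actual investment criteria.

```python
from dataclasses import dataclass

@dataclass
class Deal:
    going_in_cap_rate: float   # e.g. 0.062 for 6.2%
    market: str
    price_psf: float
    comp_avg_psf: float
    top_tenant_share: float    # share of rent from the largest tenant

# Illustrative criteria -- every threshold here is an assumption.
TARGET_MARKETS = {"Dallas", "Atlanta", "Phoenix"}

def passes_screen(deal: Deal) -> tuple[bool, list[str]]:
    """Answer the threshold questions; return (verdict, reasons for any fail)."""
    reasons = []
    if deal.going_in_cap_rate < 0.055:
        reasons.append("cap rate below return criteria")
    if deal.market not in TARGET_MARKETS:
        reasons.append("outside target geography")
    if deal.price_psf > 1.15 * deal.comp_avg_psf:
        reasons.append("basis more than 15% above comps")
    if deal.top_tenant_share > 0.40:
        reasons.append("tenant concentration exceeds strategy limit")
    return (not reasons, reasons)
```

Note that the screen returns its reasons, not just a verdict. That matters for what comes later: the screen's job is to inform a human, not to make the final call.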
Monitoring and surveillance. Tracking NOI against business plans. Flagging lease expirations. Monitoring covenant compliance. Assembling monthly property reports. Surfacing anomalies in operating expenses. This work is critical but mechanical — it requires attention and consistency, not judgment.
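As an example of how mechanical this work is, here's a sketch of one surveillance check: flagging months where operating expenses deviate sharply from trailing history. The z-score approach and the 2.5-sigma threshold are illustrative choices on my part, not a standard.

```python
from statistics import mean, stdev

def flag_opex_anomalies(monthly_opex: list[float],
                        z_threshold: float = 2.5) -> list[int]:
    """Return indices of months whose opex deviates sharply from trailing history.

    A simple z-score test against the prior twelve months; the 2.5-sigma
    threshold is an illustrative assumption, not an industry convention.
    """
    anomalies = []
    for i in range(12, len(monthly_opex)):   # need a year of history first
        window = monthly_opex[i - 12:i]
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(monthly_opex[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies
```

Whether a flagged month is a one-time insurance true-up or a sign the property manager has lost control of payroll is, of course, a judgment call. The machine's job is to make sure the question gets asked.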
Document generation. Quarterly reports, LP communications, first drafts of IC memos, DDQ responses, and lender packages all follow templates and pull from established data sources. The assembly is automatable. The editorial review is not.
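Here's the automatable half in miniature: a first draft assembled from a template and known data, with the editorial pass left to a person. The template text, field names, and numbers are invented for illustration.

```python
from string import Template

# Illustrative template and field names -- assumptions, not a real report format.
SNIPPET = Template(
    "In $quarter, $property_name produced NOI of $$$noi, "
    "$delta_pct% against the business plan."
)

def draft_paragraph(data: dict[str, str]) -> str:
    """Assemble the first draft; a human still edits before anything ships."""
    return SNIPPET.substitute(data)

# Example usage with made-up numbers:
print(draft_paragraph({
    "quarter": "Q3 2024",
    "property_name": "Example Plaza",
    "noi": "412,000",
    "delta_pct": "+3.1",
}))
```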
What stays in the twenty percent
The deal that doesn't fit the model. The most valuable investments I've made in my career were the ones where the quantitative screening would have said no, but experience said wait — there's something here. A basis that looks high until you understand the rezoning story. A property that screens poorly on trailing NOI but has lease-up upside that only someone who knows the submarket can see. AI can surface every deal that fits your box. It cannot tell you when to step outside the box.
Relationship dynamics. The seller who will take a lower price for certainty of close. The lender who will stretch to a higher LTV because you've performed on three previous deals. The LP whose allocation timeline aligns with your deployment schedule in a way that isn't captured in any database. Relationships are the highest-returning asset class in real estate, and they are entirely human.
Structural judgment. How to structure a waterfall that aligns incentives across GP and LP. Whether a preferred equity position or a first lien note is the right risk-adjusted play in this cycle. When to exercise an extension option versus selling into strength. These are decisions that require synthesizing market conditions, relationship considerations, fund-level strategy, and instinct.
The IC conversation. The moment in an investment committee discussion when someone says "something feels off about this deal" — and they're right, but they can't point to a single number that's wrong. That pattern recognition is the product of decades of seeing deals go right and wrong. It's the most valuable thing in the room, and it's irreducibly human.
The harness, not the model
There's a useful distinction I've adopted from practitioners who've thought deeply about AI in professional contexts: the difference between using the model and harnessing the model. Using the model means asking it a question and trusting the answer. Harnessing the model means building a system around it — with guardrails, validation layers, human review checkpoints, and feedback loops — that makes the output reliably useful for professional work.
The firms that harness AI outperform the ones that merely use it by a significant margin. The reason is straightforward: in a professional investment context, the cost of a wrong answer isn't a wasted afternoon — it's a capital allocation mistake that compounds for years. The harness is what prevents the model from generating confident-sounding garbage that looks like institutional work product until someone with experience reads it and realizes the assumptions are wrong.
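Here's the shape of a harness in miniature. The model call is wrapped in validation layers and an explicit human-review checkpoint, so an unverified answer never flows straight into work product. Both call_model and the validators are hypothetical stand-ins for whatever model and checks a firm actually runs; this is a sketch of the pattern, not an implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HarnessResult:
    answer: str
    validated: bool
    needs_human_review: bool
    notes: list[str]

def harnessed_query(prompt: str,
                    call_model: Callable[[str], str],
                    validators: list[Callable[[str], str | None]]) -> HarnessResult:
    """Run a model inside guardrails instead of trusting the raw answer.

    call_model and each validator are hypothetical stand-ins: a validator
    returns None if the output passes, or a note describing the problem.
    Anything that fails validation is routed to a human, never used directly.
    """
    answer = call_model(prompt)
    notes = [note for v in validators if (note := v(answer)) is not None]
    ok = not notes
    return HarnessResult(
        answer=answer,
        validated=ok,
        needs_human_review=not ok,   # the human checkpoint is non-negotiable
        notes=notes,
    )
```

The part the sketch leaves out is the feedback loop: in a real harness, the corrections reviewers make should flow back into new validators, so the system gets harder to fool over time.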
The practical implication
When I work with clients, the first thing we do is map every workflow and classify each step: is this data assembly or judgment? If it's assembly, it's a candidate for automation. If it's judgment, it stays human — but we make sure the human has better, faster, more complete data when they arrive at the decision point.
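In practice, that map is just a labeled inventory of steps. A minimal sketch of what the classification looks like; the steps listed here are generic examples, not any client's actual workflow.

```python
from enum import Enum

class StepKind(Enum):
    ASSEMBLY = "assembly"   # candidate for automation
    JUDGMENT = "judgment"   # stays human

# Example inventory -- the steps and their labels are illustrative.
WORKFLOW_MAP = {
    "pull comps": StepKind.ASSEMBLY,
    "build rent roll": StepKind.ASSEMBLY,
    "first-pass screen": StepKind.ASSEMBLY,
    "negotiate structure": StepKind.JUDGMENT,
    "final IC decision": StepKind.JUDGMENT,
}

automation_candidates = [step for step, kind in WORKFLOW_MAP.items()
                         if kind is StepKind.ASSEMBLY]
```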
The goal is not to remove humans from the process. The goal is to remove the low-value work that prevents humans from spending their time on the high-value work. When your analysts aren't spending three days assembling an underwriting file, they can spend those three days actually analyzing the deal. When your IR team isn't spending two weeks on quarterly reports, they can spend that time having conversations with LPs that build real trust.
The twenty percent is where careers are made, where relationships are built, and where returns are generated. Everything else is infrastructure. Build the infrastructure right, and the twenty percent gets your full attention.