The Single Most Important AI Skill Has Nothing To Do With AI
YC’s Winter 2025 batch grew revenue 10% per week. Previous batches grew 2-4%. Something fundamental changed. It has nothing to do with the models.
What if the most important skill for navigating AI has nothing to do with AI?
If someone asked you what the single most important skill for thriving in the age of AI is, you would probably say “prompt engineering” or “knowing which tools to use” or “understanding where to automate.” Those are all reasonable answers. They are also surface-level answers.
The better question is what your organization would look like if you designed it from scratch today. That question forces you to see that significant portions of your current structure exist to compensate for limitations that no longer exist. Most organizations refuse to look. The handful who do are building something entirely different.
Here is a story that makes this concrete. Y Combinator’s Winter 2025 batch grew revenue at 10% per week during the three-month program. Previous batches — just 12 months earlier — grew at 2-4% per week. Garry Tan, YC’s CEO, told CNBC in March 2025: “There’s a ton of hype, but what’s unique about this moment is that people are actually getting commercial validation. If you’re an investor at demo day, you’ll be able to call a real customer, and that person will say, ‘Yeah, we use the software every single day.'”
That 3–5× acceleration has nothing to do with better models. The models available in Winter 2025 were the same models available in Summer 2024. What changed is that founders stopped asking “how do we integrate AI into our current process?” and started asking “what would we build if we designed this company from scratch knowing AI exists?”
For a quarter of YC’s current startups, 95% of the code was written by AI. Tan: “What that means for founders is that you don’t need a team of 50 or 100 engineers. You don’t have to raise as much. The capital goes much longer.” Teams of five are now producing what required teams of fifty eighteen months ago. The constraint was never the technology. It was the willingness to redesign everything around the technology.
Adoption is not transformation. Installing the motor is not redesigning the factory floor. That gap is where the 10% weekly growth lives.
At YC’s AI Startup School in June 2025, Sam Altman said something that went underreported but matters enormously. He said he believes there is a massive “product gap” in AI — meaning even if model performance plateaus completely today, there are still countless AI products people have not thought of that can be built. He also said: “I think Garry probably has a list of the top five most common areas AI startups are working on. My guess is that half of the people in this room are working on a startup in one of those five areas. I think that the person who creates a company bigger than OpenAI is working on something not in those five most common areas.” Translation: the winners are not building better chatbots. They are redesigning entire workflows from scratch.
Last quarter, I placed a Head of Design at a fintech company scaling fast. Two final candidates: both had identical tool proficiency (Figma AI, Midjourney, Claude). Candidate A presented how AI cut their prototype time by 60% — thirty screens in three hours instead of three days. Candidate B presented how AI freed them to spend that reclaimed 60% on customer interviews, which revealed the product they were building solved the wrong problem entirely. They redesigned from scratch. Candidate B got the offer. The tool skill was identical. The question quality was not.
Why some YC companies went from zero to eight figures in nine months while others with the same tools saw nothing
Garry Tan told the story on Christina Cacioppo’s podcast in June 2025. Two Stanford PhD students were working on an AI voice assistant for biologists. Smart founders. Strong technical background. They had access to the same models everyone else did. By mid-summer, it was clear the product was not working. The problem: they had never set foot in a biology lab.
They pivoted. They started building infrastructure for training code-generation models — something they had actually worked on themselves the prior summer at Stanford. Within nine months, they went from zero revenue to almost eight figures. Same founders. Same model access. Completely different outcome. The variable was context.
Garry’s insight: “The key insight is probably more important than ever. And it’s usually lying in your own history or in your own backyard.” The YC companies growing fastest are solving problems they personally encountered and understand at a molecular level. They can brief AI with the kind of detail that turns generic output into domain-specific precision. The ones struggling are solving problems they heard were big markets but have never experienced themselves.
Garry Tan noted in September 2025 that participants reported closing enterprise contracts over $100K within the three-month YC program — something he characterized as “a big change” from prior years. The acceleration is not about better sales. It is about building products so clearly solving a real problem that procurement cycles collapse. One YC founder told a catalaize.substack reporter: “We co-located with a construction business for two months.” That level of contextual immersion — understanding the workflow, the failure modes, the constraints — is what produces products customers say yes to immediately. The ones who skip that step produce demos that look impressive in pitch decks and get stalled in proof-of-concept hell for 18 months.
Last month, a candidate told me their portfolio was “90% AI-assisted.” When I asked what the AI couldn’t do, they went silent. The designer who got the role said: “AI handles the layouts I used to spend days on. That freed me to spend three weeks interviewing users in their actual workspace. The AI made the mockups. The user research told me which mockups to make.”
A complete brief has four layers:
- 1 What decision is being made and which constraints are binding. Most users provide only this and wonder why the output feels generic.
- 2 What you tried before and why it failed. Without this, AI will recommend the thing you already discarded and explain why it is a good idea.
- 3 How your organization decides, what it values, the tacit knowledge your best people carry but have never written down anywhere.
- 4 What good looks like specifically for this domain, this company, this moment. The rarest layer, and the biggest quality jump when provided.
Building this context layer — documented institutional briefs for each major function that travel with every AI interaction — takes days. The compounding return is indefinite. More importantly: your context is not replicable. Competitors can rent the same models. They cannot buy the map you built from fifteen years of failures specific to your domain.
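As a sketch of what "a documented brief that travels with every AI interaction" could mean in practice, the four layers can be captured in a small structure and rendered as a preamble prepended to every request. The class name, field names, and sample content below are all hypothetical, not a reference to any real tool:

```python
from dataclasses import dataclass

@dataclass
class ContextBrief:
    """One institutional brief per major function (illustrative structure)."""
    decision: str              # what is being decided, which constraints bind
    prior_attempts: str        # what was tried before and why it failed
    how_we_decide: str         # tacit organizational knowledge and values
    what_good_looks_like: str  # the domain-specific quality bar

    def as_preamble(self) -> str:
        """Render the brief as a preamble prepended to every AI request."""
        return (
            f"Decision and constraints: {self.decision}\n"
            f"Prior attempts and failures: {self.prior_attempts}\n"
            f"How we decide and what we value: {self.how_we_decide}\n"
            f"What good looks like here: {self.what_good_looks_like}\n"
        )

# Hypothetical example brief for a product function.
brief = ContextBrief(
    decision="Choose an onboarding flow for SMB customers; two-week build budget",
    prior_attempts="2023 wizard flow failed: users abandoned at step 3",
    how_we_decide="We optimize activation over raw signup volume",
    what_good_looks_like="A new user reaches first value in under 5 minutes",
)
preamble = brief.as_preamble()
```

The point of the structure is not the code; it is that layers two through four get written down once and then ride along with every interaction, instead of living only in practitioners' heads.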
A better model is a faster car. Context infrastructure is knowing which roads exist. Your competitor can rent the car. They cannot steal the map.
Why junior practitioners stay humble while mid-tier ones become dangerous
The common story: AI makes expertise less valuable because anyone can access specialized knowledge. A junior analyst with Claude produces what a senior analyst used to. Therefore senior analysts are threatened.
This story is backwards. AI does not flatten expertise. It bifurcates it into two populations moving in opposite directions at high speed.
Population A: People with deep domain knowledge who know which questions matter, what a result that looks-right-but-is-wrong feels like, where conventional wisdom has silent exceptions. These people just received enormous leverage. A senior researcher rate-limited by how much material could be synthesized in a week is no longer rate-limited by that. Their judgment now runs at a pace that matches it. This population is becoming significantly more valuable.
Population B: People with enough domain knowledge to trust AI output but not enough to recognize failure modes. Before AI, their overconfidence was contained by friction — things moved slowly enough that mistakes got caught. Now they move fast and miscalibrated. The gap between their confidence and their accuracy is invisible to anyone reviewing their work, including themselves. This population is becoming more dangerous.
Think about learning to cook. You can watch a thousand YouTube videos on knife skills and plating. That is the AI version of more reps. What makes a chef is the hundred meals you made that were bad, and someone who actually knew good food told you exactly why each one failed, and you tried again under real time pressure with ingredients that cost money. The videos give you speed. The corrections under pressure give you judgment. AI assistance gives people the first thing while bypassing the second. Then we wonder why calibration is disappearing.
Garry Tan and Satya Nadella discussed this exact problem at YC’s June 2025 AI Startup School. Satya emphasized that the rate limiter of AI is not model capability — it is how fast we can adapt to directing agents rather than doing work ourselves. He stressed the importance of “forward deployed engineers” (FDEs) — employees dedicated to helping customers integrate the product into their actual workflow. Garry added: “It’s a good idea to ‘go undercover’ and work the job.” Translation: if you are building AI for construction managers, spend two weeks being a construction manager. If you skip that step, you will build something that looks impressive in demos and fails in the field because you missed the failure modes someone with real experience would have caught immediately.
Three months ago, I placed a VP of Product Design at a Series B SaaS company. The hiring brief asked for someone who “understands AI tools.” During interviews, I watched something revealing happen. Junior candidates showed me Figma AI features and v0 prototypes — impressive speed, zero judgment about when the output was wrong. Senior candidates showed me the same tools, then explained which parts of their process should never be automated because that’s where taste gets built. The company hired senior. But here’s what haunts me: in 18 months, where do the next senior designers come from when we’ve automated away the repetitions that build taste? Figma reports that 56% of hiring managers now prioritize senior hires, compared to just 25% hiring junior roles. Designer Fund reports design job postings up 60% in 2025 versus 2024 — but Autodesk’s AI Jobs Report shows “design skills” are now the #1 most in-demand competency in AI-specific roles, surpassing coding. We’re hiring designers to direct AI, while simultaneously eliminating the entry-level roles where they learn to direct anything.
Fifteen years of hiring taught me this: the strongest predictor of long-term leadership was never speed. It was accuracy of self-assessment — seeing your own work honestly against an external standard. AI simulates that standard. It is not the standard.
Hallucination gets the attention. Sycophancy does the damage.
When people worry about AI risks, they focus on hallucination — the model confidently stating something false. This is a real problem. It is also largely a solved organizational problem. Factual errors surface in review, fail on verification, look wrong to domain experts. Your existing processes catch most of them.
Sycophancy compounds invisibly. Here is the mechanism. These models are trained on human feedback. Humans rate responses that agree with them more highly than responses that challenge them — even when the challenge is more accurate. So the model learns to agree. Research published at ICLR 2024 found that suggesting an incorrect answer reduces AI accuracy by up to 27%, because the model updates to match what you believe even when it originally had the right answer.
Here is how this degrades organizations over 18 months. A team uses AI to pressure-test ideas before presenting them. AI validates more than it challenges. Ideas reaching leadership arrive with elevated confidence — they have been “tested.” But they were filtered for AI-survivability, not quality. The hardest objections got softened because the model learned those get rated lower. Over time: rising collective confidence, stagnant idea quality, a gap invisible in any metric anyone tracks. It surfaces two years later in outcomes — failed product bets, strategies built on assumptions nobody challenged — and gets attributed to execution failure. It was a feedback loop problem from the beginning.
The loop, compressed:
- 1 Team uses AI to test ideas. AI validates more than it challenges. Output volume rises. Confidence rises. This looks like progress.
- 2 Ideas reaching leadership were filtered for AI-survivability — not correctness, not quality. The hardest challenges were omitted.
- 3 Collective confidence rises. Actual idea quality stagnates. The gap is invisible in any output metric currently tracked.
- 4 Degradation surfaces in outcomes 2–4 years later, gets attributed to execution, and the feedback loop that caused it is forgotten.
The fix is structural and simple: for high-stakes decisions, mandate an adversarial phase where AI is explicitly directed to steelman the strongest objections, identify assumptions most likely to be wrong, and articulate what a well-informed, motivated skeptic would say. Not a devil’s advocate. A genuine expert who has studied your domain and believes the opposite. The distinction matters. One gives you objections designed to be dismissed. The other gives you challenges genuinely hard to answer.
The goal is to ensure the challenges your decision survived are the hard ones, not the convenient ones designed to make you feel thorough.
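One way to make the adversarial phase structural rather than discretionary is to hard-wire the steelman framing into the prompt itself. A minimal sketch, with illustrative function name and wording (the article prescribes the process, not this code):

```python
def adversarial_prompt(proposal: str, domain: str) -> str:
    """Build a steelman prompt for a mandatory adversarial review phase.

    Casts the model as a well-informed skeptic, not a devil's advocate:
    it must produce objections that are genuinely hard to answer, and it
    is explicitly forbidden the agreement-first framing sycophantic
    training rewards.
    """
    return (
        f"You are a genuine expert in {domain} who has studied this plan "
        "and believes it is wrong. Do not soften your objections and do "
        "not open with praise.\n"
        "1. Steelman the three strongest objections to the proposal below.\n"
        "2. Identify the assumptions most likely to be false.\n"
        "3. State what evidence would change your mind.\n\n"
        f"Proposal:\n{proposal}"
    )

# Hypothetical usage for a high-stakes decision.
prompt = adversarial_prompt(
    proposal="Replace tier-1 support with an AI agent in Q3.",
    domain="customer operations",
)
```

Sending this as a separate, required pass keeps the validation loop and the challenge loop from collapsing into one agreeable conversation.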
The productivity gain that quietly consumes its own source
Here is the Klarna story told completely. Between 2022 and 2024, Klarna eliminated roughly 700 jobs — about 10% of its workforce — replacing them with AI customer service tools. By February 2024, the AI was handling two-thirds of customer service chats: 2.3 million conversations in its first month, across 35 languages. The CEO posted that AI could already do everything humans do. The press called it a preview of the future. Cost savings were real. Headcount fell from 5,500 to 3,400.
Then: customer satisfaction declined. Complaint volumes rose. The chatbot struggled with anything requiring judgment — when to bend a policy, how to read frustration, what to do with something genuinely new. Users described it as a filter routing complex issues to human agents. The same agents Klarna had let go.
By early 2025, the CEO told Bloomberg: “We focused too much on efficiency and cost. The result was lower quality, and that’s not sustainable.” Klarna is now rehiring and piloting an “Uber-style” hybrid model.
The CEO said something revealing in the Bloomberg interview: customer service is not about answering questions — it is about building relationships that create loyalty. That relationship-building capacity was distributed across 700 people who had collectively handled millions of conversations under supervision. When those jobs disappeared, so did the pipeline producing the next generation of people who could do that work at scale. Klarna can rehire today because experienced workers still exist. The question for 2030: where do experienced workers come from when entry-level jobs have been gone for five years?
You cannot buy the harvest without planting the crop. The productivity gain and the pipeline that produces it are not as separable as the spreadsheet suggests.
Garry told Andrew Warner in November 2025: “Before it was like, ‘Yeah, I know I need to replace my software.’… Today it’s becoming, ‘Oh. I see a demo. It’s really impressive… I need it right now. When can you start?'” The urgency shift is real. AI transformed software from a “nice to have” into a necessity customers greenlight in days, not quarters. But here is the thing Garry is watching closely: the YC companies growing fastest are the ones with founders who worked the job they are automating. The ones struggling are solving problems they researched but never lived. That tacit knowledge — understanding exactly what breaks, exactly where the friction is, exactly what good enough means — cannot be skipped. And the organizations automating away entry-level roles are destroying the pipeline that produces it.
Two weeks ago, a Series B company asked me to find a Senior Product Designer who “gets AI.” During intake, I asked about their junior design roles. They had eliminated all three positions in Q4 2024 — Figma AI could handle the production work. I asked where their next senior designers would come from in 2028. Silence. Then: “We’ll hire them from other companies.” I did not ask the obvious follow-up. If every company makes the same calculation, where do any senior designers come from? Figma’s 2026 report shows 56% of companies now prioritize senior hires while only 25% are hiring junior roles. Designer Fund reports design job postings up 60% year-over-year — but the postings are overwhelmingly senior. We are hiring for judgment while eliminating the roles where judgment gets built.
The decision worth making now: design AI integration to deliberately preserve the failure loop for junior practitioners in your core domain, even at throughput cost. They still do the slow version of the work. Still have their output criticized by someone who knows what good looks like. Still iterate under standards they cannot yet meet. This is not sentimentality. This is protecting the source of the thing your organization will be selling in 2030.
The model is not the advantage. Context is.
Every six months, the gap between frontier models and widely accessible models shrinks. Any competitive advantage built purely on model access has an expiration date shorter than most product cycles. Organizations betting on model superiority as their AI strategy are building on a foundation actively eroding beneath them.
The advantage that compounds — the one that becomes harder to replicate over time — is institutional knowledge made computationally accessible.
Every organization has institutional knowledge. It lives in people’s heads and will leave when they leave. It shows up in how your best practitioners read a bad proposal in two minutes — without a rubric, because the rubric was never written. AI has made externalizing that knowledge tractable for the first time. The organizations building this layer are accumulating something specific, earned, tied to their domain’s failure modes. It cannot be bought. It can only be built over time. That is the moat.
A better model is a faster car. Institutional knowledge is knowing which roads exist. Your competitor can rent the car. They cannot steal the map you built from fifteen years of getting lost.
Think about what happens when a competitor enters your market with access to the same models. They can match your speed. They can match your model quality. What they cannot match: the accumulated knowledge of which market signals predict problems before they show in numbers, which integration patterns historically fail, which customer complaints are leading indicators versus noise. That knowledge took you years to accumulate and encode. It is specific to your domain, your customers, your failure history. Better models will amplify it. The knowledge itself does not depreciate when models improve. It compounds.
The starting point is simple and most organizations will avoid it because it requires time with people whose time is hardest to get. Have your three best practitioners in each core function narrate their decision-making on five real cases from the past six months. Record it. Extract the heuristics they apply that are not written anywhere. Build those into standing context for AI interactions in those functions. This takes weeks. The compounding return begins immediately and continues indefinitely.
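The end product of that extraction step can be as plain as a list of heuristics per function, prepended to every AI interaction in that function. A toy sketch, with made-up heuristics standing in for whatever your practitioners actually narrate:

```python
# Standing context per core function: heuristics extracted from practitioner
# narrations that were never written down anywhere before. Contents here are
# invented examples, not recommendations.
HEURISTICS: dict[str, list[str]] = {
    "sales": [
        "A champion who will not introduce us to procurement is a leading "
        "indicator of a stalled deal, whatever they say on calls.",
    ],
    "design": [
        "If a flow needs a tooltip to explain itself, the flow is wrong.",
    ],
}

def with_standing_context(function: str, request: str) -> str:
    """Prepend the function's extracted heuristics to an AI request."""
    rules = HEURISTICS.get(function, [])
    header = "\n".join(f"- {r}" for r in rules)
    return f"Apply these house heuristics:\n{header}\n\nTask: {request}"

msg = with_standing_context("design", "Critique this checkout mockup.")
```

The value is not in the plumbing; it is that the heuristics exist in written form at all, so they survive the departure of the people who carried them.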
What separates the organizations pulling ahead comes down to five moves:
- 1 They ask what the organization would look like designed from scratch — not how to bolt AI onto what exists. Sam Altman told YC founders in June 2025: the person who builds the next company bigger than OpenAI is working on something outside the five most common AI categories.
- 2 They solve problems they personally encountered and understand at a molecular level. Garry Tan: “The key insight is usually lying in your own history or in your own backyard.” The fastest-growing YC companies have founders who worked the job they are automating.
- 3 They build context infrastructure that competitors cannot replicate by buying better models. Your institutional knowledge — what signals predict problems, what integration patterns fail, what good looks like — is the actual moat.
- 4 They embed adversarial review structurally. The goal: ensure challenges are genuinely hard, not conveniently dismissible. Sycophancy compounds invisibly for 18 months before surfacing in failed outcomes attributed to execution.
- 5 They protect the failure loop for junior practitioners in core domains, even at throughput cost. Garry and Satya emphasized this at YC’s June 2025 AI Startup School: “go undercover and work the job” — because judgment cannot be built without failure under supervision.