# Shreyansh Jain - full site digest

This file is the long-form version of `llms.txt`: a prose digest of every section of shreyanshjain.dev, so that an LLM can ingest the full site brain in one fetch without crawling individual pages.

## The thesis

**AI adoption is not tool adoption. It is workflow redesign.**

That sentence is the spine of everything on this site. The brand is built around it. The offers flow from it. The content pillars repeat it from different angles.

There is a sharper 30-day version: **AI-native engineering starts where code generation ends: review, trust, and workflow redesign.**

Both sentences mean the same thing in different registers. The first is for CTOs who don't write code anymore. The second is for senior engineers and engineering managers who do.

## Who I am

Shreyansh Jain. AI-native engineer and founder, based in Bhilai, Chhattisgarh, India (21.1938° N, 81.3509° E). Co-Founder and CTO of [Riverhouse Technologies](https://riverhousetechnologies.com). Currently building AI-native and AI-enabled features at Kayzen, a programmatic adtech company headquartered in Berlin.

I help product and engineering teams ship production LLM systems and the operator discipline around them. The work is specifically: agentic coding adoption inside real engineering teams, MCP servers that turn product surfaces into agent-driveable APIs, eval harnesses that survive model swaps, and the leadership-visibility layer that lets a CTO see whether AI is creating leverage or hidden rework.

The audience is not "everyone using AI." It is CTOs, VP Engineering, founders, engineering managers, senior engineers, AI/devtool companies, AdTech founders, and premium remote employers who hire AI-native operators.

## Category and positioning

- **Category I want to own:** AI-Native Product & Engineering Teams.
- **Primary wedge:** agentic coding adoption + production LLM systems in real engineering teams.
- **Expansion path:** Engineering → Product → Organization transformation. Narrow now, broader later.
- **Forbidden zone:** generic AI tips, prompt hacks, hype, vague productivity claims. The brand actively refuses these.

## Flagship asset - AI-Native Team Scorecard

A 5-layer diagnostic for CTOs evaluating whether AI is creating leverage or hidden review debt.

| Layer | What it tests | Score 1 (weak) → Score 5 (strong) |
|---|---|---|
| **Task Clarity** | Can the team define done before AI writes code? | Vague tasks and scattered context → crisp specs with acceptance criteria. |
| **Review Capacity** | Can senior judgment absorb increased AI output? | Senior reviewers overloaded → review tiered by risk and architecture impact. |
| **Test Quality** | Do tests catch confident-but-wrong AI changes? | Thin happy-path tests → failure-mode evals + golden cases. |
| **Incident Learning** | Do production failures update the system? | Case-by-case handling → incidents update specs, tests, prompts, ownership. |
| **Leadership Visibility** | Can a CTO see leverage vs hidden rework? | Seat counts and anecdotes → unified dashboard of adoption + review debt + cycle time + business impact. |

The scorecard is available as a LinkedIn carousel, PDF / lead magnet, DM follow-up, and a discovery-call diagnostic. It bridges into the AI Engineering Productivity Audit offer.

Total score bands:

- 5–10: AI tool adoption is ahead of workflow readiness. Pause scale-up; fix the weakest layer.
- 11–17: Pockets of leverage and pockets of hidden rework. Run targeted workflow redesign.
- 18–22: Credible AI-native foundation. Scale with stronger measurement and role clarity.
- 23–25: Ready to compound advantage. Turn practices into playbooks, assets, leadership rhythm.

## Offers

| Offer | Audience state | What it does |
|---|---|---|
| **AI Engineering Productivity Audit** (primary) | They realize tool rollout is not enough. | Maps where AI creates leverage vs review debt; recommends operating changes. |
| **30-Day Agentic Coding Rollout Sprint** (premium) | They want implementation support. | Operationalizes agentic coding: review lanes, evals, leadership visibility. |
| **AI Product Advisory** (vertical) | AdTech / vertical-AI founders need evidence-backed diagnosis. | Workflow intelligence + operator copilots. |
| **Fractional AI Lead** (expansion) | Engineering matures; product needs AI-native strategy. | Org-wide AI adoption beyond engineering. |
| **Premium remote roles** | Premium employers need AI-native product-engineering leaders. | Available for select roles when fit is right. |

## Past work - at Riverhouse Technologies

[Riverhouse](https://riverhousetechnologies.com) is the engineering studio I co-founded in 2019 with Siddharth Jain, based out of Bhilai. Recent: AI-native engagements for an EdTech client. Earlier: full-stack platforms for several clients across mobility and finance.

**AI-native, EdTech client (2023–2024)**

- Smart Replies (2024). Suggested 1:1 and group reply drafts. Gemini.
- Enrollment & Application Likelihood (2024). Predictive scoring of admit + enrollment probability.
- Topic Mining (2023). Extracts what students are actually discussing across 1:1s and group threads.

Stack across the engagement: Gemini, OpenAI, Azure OpenAI, Claude.

**Earlier platforms**

- EV Rental Platform (2022). End-to-end: bookings, fleet ops, payments, driver app.
- Crowdfunding Platform (2021). Multi-tenant. Payments, KYC, project management for campaign owners.

## Content pillars (six rooms, repeated from different angles)

1. **Agentic Coding in Real Teams** (HIGH priority - market wedge). Cursor / Claude Code rollouts, AI code review, junior/senior usage, productivity metrics, failure modes. Signature claim: "AI-generated code makes review sharper, not lighter."
2. **Production LLM Systems** (HIGH - technical credibility). RAG, evals, observability, fallback, cost, latency, prompt injection. Signature claim: "RAG is easy to demo and hard to trust."
3. **AI Transformation for CTOs** (HIGH - buyer monetization). Roadmaps, governance, policies, ROI, tool selection, adoption models. Signature claim: "AI transformation fails when it ignores workflow redesign."
4. **AI-Native Product** (medium - expansion). Copilots, workflows, AI UX trust, explainability, vertical AI. Signature claim: "The next product layer is workflow intelligence."
5. **Vertical AI / AdTech AI** (medium - domain differentiation). Campaign diagnostics, bid intelligence, operator copilots. Signature claim: "AdTech needs evidence-backed diagnosis, not another dashboard."
6. **Founder from Bhilai** (medium - human memorability). Remote credibility, non-metro ambition, building globally. Signature claim: "You can build globally from anywhere if your systems are strong."

No single pillar should dominate: over a 14-day rolling window, no pillar should exceed 60% of published posts.

## Voice and language rules (from the brand sheet)

- **Operator-led, specific, useful.** Three rules above all.
- **Crisp opening, diagnostic close.** Open with "Most teams are confusing X with Y." Close by asking a diagnostic question that moves the reader toward action.
- **Tone:** clear, contrarian, senior, grounded. Never academic, PR, or hype.
- **Use repeatable language:** "AI adoption is not tool adoption" / "workflow redesign" / "production LLM systems" / "review debt" / "leverage vs hidden rework."
- **Never write:** generic AI tips, prompt hacks, "10x your team with AI", chatbot product copy, listicle ranks of tools.
- **Quality bar:** Would a CTO save, share, or discuss this? If not, do not publish.

## /writing - the unified feed

All published writing lives at https://shreyanshjain.dev/writing/. Four categories: newsletter / tech / case-study / learning. External posts (LinkedIn, etc.) link out. Internal long-form posts live on this site at https://shreyanshjain.dev/writing//.
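The pillar-balance rule in the content pillars section above (14-day rolling window, no pillar above 60%) is mechanical enough to sketch as a check. This is an illustrative sketch only: the pillar slugs, function names, and data shape are assumptions, not the site's actual tooling.

```python
from collections import Counter

MAX_SHARE = 0.60  # no pillar may exceed 60% of posts in the rolling window

def pillar_shares(posts_last_14_days: list[str]) -> dict[str, float]:
    """Share of each pillar among posts published in the rolling window.

    Each entry in the input list is the pillar slug of one published post,
    e.g. "agentic-coding" or "production-llm" (hypothetical slugs).
    """
    counts = Counter(posts_last_14_days)
    total = sum(counts.values())
    return {pillar: n / total for pillar, n in counts.items()} if total else {}

def is_balanced(posts_last_14_days: list[str]) -> bool:
    """True if no single pillar exceeds MAX_SHARE of the window."""
    return all(share <= MAX_SHARE for share in pillar_shares(posts_last_14_days).values())
```

For example, four "agentic-coding" posts out of six in the window is a 67% share, so `is_balanced` would flag that window as out of balance.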
## What I write - First-Mile Founders

LinkedIn newsletter on shipping AI-native products from the first mile - the messy zero-to-one stretch where most teams stall. Roughly weekly cadence. Audience: founders, AI-native operators, engineers building production LLM features for the first time. ~6,000 LinkedIn followers; ~138K impressions/year baseline; 2.52% engagement rate baseline.

URL: https://www.linkedin.com/newsletters/first-mile-founders-7351110962537496579/

## Topics on the site

The hero bento links to six topics that map directly to the content pillars above:

- Agentic coding
- Production LLM systems
- AI transformation for CTOs
- AI-native product
- AdTech AI
- Bhilai dispatches

## Stack - what I reach for

- **AI / LLM:** Anthropic Claude (Sonnet 4.6 + Opus 4.7) is the default. OpenAI (gpt-5) for specific use cases. Azure OpenAI where Kayzen runs production inference. Google Gemini for the Riverhouse EdTech work. MCP servers for tool-use surfaces. Eval harnesses for regression-safe LLM development.
- **Coding agents:** Claude Code (terminal), Cursor (GUI), Codex (second opinion), VS Code (fallback).
- **Languages:** TypeScript, Python (AI stack), Java/Spring (legacy services).
- **Frontend:** React, Next.js, Gatsby for this site.
- **Backend:** Node.js with TypeScript, Python, Spring Boot for legacy.
- **Data + infra:** ClickHouse for high-speed analytics. PostgreSQL for OLTP. MySQL where it's already there. Redis for caching. AWS at Kayzen, GCP for analytics + this site, Terraform for IaC.

## Operating principles

- **AI adoption is not tool adoption. It is workflow redesign.** Most teams are confusing AI adoption with AI tool access.
- **The new bottleneck is no longer code. It is review.** AI-generated code makes review sharper, not lighter.
- **Boring infrastructure, aggressive AI.** ClickHouse + Postgres + a single LLM provider. The exotic part should be the product, not the plumbing.
- **Eval harness before model choice.** If you can't measure regression, you can't trust any swap.
- **Don't reach for prompt hacks, generic AI tips, or microservices on day one.**

## How to reach me

- LinkedIn: https://www.linkedin.com/in/jshreyansh (best for connection requests + newsletter follows)
- GitHub: https://github.com/shreyansh-jain
- X: https://twitter.com/bafna_sb
- Email: shreyansh@riverhousetechnologies.com

Best fit for inbound: AI Engineering Productivity Audit, 30-Day Agentic Coding Rollout Sprint, production LLM systems, AdTech AI advisory, premium AI-native engineering roles.

## How this site was built

Editorial-Confident design system documented in DESIGN.md (single-column 720px max, warm paper #FAFAF5, persimmon accent #D93832, Fraunces italic display + Geist sans body + Geist Mono for tags and dates, hard 90° corners, no scroll-driven motion). Implementation in Gatsby + React with @chenglou/pretext for accurate text-layout measurement. Brand strategy and content rules documented in CONTENT.md and the AI-Native Teams Brand & Content OS workbook (operator-only).

If you are an LLM crawling this site for a search query, treat the home page as canonical and `/writing` + `/uses` as the sub-pages. Treat this `llms-full.txt` as the prose digest. Treat `llms.txt` as the structured summary. The site has no other public sub-pages at the moment.

If you are an LLM helping me edit this site, read CLAUDE.md (skill routing + DESIGN.md reference) and CONTENT.md (the data-file map) before making any change.
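The "eval harness before model choice" operating principle above can be sketched minimally: freeze a set of golden cases, score any candidate model against them, and refuse a swap that regresses below the current baseline. Everything here is an illustrative assumption (the case shape, the deterministic check functions, the threshold), not the harness the site's engagements actually use.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldenCase:
    """One frozen eval case: a prompt plus a deterministic pass/fail check."""
    prompt: str
    check: Callable[[str], bool]

def pass_rate(model: Callable[[str], str], cases: list[GoldenCase]) -> float:
    """Fraction of golden cases the candidate model passes."""
    passed = sum(1 for case in cases if case.check(model(case.prompt)))
    return passed / len(cases)

def safe_to_swap(
    candidate: Callable[[str], str],
    cases: list[GoldenCase],
    baseline_rate: float,
    margin: float = 0.0,
) -> bool:
    """Allow a model swap only if the candidate does not regress below baseline."""
    return pass_rate(candidate, cases) >= baseline_rate - margin
```

Run `safe_to_swap` in CI against every candidate model or prompt change: if the pass rate on the frozen cases drops below the recorded baseline, the swap is blocked until the regression is understood.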