Coding with AI Knowledge Sharing Community
The Spark
“I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There’s a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.”
— Andrej Karpathy, December 2025
Purpose
LLM-based coding tools are evolving rapidly and reshaping how we work as developers, a shift comparable to compilers, personal computers, and cloud computing. We are all newcomers in this space and can benefit from each other’s learnings. This knowledge sharing community exists to accelerate our collective understanding, promote adoption of AI coding tools, and thereby increase PDE’s impact on Beyond’s growth.
This transformation spans multiple dimensions:
- Foundation models and their harness: agents.md, skills, commands, sub-agents, system prompts, tool definitions, …
- Tooling: IDEs (Cursor, Zed, neovim), CLIs (Claude Code, opencode), MCP servers, browser extensions, …
- Ecosystems, observability, DevX, and impact on traditional CI/CD: LangChain, LangGraph, FastMCP, MCP, A2A, OpenAI Agents SDK, Anthropic tool use, Evals, LangSmith, …
- LLM shortcomings: Context rot, speed vs accuracy vs cost tradeoffs, hallucinations, non-determinism, reasoning failures, token limits, …
- Coding patterns to work around shortcomings: Context management/offload/pruning/compaction, orchestrator/sub-agents, producer/judge patterns, agent loops, structured outputs, …
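As a taste of what these patterns look like in practice, here is a minimal sketch of the producer/judge pattern: one model call drafts an answer, a second call scores it, and we retry until the judge approves or attempts run out. The `producer` and `judge` callables are hypothetical stand-ins for real LLM calls; the stubbed versions below exist only to make the control flow concrete.

```python
# Minimal sketch of the producer/judge pattern (hypothetical stand-ins for
# real LLM calls). One call drafts, a second call evaluates, and we retry
# until the judge accepts or we hit the attempt limit.
from typing import Callable

def produce_with_judge(
    task: str,
    producer: Callable[[str], str],
    judge: Callable[[str, str], bool],
    max_attempts: int = 3,
) -> str:
    """Return the first draft the judge accepts, or the last attempt."""
    draft = ""
    for _ in range(max_attempts):
        draft = producer(task)
        if judge(task, draft):  # judge sees both the task and the draft
            return draft
    return draft  # fall back to the final attempt

# Stubbed "models" for illustration: the producer improves on each attempt,
# and the judge only accepts drafts that include unit tests.
attempts = iter(["code without any checks", "code with unit tests"])
result = produce_with_judge(
    "write a sorted() wrapper",
    producer=lambda task: next(attempts),
    judge=lambda task, draft: "unit tests" in draft,
)
print(result)  # → "code with unit tests"
```

In a real setup the judge prompt would ask for a structured verdict (e.g. JSON with a pass/fail field), which connects this pattern to the structured-outputs item above.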
In Scope
AI tooling, patterns, and best practices for development. This includes experimenting with new tools, sharing what works (and what doesn’t), and developing shared mental models for this new layer of abstraction.
Evolution of our engineering culture and DevX platform, aligned with our Engineering Strategy. Specifically:
AI-Amplified Engineering Excellence: “AI tooling doesn’t replace engineering judgment, it amplifies experienced engineers’ impact. Our focus on senior talent retention and AI enhancement creates multiplicative productivity gains that competitors focused primarily on cost optimization cannot achieve.”
Productivity Investment: “We encourage teams to adopt tooling and additional infrastructure that enhances any engineer’s productivity, recognizing that the ROI on value delivery is typically very high.”
Cross-Team Knowledge Sharing: “We encourage regular knowledge sharing through PED talks, pairing sessions, and mob programming across OBTs to prevent knowledge silos and accelerate collective learning and problem-solving capabilities.”
LLMs Strategy: “We choose to approach LLMs as powerful tools with untapped potential, but not as magic solutions to every problem. We understand their capabilities and limitations… Given the nascent and rapidly evolving and relatively immature ecosystem, we develop proprietary technology on top of LLMs frameworks foundations to address current shortcomings and maintain competitive advantages.”
Internal AI Tooling Expansion: “We plan to continue exploring AI-assisted development tools… and removing adoption barriers to let adoption happen bottoms up.”
Out of Scope
To keep this group focused on developer tooling and workflows, the following topics are out of scope. These are important areas that could be addressed in other knowledge sharing communities:
In-product GenAI: Frameworks and patterns we use in our product, where GenAI can help achieve roadmap objectives, Neyoba architecture decisions, customer-facing AI features, etc.
LLMs in a business setting: Using LLMs to support business operations, internal MCP servers and agents for non-engineering workflows, AI-powered internal tools for sales, support, or operations teams, etc.
Target Audience
Engineers who want to deepen their GenAI coding skills, and/or who want to share and discuss what has and hasn’t worked for them and why.
Meeting Structure
- Cadence: Once a month to start (frequency TBD based on group feedback)
- Format: Brainstorming and open discussion (not IDS)
- Outputs: Notes and highlights shared via Slack and dev team meeting
- Hosting: Rotating among participants
Slack Channel
#ai-coding
Join to share learnings between sessions, post interesting tools or articles, ask questions, and coordinate meeting times.