2025 Wrapped: Building Leverage

On Direction

I entered this year with a bias toward action. I wanted to build quickly, test ideas in the real world, and learn through iteration rather than planning. Over time, that bias exposed a deeper question: not what I could build, but what actually lasts. As the year progressed, my focus shifted from shipping projects to designing systems, and eventually to choosing the environments that would compound my growth. What follows is less a catalog of work than a record of how my understanding of leverage evolved.

Notable Updates

Velocity First

At the start of 2025, I joined B@B and was immersed in an organization with a deep passion for frontier technology and a strong bias to action. The technical density and culture helped me think more purposefully about building–in contrast to some of my prior research and projects, I found a group of peers that prioritized rapid iteration, and my motto for the first few months became “ship fast and iterate faster.” At a time when I found many different ideas appealing, moving with velocity gave me clarity on which ideas were worth building on and which were dead ends. I focused on heavy experimentation and minimizing attachment, and built the following:

Early Experiments

  • Solar (Repo) – agentic wallet interactions on the Flare Mainnet
  • AI Tokenize (Repo) – tokenization platform for democratized AI model incentive management and access
  • Neptune (Repo, Autotex Repo) – music creation and editing with Meta MusicGen
  • Heavy Weather (Paper, Repo) – PPO-based RL for adaptive portfolio management under crisis regimes
  • Sure (Repo) – your personal automatic fact-checking extension that uses your browsing activity to know what you mean to say

From Projects to Systems

After enough projects, a pattern started to emerge. The code worked, the ideas were interesting, and the iterations were fast–but the impact was fragile. Many of the areas that interested me had problem spaces that lived not just inside a codebase but across the interfaces between people, incentives, and memory. So I began to shift away from isolated execution and toward designing structures that make momentum durable.

Emergent

One of the first primitives that interested me was signal—SF’s favorite buzzword. A friend and I became interested in building systems that could self-select for low noise and automatic alignment, which led us to build a platform designed to create collisions between great people and enable them to build together: the idea that became alpha.me.

Alpha’s core concept was proof-of-conviction. To join, you needed someone with signal—someone who had built something meaningful—to vouch for you. The goal wasn’t to tokenize people or attach financial weight to relationships, but to surface lightweight attestations of value: a digital version of “I’d take a coffee, lunch, or dinner with this person.” Super-connectors often have informal systems to track their relationships, but at a certain scale it becomes impossible to hold all of that context in your head. Cory Levy from Z Fellows once mentioned that he meets hundreds of people a month who would likely work well together, but short of putting everyone into a group chat, there was no good way to create meaningful collisions. Alpha aimed to broadcast that kind of implicit approval outward—making curated networks visible and actionable, without forcing people to play the usual social media or publicity games.
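The proof-of-conviction mechanic above can be sketched as a tiny admission gate: membership is granted only through an attestation from someone already inside. Everything here–the `Network` class, the names, the admit-on-vouch rule–is a hypothetical illustration, not Alpha's actual implementation:

```python
# Hypothetical sketch of a proof-of-conviction gate: you can only join
# if an existing member vouches for you. Illustrative only.

class Network:
    def __init__(self, seed_members):
        self.members = set(seed_members)  # people already inside
        self.vouches = []                 # (voucher, candidate) attestations

    def vouch(self, voucher, candidate):
        """Record a lightweight attestation: 'I'd take a coffee with this person.'"""
        if voucher not in self.members:
            raise ValueError(f"{voucher} has no standing to vouch")
        self.vouches.append((voucher, candidate))
        self.members.add(candidate)  # the attestation admits the candidate

net = Network({"ada"})
net.vouch("ada", "grace")      # ada vouches grace in
net.vouch("grace", "alan")     # newly admitted members can vouch too
# net.vouch("mallory", "eve")  # would raise: mallory was never admitted
```

The cold-start problem is visible even in this toy: with an empty or weak seed set, no attestations can be issued, so the graph never grows.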

The problem we quickly ran into was cold start. Without a strong initial cohort of highly credible participants, the platform didn’t generate enough value for either side. More importantly, it wasn’t a product that benefited from scaling laws—growing it too quickly would have actively destroyed the signal it depended on. That realization sent us back to the drawing board to rethink how collisions should work in an AI-native world.

Our next iteration was Connectus, a tool designed to help users tap into their latent network. The assumption was simple: most professionals already know someone who can help them—they just don’t know that person is relevant right now. Connectus performed a deep search over calendars and email to identify relationships, encode interactions, and construct a social graph that surfaced the most relevant connections for a given task. As a representation mechanism, it exceeded expectations: within our small team, we generated over 50,000 profiles, with clusters emerging that mapped cleanly onto real organizations like B@B, accelerator cohorts, and M.E.T. Surprisingly, the system handled retrieval better than anticipated with a simple traversal heuristic into a vector space where RAG was effective1. The harder problem turned out to be identity itself—building rich profiles from sparse signals like email metadata, where even resolving names and roles reliably is nontrivial.
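The traverse-then-retrieve idea can be sketched in a few lines: restrict candidates to the user's k-hop graph neighborhood, then rank that small set by embedding similarity. The toy graph, two-dimensional "embeddings", and function names below are illustrative assumptions, not Connectus's actual pipeline:

```python
import math
from collections import deque

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def within_hops(graph, start, max_hops):
    """BFS: everyone reachable within max_hops of the user."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    seen.discard(start)
    return seen

def retrieve(graph, profiles, user, query_vec, max_hops=2, k=1):
    """Traverse first, then rank the reachable neighborhood by similarity."""
    candidates = within_hops(graph, user, max_hops)
    ranked = sorted(candidates, key=lambda p: cosine(profiles[p], query_vec), reverse=True)
    return ranked[:k]

# Toy graph and toy profile embeddings
graph = {"me": ["ana"], "ana": ["raj", "lee"], "raj": [], "lee": []}
profiles = {"ana": [1, 0], "raj": [0, 1], "lee": [0.9, 0.4]}
print(retrieve(graph, profiles, "me", [0, 1]))  # → ['raj']
```

The appeal of this shape is that the graph traversal keeps the candidate set small and socially plausible before any vector math runs, which is one reason a simple heuristic can outperform global nearest-neighbor search.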

What stood out most about Connectus was how much structure it could uncover from minimal input. That pushed our thinking toward what knowledge-graph ontologies might achieve if you “turned the faucet on.” Each of us had firsthand experience with coordination failures at scale—whether across time zones, teams, or high-stakes operational environments. Around the same time, I was working on RL post-training for VLAs and noticed a parallel: despite computer use and robotics sharing similar long-horizon structure, computer-use training lagged significantly, even though high-quality data was far easier to obtain. The common bottleneck was context. Empirically, it showed up everywhere: 30% of an employee’s day spent searching for context, 48% reporting fragmented work, and 95% of leaders lacking confidence in organizational productivity.

This became the thesis behind Emergent. If we could build a system that understood how people actually work on their computers—and construct an ontology over that behavior across teams—the use cases would emerge naturally. Early iterations focused on a shared knowledge base, but the product quickly evolved into a unified context layer. Humans still rely on brittle, outdated communication tools to align on work, and AI systems are even more constrained, forced to infer intent from incomplete data. No system today can reliably infer how, or more importantly why, work happens on a computer.

The current product is a desktop application that captures keystrokes, mouse activity, and screen content, combining a custom VLM with a context ontology to synchronize insight across teams. On top of that, we’ve built an agent layer that identifies repeated workflows and generates lightweight automations—closer to autocomplete for computer use than full autonomy. One challenge we’ve encountered is that off-the-shelf computer-use agents struggle with long-horizon, multi-step tasks; fully realizing the space will likely require custom training. It’s been exciting to see early pilots take shape, and I’m curious how much further this idea can be pushed over the next year.
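One simple way to surface repeated workflows, in the spirit of the agent layer described above, is to count repeated n-grams over a stream of user actions and propose the recurring ones as automation candidates. The event names and fixed-window approach below are a hypothetical sketch, not the product's actual detector:

```python
from collections import Counter

def frequent_workflows(events, n=3, min_count=2):
    """Count every length-n window of actions; keep the ones that repeat."""
    grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
    return [(gram, count) for gram, count in grams.most_common() if count >= min_count]

# Toy event stream: the user exports a report the same way twice
events = [
    "open_sheet", "select_range", "export_csv",
    "check_email",
    "open_sheet", "select_range", "export_csv",
]
for gram, count in frequent_workflows(events):
    print(f"seen {count}x: {' -> '.join(gram)}")  # → seen 2x: open_sheet -> select_range -> export_csv
```

A real system would need variable-length patterns and tolerance for noise between steps, which is where the "autocomplete for computer use" framing gets hard–but the core signal is just repetition in the action stream.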

BWCM

I’ve come to believe that good financial discipline includes actively investing a portion of your earnings into ideas you genuinely understand. Index funds and ETFs are responsible defaults, but everyone has some form of personal alpha—and leaving it untapped is its own kind of risk. My own exposure sits squarely at the intersection of technology, venture, and markets adjacent to them. I don’t live and breathe industrials, energy, or pharma, and while I may have surface-level insights into those domains, I’m rarely the most leveraged person to capitalize on them through intelligent trading.

Through M.E.T. and Blockchain at Berkeley, however, I’ve been fortunate to build close relationships with people who do live and breathe equity research, DeFi markets, and quantitative strategy—many of whom have been operating successfully on their own for years. I initially approached them to see if they could help manage my personal portfolio, but it quickly became clear that working together allowed us to systematize these insights and bring them to market in a more durable way.

In September, five associates and I founded Blackwell Capital Management (BWCM), an investment fund that maintains a concentrated traditional markets portfolio alongside a delta-neutral DeFi strategy. As Managing Partner, my focus has been on structuring the fund and leading our first raise, which brought together friends and family alongside limited institutional participation for a seven-figure initial fund size. The process has been deliberately methodical—marked by extensive legal and operational groundwork—but with our final documents now signed, BWCM enters 2026 positioned to begin its next phase of growth.

Choosing Environments

After spending much of the year thinking about how systems compound–how coordination, trust, and context determine outcomes–it became clear that the same logic applied to me. Progress was no longer a function of how many things I could build, but of which environments I chose to operate inside. The highest-leverage move, then, wasn't starting another project, but placing myself within contexts that enforced rigor, depth, and perspective by default.

Depth

My technical foundation was primarily built through research, and joining BAIR was an opportunity to reenter a professional research environment and explore a domain I've long been interested in–VLAs for humanoid robotics. I've been in RL and robotics for seven years now and have been continually excited to watch the field evolve, so when the opportunity came up to work with Google DeepMind Robotics on post-training humanoid robots to complete challenging real-world tasks, I couldn't pass it up. I'll be writing about my research in more depth in a later article, but I'm very excited to see how our investigation matures over the next year. Some problems are too hard to shortcut or work around, and it's been a pleasure tackling ideas thought to be impossible on the bleeding edge of robotics with my team in the lab.

Perspective

I was introduced to Dorm Room Fund through the glowing referrals of friends in my professional network. Shuban (founder of AnswersAI), Sam (Partner at a16z), and Sara (Thiel Fellow, Ando, Alloy) all spoke incredibly highly of the program and recommended that I apply. Since joining as a Partner, I've continually benefited from the perspective the role provides and the rich community around it. As a builder, investing has given me insight into what works and what it really means to be differentiated. It's also let me step back from the technical rabbit holes I'm always in and explore problem spaces orthogonal to my background, which end up sparking connections that aren't intuitively obvious. We recently backed our first company–an incredibly exciting robotics synthetic-data company–and going through the full investment pipeline has taught me how to critically evaluate value in the space.

I've also found the DRF community incredibly supportive. Of course, Molly and Madi are incredible to work with over at HQ, and the SF team is insightful across a broad variety of subjects–in each conversation we have about anything, there's always a true expert who's able to provide incredible insight on the space. But I was (positively) surprised by the openness of the alumni community–I've been able to chat with dozens of program alumni who've gone on to found incredible companies, join VC firms, or become world-class operators, and they provide visibility into the future that has really opened my eyes as I imagine what the next decade might look like for me.

Reflection

I still remember when I first got into the M.E.T. Program. It was the second semester of high school, and as I was deciding between a number of top Ivy League universities, I ended up joining this cohort at UC Berkeley instead. Today, I think it may be one of the best decisions I've made. M.E.T. is a cohort of students I consider truly world class–I've made my closest friends through the program, and I am continually pushed to become better thanks to the quality of the people around me. Being in San Francisco is also an unequivocal plus–most of what I do wouldn't really be possible if I couldn't swing down to Palo Alto or SF whenever necessary. And I have come to continually appreciate how spiky and competitive a place Berkeley is. I still remember the initial program promotions that sold it as the best parts of a public and private school experience put together. I didn't really understand the benefit of the former until I came here. Cal is really a very flat bell curve, with incredible tail-end people who are legitimately the best in the world in any subject. I remember once chatting with our program founder–Michael Grimes, the former head of Tech IB at Morgan Stanley–and he said that no matter how esoteric the subject, chances are there's someone from Berkeley who ranks in the top ten worldwide. It's been true so far in my experience–the people I run into are such deep experts at what they do that I'm always left excited to learn something new every day.

M.E.T. has a seminar class for the incoming freshman class, and this year I helped teach it alongside our program director, Saikat. Prof. Chaudhuri is remarkably well-versed in almost every interesting topic in technology management and is a high-profile business consultant. He's also one of the best speakers I've met, and working with him was a great opportunity to learn through osmosis. I've also found helping teach underclassmen to be incredibly peaceful–it's given me the opportunity to focus on distilling insights from my time so far in university, and prompted a lot of reflection on how my worldview has shifted. I've had the chance to formalize instincts I'd previously relied on implicitly, and I think it's an invaluable opportunity for anyone who wants to grow into a great leader.

Commitment

As I look to my professional future, I debated between a multitude of engineering and research roles, both internships and full-time offers, and ultimately decided to join OpenAI. I think it's an environment that will give me the leverage to explore the problems I'm most interested in, and the people I've met there are those I greatly admire and look forward to learning from. More than a role, it's an environment that enforces the kind of rigor and scale I want to grow into.

Steering and the North Star

Looking back, the most important thing that changed this year wasn’t the scope of what I worked on, but how I decided where to place my time and attention. I started the year optimizing for velocity—building quickly, exploring broadly, and letting iteration create clarity. Over time, that instinct evolved. I began to notice that the work that mattered most wasn’t necessarily the work that shipped fastest, but the work that lived inside the right systems and environments, where effort could compound rather than dissipate.

As I look toward the next year, I’m steering myself less by the density of interesting opportunities and more by alignment. Alignment between problems that are genuinely hard, people I deeply respect, and environments that enforce rigor by default. I’ve become more comfortable being selective—saying no to good ideas in order to create space for the few that are worth sustained depth. Constraint, rather than limiting progress, has started to feel like a prerequisite for it.

My north star going forward is to work on problems that have real surface area, in contexts where learning is unavoidable and progress is durable. If the past year was about learning how to move quickly and explore widely, the next one is about committing more deliberately—choosing where to build, where to learn, and where to let time do its compounding work. I’m less interested now in how fast I can move, and more interested in whether the direction I’ve chosen will still matter years from now.


  1. There are some interesting research reports coming out on the efficacy of RAG in long-context environments. It should be interesting, considering how many early-stage companies treat RAG as foundational to their product. ↩︎
