We're building Gitlip: a platform that integrates Git repositories, real-time collaboration, AI-assisted programming, and one-click deployments.
This post is going to expand upon some of the architectural decisions behind Gitlip, highlighting how Cloudflare’s Durable Objects are a building block of any real-time app built on Cloudflare, including our own. Along the way, we’ll share why Durable Objects are an amazing example of good system design, and why we’re surprised (and delighted) by what they’ve enabled, even at this early stage.
Why Good System Design Matters
Before getting into the technical parts, it's important to focus on a key idea that shaped how we built Gitlip: good system design. In software, this means designing systems that are scalable, easy to maintain, and able to adapt to new challenges.
From our point of view, good system design is:
- Hard to achieve: making something simple can require a lot of deep thinking
- Hard to define: sometimes you only recognize it when you see it in production
- High leverage: the right abstractions create features you didn't even design
Good system design can be thought of as a 'moat', alongside factors like shipping velocity, and it can yield secondary benefits that were never originally intended.
We’ve started to notice a pattern: when the core architecture is thoughtfully designed, unexpected features start to emerge. These weren’t planned, but we believe they come from building on strong foundations.
Gitlip: Architectural Overview
Key subsystems include:
1. A POSIX-compatible filesystem implemented in Durable Objects
2. Git in Durable Objects (with the help of libgit2 + WebAssembly)
3. A serverless Git server (Ed Thomson, maintainer of libgit2, mentioned that Gitlip might be one of only a handful of Git servers built from scratch, which encouraged us)
4. A powerful and repeatable WebAssembly build pipeline (with Nix)
5. A multiplayer filesystem built with Durable Objects, Yjs, and WebSockets
6. Traditional service orchestration (back-end, front-end)
Durable Objects were a direct inspiration for items 1, 2, 3, and 5.
Most of these were originally unintended use cases for Durable Objects. We think this is illustrative of good system design.
Against "Boring Tech"
A popular heuristic suggests startups should "pick boring tech" to avoid premature complexity.
We disagree!
We think that good system design anticipates scale through structural clarity, not post-hoc optimization. Durable Objects and Workers exemplify this approach:
- Intuitive developer ergonomics (e.g. Wrangler CLI ❤️)
- Decoupled compute and storage (reminiscent of functional programming languages)
- Compositional expressiveness (via RPC and service bindings)
- High shipping velocity
All of this makes Cloudflare Workers one of the not-so-secret sauces in our dev kitchen while we cook.
The Serverless Git Engine
The main component of our system at Gitlip is our WebAssembly-based, libgit2-powered serverless Git server.
Things to note:
- It runs in Durable Objects (so we don't manage servers ourselves!)
- It scales (we can easily spin up as many instances as we like)
- It's fast (it handles the vast majority of codebases using only one thread and 128 MB of memory!)
- It's small (the binary fits in under 1 MB, which keeps startup latency low)
- It's portable to other JS environments (such as browsers!)
Durable Objects are a storage product that was surely intended for application-level storage. It turns out, however, that they also work remarkably well as a block device backing a filesystem, and we see this as a testament to their composability.
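As a hedged illustration of that block-device idea, here is a minimal sketch under our own assumptions (4 KiB blocks, string keys; the `BlockDevice` name and key scheme are ours, not Gitlip's). The storage object only needs async `get`/`put`, which is the shape of Durable Object storage; the in-memory stand-in below has the same shape, so the sketch runs anywhere.

```javascript
// Sketch: a fixed-size block layer over any async key-value storage.
// Block size and key layout are illustrative assumptions.
const BLOCK_SIZE = 4096;

class BlockDevice {
  constructor(storage) {
    this.storage = storage; // e.g. ctx.storage inside a Durable Object
  }
  async readBlock(index) {
    // Unwritten blocks read back as zeroes, like a sparse disk image.
    return (await this.storage.get(`block:${index}`)) ?? new Uint8Array(BLOCK_SIZE);
  }
  async writeBlock(index, bytes) {
    if (bytes.length !== BLOCK_SIZE) throw new Error("blocks are fixed-size");
    await this.storage.put(`block:${index}`, bytes);
  }
}

// Stand-in for Durable Object storage when running outside the platform:
const memoryStorage = {
  map: new Map(),
  async get(key) { return this.map.get(key); },
  async put(key, value) { this.map.set(key, value); },
};
```

A filesystem layer above this only needs `readBlock`/`writeBlock`, which is what makes the storage backend swappable.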
Streaming as Infrastructure
Cloudflare Workers were originally built entirely on the Fetch API. Today, there's also the RPC interface, which is even more powerful. But thinking about the Fetch API for a moment: one extremely underutilized feature is streams!
When used right, streams can turn your whole Workers infrastructure into a kind of data-weaving pipeline.
For many use cases, streams in Workers let you get the job done without persisting data anywhere (not even in Durable Objects, KV, or R2), just by letting it flow through Workers and transforming it as needed.
In our system, streams aren’t just a data transport mechanism, they’re the backbone of how information moves, transforms, and flows between services. By passing streams directly across service boundaries, we avoid the overhead of buffering large payloads in memory. Each Worker stays lean and responsive, handing off data in motion rather than waiting for it to pile up. This architectural choice leads to striking performance gains, especially at scale.
But we don't stop at just moving data. We multiplex and demultiplex streams to pack multiple data channels into one, then tease them apart downstream. We concatenate streams to build something bigger, or split them into smaller, more manageable parts. And perhaps most powerfully, we transform file formats on the fly. A tar.gz archive, for example, can be unpacked and reshaped into a multipart form-data payload, all while streaming, without ever touching disk or waiting for the full file to arrive. This isn't just efficient. It's opened up a new way for us to think about how systems move and transform data.
In our infra, these techniques unlock a range of powerful use cases that squeeze every drop of performance from the Workers runtime. Here are a few examples:
- Initializing an editor: a streaming operation that pipes a tar.gz archive from our Git server straight into one of our editors
- Committing to a repo: a streaming operation that pipes a tar.gz archive from our collaborative editor straight into our Git server, which consumes it and creates a commit from it
- Copying from anywhere on the Web: a streaming operation that decomposes a streaming tar.gz file and extracts only the relevant content
- Diffing repo-to-repo or repo-to-editor (or even editor-to-editor): again a streaming operation, this time involving two archives and multiplexing/demultiplexing
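A minimal sketch of this streaming style, using only the standard Web Streams API available in Workers (and modern Node); the helper names are ours, not Gitlip's. A gzipped payload is decompressed and transformed entirely in flight, never buffered whole:

```javascript
// A TransformStream that uppercases each byte chunk as it passes.
// (Per-chunk decoding is fine for this ASCII demo; multibyte text
// would need a streaming TextDecoder.)
function upperCaseStream() {
  return new TransformStream({
    transform(chunk, controller) {
      const text = new TextDecoder().decode(chunk);
      controller.enqueue(new TextEncoder().encode(text.toUpperCase()));
    },
  });
}

// Drain a byte stream into a string, for demonstration only;
// in a real pipeline you'd pipe this onward instead.
async function collect(stream) {
  let out = "";
  for await (const chunk of stream) out += new TextDecoder().decode(chunk);
  return out;
}

async function demo(text) {
  // Simulate an incoming compressed stream, then decompress and
  // transform it without ever holding the full payload in memory.
  const gzipped = new Blob([text]).stream().pipeThrough(new CompressionStream("gzip"));
  return collect(
    gzipped.pipeThrough(new DecompressionStream("gzip")).pipeThrough(upperCaseStream())
  );
}

demo("streams weave data").then(console.log);
```

The same `pipeThrough` chaining is how a format transformation (say, archive to multipart) can cross service boundaries as data in motion.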
However, at least some of the above features were unintended, and we believe this is an indicator of good system design. There's been a chain of thoughtful choices, from the Web standards committees, to Cloudflare's architecture, to the way we're approaching building Gitlip. Each layer allowed for further possibilities later on.
One of the most overlooked strengths of Cloudflare Workers, including Durable Objects, is that they are a very good environment for running WebAssembly. It’s not only about performance, though Wasm is impressively fast. What really makes it special is the ability to tap into decades of native libraries that were never designed with the web in mind.
We take full advantage of that. Our WebAssembly binaries bundle native libraries that bring serious capabilities to our system. These aren't stripped-down JavaScript ports or rewrites. They’re the real thing, compiled and ready to run in a lightweight, serverless environment. For example, we include libgit2, zlib, libmagic, and libarchive.
Persistent Wasm: A Small Change with a Big Payoff
Here’s a technique we found ourselves needing while running WebAssembly inside Durable Objects. At first, we instantiated a new WebAssembly program on every HTTP call. This worked fine, until we introduced WebSockets. Suddenly we needed to maintain state across multiple messages, and per-request instantiation just wasn’t going to cut it.
So we reworked things. Now, when a Durable Object boots up, it instantiates the WebAssembly program once and reuses it for both HTTP requests and WebSocket messages. This small shift gave us two big wins: we could now use WebAssembly to process real-time WebSocket traffic, and we saw an immediate performance improvement by avoiding repeated setup and teardown.
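A sketch of the instantiate-once pattern, with our own names. The tiny hand-assembled module exporting `add(a, b)` stands in for something like a libgit2 build, so the example runs anywhere WebAssembly is available; in a real Durable Object, the instantiation would happen in the constructor and be shared by `fetch()` and `webSocketMessage()`:

```javascript
// A minimal, valid Wasm binary exporting add(a, b) -> a + b.
const WASM_ADD = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0/1, i32.add
]);

class GitEngine {
  constructor() {
    // Instantiate once per object lifetime, not once per request.
    this.wasm = WebAssembly.instantiate(WASM_ADD).then((r) => r.instance);
  }
  async handleMessage(a, b) {
    // Every HTTP request and WebSocket message reuses the same instance,
    // so state kept in Wasm linear memory survives across calls.
    const instance = await this.wasm;
    return instance.exports.add(a, b);
  }
}
```

Because the promise is created eagerly and awaited per call, concurrent messages all share the single instantiation rather than racing to create their own.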
That said, holding on to a long-lived Wasm instance means being extra careful with memory: every invocation needs to clean up after itself. Thankfully, C is a pragmatic language in this respect, and Valgrind helps a lot, catching leaks that careful review misses before they ever hit production.
Use the WebSockets Hibernation API, Get Lower Bills
If you're using WebSockets in Durable Objects, which is straightforward to set up, there's almost no reason not to use the WebSockets Hibernation API. It's a Cloudflare-specific extension that behaves just like the standard WebSocket API, with one useful difference: it puts your Durable Object to sleep while the clients connected to it are idle.
This makes it possible to handle long-lived connections without paying for idle compute. The API is simple, and in our experience, switching to it came with zero downsides. Your objects wake up as soon as something happens again. It's the kind of small, thoughtful feature that quietly makes a big difference when you care about cost and scale.
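A hedged sketch of what the switch looks like (the class and method bodies are ours; `WebSocketPair`, the 101 `Response`, and `acceptWebSocket` are Workers platform features, so this only runs as-is on the platform). The two moves that matter: hand the server socket to the runtime instead of calling `accept()` yourself, and implement `webSocketMessage()` instead of attaching event listeners:

```javascript
class CollabRoom {
  constructor(state) {
    this.state = state; // DurableObjectState on the platform
  }
  fetch(request) {
    const pair = new WebSocketPair();
    const [client, server] = [pair[0], pair[1]];
    // The runtime now owns the socket, so the object can be evicted
    // from memory while the connection is idle.
    this.state.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }
  webSocketMessage(ws, message) {
    // Called on each message, possibly on a freshly woken object,
    // so no in-memory state from before hibernation can be assumed.
    ws.send(`echo: ${message}`);
  }
}
```

The handler-method style (rather than `addEventListener`) is exactly what lets the runtime wake the object on demand: it knows which method to invoke without your code being resident.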
Cloudflare Upgrades, and Suddenly Our Infra Grew a Brain
While we were heads-down building product features, Cloudflare quietly upgraded Durable Objects under our feet. We didn’t expect it, and honestly, we were a little shocked by what it enabled.
Our repos and editors suddenly became agents. Not metaphorically, but technically: they’re now both MCP clients and MCP servers. We’re still unpacking the implications of this, but it looks like it gives us a much clearer path toward building agentic coding features. That’s something we’ve noticed cropping up in a lot of competing products, and this shift has pushed it much higher on our roadmap.
Multiplayer Filesystem: More Than We Bargained For
The most sophisticated part of our system is what we call our Multiplayer Filesystem. It’s implemented entirely in Durable Objects, and it’s split into two conceptual parts:
- DOFS: a POSIX-style interface
- YFS: a Yjs-based collaborative interface
At the core is a state machine that stays in sync across the two sides. We built this to enable collaborative editing of Git repositories. But once it was up and running, we realized it could do more.
One of the most exciting uses for the future is potentially broadcasting. Someone on Gitlip could live-code into a shared repo, while an audience follows along in real time, exploring the same codebase. We're imagining something like Twitch, but designed specifically for programming. We didn’t plan for that use case, but it emerged naturally from how the system has been structured. Another example of something we can get not by planning for it directly, but by investing in clean and flexible architecture.
Composition is Underrated
Just a quick note here: composition is one of the most powerful features of the Workers platform, and it doesn’t get nearly enough attention. Service bindings and the Workers RPC model make it simple to stitch together small, focused components into a system that’s far greater than the sum of its parts. You just wire things together in a new way, and it often just works.
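A hedged sketch of that compositional style. On the platform, `Archiver` would extend `WorkerEntrypoint` (from `cloudflare:workers`) and the caller would reach it through a service binding declared in Wrangler config; here `env` is just any object of the same shape, so the wiring below runs anywhere. All names are ours, not Gitlip's:

```javascript
class Archiver {
  // With Workers RPC, plain methods like this are callable directly
  // across the service boundary: no fetch handlers, no hand-rolled
  // serialization.
  async describe(text) {
    return { bytes: new TextEncoder().encode(text).length };
  }
}

// A consumer Worker treats the binding like a local object:
async function handle(env, payload) {
  const meta = await env.ARCHIVER.describe(payload);
  return `payload is ${meta.bytes} bytes`;
}
```

Because the call site is just `env.ARCHIVER.describe(...)`, swapping the implementation (or splitting it into its own Worker) is a config change, not a code change.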
A Few Closing Thoughts
If you’re looking to build a next-generation real-time app, Durable Objects are a great place to start.
We’ve seen some developers describe them as not easy to use, but that hasn’t really been our experience. Like any new abstraction, they take a bit of time to click. That said, once they do, things start to fall into place.
It helps to approach them conceptually from a few different angles. Here are some perspectives we’ve found useful:
- A Cloudflare Worker plus transactional storage
- A primitive for building consistent, stateful systems
- A mini-computer in the cloud
- A surprisingly capable agent
- Both an MCP server and client
The new RPC interface in Workers was a huge quality-of-life boost. If you're starting something today, we’d recommend using it from the beginning.
We’ve seen a pattern pop up where people go through something like “Durable Object enlightenment.” First they try them. Then they understand them. Then they start seeing use cases for Durable Objects everywhere 🌌
Conclusion
We're just getting started! Stay up to date with our journey of building Gitlip by following @Gitlip_com. We support AI-assisted development in the browser, with Git repos and collaborative editors!