Composing Agentically (Someday)
PRs 14-20: Real-time operations and my first goosebumps using Fugue
After a dozen or so pull requests, Fugue had some basic audio synthesis capabilities and a simple method for describing generative compositions. Users could load up one of the examples using the Rust CLI and hear that things basically work. It was functional, if more than a little boring to work with. The only way to make some music was to edit a JSON file and listen to the results.
Last week, I focused on the sorely needed ability to change content in real time, and I spent this week playing with the result. Notably, this was the first set of features for Fugue that relied on an engineering workflow I wrote about recently in another post. If I’m being honest, that approach already feels a little quaint — relentless evolution is the default mode of all things today — but it let me power through several major features in an afternoon.
The Real-time Operations Project
Initially, I wrote a single item in my task management tool. It was just a sketch: an idea captured for the roadmap without a firm timeline. As I fleshed it out, adding bullet point after bullet point, the scope clearly expanded beyond a simple task. Bullet points are a clear signal to break things up.
When defining a project, though, I try to avoid exhaustive detail, saving specifics for individual tasks. Human developers need wiggle room. Too much wiggle room can be dangerous for agents, but they still benefit from a loose, if short, leash. The original Real-time operations issue became a project with a basic description:
The initial iterations of the library have focused on creating basic modules and supporting a declarative document structure. Fugue must also support automated and user-initiated actions at runtime on instantiated Inventions. This includes:
Adding and removing connections between modules.
Adding and removing modules from the running invention.
Changing control values (and applying them).
The API must be ergonomic and easy to use. In addition, care must be taken to validate types safely; ultimately, it is up to individual modules to do this. Use the existing controls API as a model.
To support these features, we should also create two tools:
A TUI application or REPL for creating, loading, and modifying inventions on the fly
An MCP server for creating, loading, and modifying inventions on the fly from an LLM agent.
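To make the shape of that runtime mutation API concrete, here is a minimal sketch of what adding and removing modules and connections at runtime could look like. Every name here (ModuleId, SignalGraph, and the methods on it) is hypothetical, invented for illustration, and is not Fugue's actual API:

```rust
// Hypothetical sketch of a runtime-mutable signal graph.
// All types and method names are illustrative, not Fugue's real API.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ModuleId(u32);

#[derive(Default)]
struct SignalGraph {
    modules: HashMap<ModuleId, String>,     // id -> module kind
    connections: Vec<(ModuleId, ModuleId)>, // (source, target)
    next_id: u32,
}

impl SignalGraph {
    /// Add a module to the running graph and return its handle.
    fn add_module(&mut self, kind: &str) -> ModuleId {
        let id = ModuleId(self.next_id);
        self.next_id += 1;
        self.modules.insert(id, kind.to_string());
        id
    }

    /// Remove a module; any connections touching it go with it.
    fn remove_module(&mut self, id: ModuleId) {
        self.modules.remove(&id);
        self.connections.retain(|&(a, b)| a != id && b != id);
    }

    /// Connect two existing modules, rejecting unknown ids.
    fn connect(&mut self, from: ModuleId, to: ModuleId) -> Result<(), String> {
        if !self.modules.contains_key(&from) || !self.modules.contains_key(&to) {
            return Err("unknown module".to_string());
        }
        self.connections.push((from, to));
        Ok(())
    }
}
```

In a real system the graph would sit behind a thread-safe wrapper so the audio thread and the mutation API don't race, which is roughly what the first task below ("thread-safe mutation API") calls for.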
From there, Claude and I broke it down into individual tasks:
Please take a look at the "Real-time operations" project on Linear in the ilusiv workspace. Let's work on creating issues for the project together.
After a short back and forth, we settled on seven tasks:
Expose SignalGraph for runtime mutation — the foundational plumbing (make graph accessible, add thread-safe mutation API)
Runtime module lifecycle — add/remove modules on a running graph
Runtime connection management — add/remove connections on a running graph
Runtime control value API — unify and extend the existing controls system for the runtime context
Runtime graph validation — validate graph integrity after mutations (acyclic check, port compatibility)
TUI/REPL for live invention manipulation
MCP server for LLM-driven invention manipulation
Part of the discussion was a brief exploration of whether the new team orchestration feature was useful here.
The dependency chain strongly favors the standard subagent approach for issues 1-3 and 5. Here's why:
- FUG-47 (expose graph), FUG-48 (module lifecycle), FUG-49 (connections), and FUG-51 (validation) all modify the same tightly-coupled files: runtime.rs, graph.rs, builder.rs. They form a strict dependency chain. Parallel agents would step on each other constantly.
- The codebase is ~6,700 LOC across 35 files — small enough that a single agent can hold full context without difficulty.
FUG-50 (control API) is the one issue that could run in parallel, since it touches different files (traits.rs, handles.rs) and has no blocking dependencies. But it's straightforward enough that the overhead of coordinating a separate agent isn't worth it.
FUG-52 (TUI) and FUG-53 (MCP server) are the best candidates for an agent team: they're largely new code, independent of each other, and only depend on the APIs built by 1-5. But they can't start until the core work is done.
My recommendation: standard subagent approach, working through 1 → 2+3 → 4+5 → then potentially teaming 6 and 7 if you want to tackle both.
The implementation phase was fast and focused. The first five pull requests were small, with unit tests making up the bulk of the changes. The fifth task, centered on graph validation, raised a red flag: it called for checking that the graph is acyclic — that is, that no module in the signal graph directly or indirectly connects back to itself. In an earlier development phase, I had punted on allowing cycles. They are a natural, even fundamental, part of an audio synthesis system, but they bring the complexity of tracking which modules have already processed samples. Early on, that seemed like a burden, but it was a problem that needed solving eventually. Before I let the agent get started, we clarified the need to support cycles and incorporated that work into the plan.
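The acyclic check the validation task originally described is a standard depth-first search with a recursion stack. A minimal sketch of that check, independent of Fugue's real implementation (which ultimately had to permit cycles rather than reject them):

```rust
// Sketch of a DFS cycle check over a directed module graph.
// Illustrative only; not Fugue's actual validation code.
use std::collections::{HashMap, HashSet};

/// Returns true if the graph (module id -> downstream ids) contains a cycle.
fn has_cycle(edges: &HashMap<u32, Vec<u32>>) -> bool {
    fn visit(
        node: u32,
        edges: &HashMap<u32, Vec<u32>>,
        done: &mut HashSet<u32>,
        stack: &mut HashSet<u32>,
    ) -> bool {
        if stack.contains(&node) {
            return true; // back-edge: this node is on the current path
        }
        if done.contains(&node) {
            return false; // already explored, known cycle-free
        }
        stack.insert(node);
        for &next in edges.get(&node).map(|v| v.as_slice()).unwrap_or(&[]) {
            if visit(next, edges, done, stack) {
                return true;
            }
        }
        stack.remove(&node);
        done.insert(node);
        false
    }

    let mut done = HashSet::new();
    let mut stack = HashSet::new();
    edges.keys().any(|&n| visit(n, edges, &mut done, &mut stack))
}
```

A feedback edge, say a delay routed back into a mixer, trips the back-edge check immediately. Supporting such cycles instead of rejecting them means the runtime has to track which modules have already produced a sample in the current tick, which is exactly the complexity mentioned above.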
Finally, fun
Last on the list of tasks for the day were two ways of (finally) interacting with Fugue in real time:
First, a terminal application, or REPL, to let users manipulate Inventions directly.
Then, an MCP server to let an agent mediate that control.
In my head, I assumed the REPL would be the more powerful of the two. With a simple set of commands for defining the signal graph and control connections, I imagined moving as quickly as I could manually move patch cables in a physical synthesizer. Using an LLM to describe connections sounded tedious.
I had it backwards.
While the agent included a tutorial in the docs for quickly assembling a working Invention, building anything more than a basic patch in the REPL was painfully slow. Yes, moving patch cables around is itself tedious, but the physicality of the act can make it meditative. You start to feel like a mad scientist managing the complexity of colorful cables spiraling in all directions.
I’m not sure why, but the agent decided not to include an example prompt to use with the MCP server. I wondered if it didn’t know what to do with it yet either. I started simple:
Create a new fugue invention
>> What kind of invention would you like to create?
Something ambient and atmospheric
>> I'll build an ambient atmospheric invention with a slow melody, LFO-modulated filter, and shaped envelopes.
It took a long time and occasionally spewed audio artifacts that were clearly bugs in the process, but it spun up an ambient, undulating drone within a minute or two. It felt magical, and it gave me goosebumps. Suddenly, I could feel a mode of composition I had always dreamed about becoming real.
For me, composing music is a practice of exploration giving way to discovery giving way to refinement. It is a cycle of these steps that only ends when I am happy with what I hear or simply run out of time. Each step is at once meaningful and monotonous, my mind constantly battling the crushing monotony as it claws its way to overtake the periodic, but expansive, sense of meaning. The tension between these extremes, I believe, is necessary. But it would be great to dial back the painful part just a little. This tool was doing that for me.
Sadly, I neglected to record that first session composing ambient music with an LLM, but I captured the next one.
[Do skip around the video if you hit play. Not all the sounds are pleasant]
The music isn’t very good and it is full of glitches. I know that. But it’s a start, and it felt really good.
Ultimately, a tool like Fugue is about empowering people to be creative and expressive. While creating good music is a worthy goal, the creating part is more important than the good part. I am not yet ready to announce that I have adopted an agentic workflow for creating music, but it is now top of mind. I see it coming. As always, lots more to do.