

Working notes

Thirty days of agentic development

Async Digital Ltd · Cardiff, UK · Apple platforms

Abstract

This note reports throughput observations from one single-developer studio across a 30-day window, 24 March to 23 April 2026, with an autonomous coding assistant in the loop. Activity touched 22 repositories. 426 commits landed across 100 merged pull requests, for a net of roughly 36,000 lines of code. Two-thirds of the insertion volume came from one macOS app in the run-up to an App Store release; the rest spans compliance documentation, marketing-site work, support packages, and tooling. The figures are observational, not prescriptive. They mark one window, not a steady state, and they measure motion, not value.

These figures describe one studio across one 30-day window. They are observation, not benchmark. The point is to record what happened.


§1·Numbers

The headline figures

Activity across the window spanned 22 repositories. Within those, 426 commits landed across 100 merged pull requests. The git history shows 55,900 insertions and 19,700 deletions, for a net of roughly 36,000 lines of code.

A separate session-level count, drawn from the assistant’s own activity log, places gross edits higher: 81,000 insertions and 7,700 deletions, across 2,120 files. The session count is wider because it includes work that didn’t survive into a commit: drafts, throwaway experiments, and rolled-back attempts. The committed count is what shipped.

Figure 1 Session edits versus committed lines, 24 March to 23 April 2026. Four bars: session insertions 81,000; session deletions 7,700; committed insertions 55,900; committed deletions 19,700. Session counts include drafts, throwaway experiments, and rollbacks; committed counts are what shipped.

On the operator side, the assistant received 4,740 messages across the window, an average of 153 a day. The Bash tool was invoked 11,086 times. The session-time clock recorded 186 hours.
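The headline figures above can be cross-checked with simple arithmetic. A minimal sketch, assuming the window runs 24 March to 23 April inclusive (31 calendar days, which is what makes the per-day average come out at 153); all input figures are taken from the text:

```python
# Sanity-check the headline arithmetic from the notes.
# All figures are as reported above; the 31-day count is an assumption
# (24 March to 23 April inclusive).
committed_insertions = 55_900
committed_deletions = 19_700

# Net lines of code: reported as "roughly 36,000".
net_lines = committed_insertions - committed_deletions
print(net_lines)  # 36200

# Daily message average: reported as 153 a day.
messages = 4_740
days = 31
print(round(messages / days))  # 153
```

Note that dividing by 30 would give 158 a day; the reported 153 matches the inclusive 31-day span.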

§2·Shipped

What landed

Within the 100 merged pull requests, most of the shipped value concentrated in Audient, the studio’s macOS app: new features added, surplus features removed, sandbox hardening, and a series of bug fixes against linker and dependency issues that surfaced as the app moved towards release. The marketing site shipped its launch-state copy and SEO foundation. The internal documentation and compliance workstream shipped its initial draft set.

A pull request is one shipped decision, recorded in git history. 100 in 30 days reflects the rate at which decisions could be drafted, reviewed, and committed, not the rate at which complex problems could be solved. Some PRs, a typo fix for instance, landed within minutes; others spent hours in review or went through several review rounds.

§3·Surface

Where the output landed

Output was not evenly distributed. Around two-thirds of the insertion volume came from Audient, the studio’s macOS app, which was in the late run-up to an App Store release. The remaining third spread across the studio’s other surfaces: a marketing site, a compliance corpus drafted in English and Welsh, support packages and tooling for the app, and a follow-on workflow project.

Figure 2 Insertions by work surface across the window: Audient (app) 38,446 lines (69 per cent); documentation and marketing 7,778 (14 per cent); app support packages 6,000 (11 per cent); workflow projects 1,148 (2 per cent); other 2,552 (4 per cent). Audient, the studio’s macOS app, accounted for around 69 per cent of insertion volume; the remainder spread across documentation, marketing, support packages, tooling, and a workflow project.

The dominant work surfaces were Markdown and Swift. The assistant’s tool log shows Markdown edits dwarfing every other file type, reflecting how much of the window was spent on compliance documentation, brand and marketing copy, knowledge-vault maintenance, and project planning. Swift work concentrated in Audient and its supporting packages.

§4·Caveats

What the numbers don’t capture

Lines of code are a measure of motion, not of value. A clean refactor that deletes 1,000 lines reduces the count without making the software worse. A careless commit that adds 1,000 lines makes the count larger without making the software better. The figures above measure how much text moved through the working tree, not whether the moves were good ones.

A single 30-day window is not a steady state. The Audient app was in a launch sprint; future windows will not hold that concentration. The compliance corpus was drafted from scratch in this period; the next 30 days will mostly be spent maintaining what was written rather than expanding it. Throughput in any one month records what happened, not what will keep happening.

These figures are descriptive, not prescriptive. Whether the same pattern would hold for a different developer, a different surface area, or a different month is outside the scope of this note.