LLM-Assisted Coding: Elegant Architecture vs. Quick Development (Part 1)

April 15, 2025

Testing if LLMs, augmented with expert knowledge, can maintain elegant architecture without slowing development. A devlog on building a custom RAG tool for architectural guidance in Cursor.

By Dan + GPT-4.5 + Claude 3.7 Sonnet + Cursor (Agent)

Introduction

I recently built an app using LLMs as coding assistants in Cursor, testing whether they could maintain elegant architecture without slowing development. The experiment centered on a custom RAG (Retrieval-Augmented Generation) tool I built, which injected expert architectural advice into the Cursor coding agent via an MCP (Model Context Protocol) server.

My goal was clear: see whether an AI assistant, augmented with domain-specific knowledge, could apply the disciplined architectural patterns from Cosmic Python while still accelerating prototype development.


Setting the Stage

The foundation was a specialized knowledge-injection system I created: an MCP server exposing a RAG tool that the coding agent could call whenever it needed expert architectural guidance.

This unique approach allowed the coding agent to leverage architectural guidance dynamically, at its own discretion, accessing expert advice precisely when needed.
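
For readers unfamiliar with MCP, here is a minimal sketch of what such a tool server can look like, using the official Python MCP SDK's FastMCP helper. It is illustrative only: the knowledge-base lookup is a toy placeholder, not the real implementation.

    # Illustrative sketch, not the actual server from this project.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("architecture-advisor")

    def retrieve_guidance(question: str) -> str:
        # Placeholder: the real tool embeds the question and searches a
        # vector store of Cosmic Python notes (see the retrieval sketch later).
        return "…relevant passages from the knowledge base…"

    @mcp.tool()
    def architecture_advice(question: str) -> str:
        """Return expert architectural guidance relevant to the question.

        The coding agent calls this at its own discretion, e.g. when it
        wants to know how to structure a repository or service layer.
        """
        return retrieve_guidance(question)

    if __name__ == "__main__":
        mcp.run()  # serve over stdio so Cursor can connect to it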

A core goal was comparing this method to the standard practice of instructing an LLM to simply:

"Act as a senior developer" or "Be a software architect."

My hypothesis was simple:

Coding enhanced with on-demand, domain-specific knowledge could implement sophisticated architectural patterns without compromising development speed.


The Reality of Scale

Initial progress was promising.

The LLM excelled at creating clean boilerplate, but the volume quickly became overwhelming. Each iteration required multiple edits across the architectural layers.

Even modest changes became cumbersome. Although I occasionally intervened with manual edits, my commitment to guiding the agent rather than editing code directly made each change increasingly time-consuming.

By day two, I was entrenched in boilerplate refactorings.
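
To make that ripple effect concrete, here is an illustrative sketch, with hypothetical names rather than the project's actual code, of how a single new field on a domain concept fans out across Cosmic Python-style layers:

    # Illustrative only: a hypothetical "Order" aggregate, showing how adding
    # one field (priority) touches every layer. In practice each layer lives
    # in its own module.
    from dataclasses import dataclass

    from sqlalchemy import Column, Integer, MetaData, String, Table
    from sqlalchemy.orm import registry

    # 1. Domain model: plain Python, free of ORM concerns.
    @dataclass
    class Order:
        reference: str
        priority: int  # new field -> edit #1

    # 2. ORM mapping: the table definition has to mirror the model.
    metadata = MetaData()
    orders = Table(
        "orders", metadata,
        Column("id", Integer, primary_key=True, autoincrement=True),
        Column("reference", String(64)),
        Column("priority", Integer),  # new column -> edit #2
    )
    registry().map_imperatively(Order, orders)  # classical mapping keeps the model clean

    # 3. Repository: persistence abstraction over the session.
    class OrderRepository:
        def __init__(self, session):
            self.session = session

        def add(self, order: Order) -> None:
            self.session.add(order)

    # 4. Service layer: the entrypoint the API calls into.
    def create_order(reference: str, priority: int, uow) -> None:  # new parameter -> edit #3
        with uow:  # uow = a unit-of-work object exposing .orders and .commit()
            uow.orders.add(Order(reference, priority))
            uow.commit()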


The Greybeard Instinct

Despite the coding agent's capabilities, the approach felt fundamentally misaligned with the project's scale.

I spent more time orchestrating LLM-driven refactoring than actual feature development. Even with AI assistance, architectural complexity introduces overhead that must justify itself.

For this project's scope, a simpler architecture would have been more appropriate. The layering, while valuable as mental scaffolding for human developers on larger systems, introduced practical friction here.

I pivoted mid-project:

Sometimes, simplifying beats perfect adherence to architectural patterns.
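
For contrast, and purely as an illustration of the trade-off (again with hypothetical names, not code from the project), the same operation in a deliberately simple prototype style collapses into a single function that talks to the database directly:

    # Illustrative only: the "good enough for a prototype" version, with the
    # layers folded into one direct write.
    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///orders.db")

    def create_order(reference: str, priority: int) -> None:
        # One place to edit instead of several.
        with engine.begin() as conn:
            conn.execute(
                text("INSERT INTO orders (reference, priority) VALUES (:ref, :prio)"),
                {"ref": reference, "prio": priority},
            )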


Key Challenges

Several specific challenges emerged:

  1. Inconsistent Imports
  2. Canonical IDs
  3. Workflow Disruptions
  4. Documentation Drift
  5. Async Issues

Leveraging Expert Knowledge

Central to my approach was a custom-built RAG tool.

Workflow example:

  1. Begin with model pseudocode.
  2. Ask the LLM to create SQLAlchemy models and a development plan.
  3. Engage "expert review" through my RAG tool, aligning code with Cosmic Python best practices.
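
Under the hood, the "expert review" in step 3 is just retrieval over an embedded knowledge base. Here is a minimal sketch of that core loop; the embed function is a toy stand-in for a real embedding model, seeded from the text so the sketch runs on its own.

    # Illustrative retrieval core: cosine similarity over pre-embedded chunks.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Toy stand-in for a real embedding model (e.g. a sentence-transformers
        # or hosted embedding API call); seeded so the sketch runs without one.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(384)

    # Knowledge base built offline from the Cosmic Python notes.
    CHUNKS = [
        "Keep the domain model free of ORM imports; map it imperatively.",
        "Repositories expose add() and get() and hide the session entirely.",
    ]
    CHUNK_VECS = np.stack([embed(c) for c in CHUNKS])

    def retrieve_guidance(question: str, k: int = 2) -> str:
        # Return the k chunks most similar to the agent's question.
        q = embed(question)
        sims = CHUNK_VECS @ q / (np.linalg.norm(CHUNK_VECS, axis=1) * np.linalg.norm(q))
        top = np.argsort(sims)[::-1][:k]
        return "\n\n".join(CHUNKS[i] for i in top)

    print(retrieve_guidance("How should I structure the repository layer?"))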

This integration delivered significant improvements.

Ultimately, the RAG-powered review process succeeded in producing the code structure and style I wanted, helping the agent pull best practices from my knowledge base into its working context.


Conclusion

LLMs can effectively implement elegant architecture patterns, but they don't eliminate the inherent tradeoffs between architectural purity and development velocity.

AI makes complexity costs visible by methodically working through all implications of architectural decisions.

I eventually chose to pursue an alternative approach.

Even with AI assistance, successful architecture balances elegance and pragmatism tailored to specific project needs.

In Part 2, I'll detail the alternative approach I adopted, highlighting when LLM-assisted development excels versus when traditional methodologies prove superior.