<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Mandelson Fleurival — Blog</title>
  <subtitle>Developer. Designer. Builder.</subtitle>
  <link href="https://mandelson.co/feed.xml" rel="self"/>
  <link href="https://mandelson.co/blog/"/>
  <updated>2026-04-17T00:00:00.000Z</updated>
  <id>https://mandelson.co/blog/</id>
  <author>
    <name>Mandelson Fleurival</name>
    <email>manny@mandelson.co</email>
  </author>
  
  <entry>
    <title>Why I default to local models for bulk work</title>
    <link href="https://mandelson.co/blog/default-to-local-models/"/>
    <updated>2026-04-17T00:00:00.000Z</updated>
    <id>https://mandelson.co/blog/default-to-local-models/</id>
    <summary>Classify, summarize, extract. That&#39;s 80% of what agents actually do, and all of it runs fine on a laptop. Here&#39;s how I decide what goes local and what stays in the cloud.</summary>
    <category term="ai"/><category term="building"/><category term="thoughts"/>
  </entry>
  
  <entry>
    <title>The bottom-up edit rule</title>
    <link href="https://mandelson.co/blog/the-bottom-up-edit-rule/"/>
    <updated>2026-04-17T00:00:00.000Z</updated>
    <id>https://mandelson.co/blog/the-bottom-up-edit-rule/</id>
    <summary>One edit always worked. Two edits usually worked. Five edits against a real file is where the flow fell apart quietly. The fix is a one-line rule in the system prompt.</summary>
    <category term="ai"/><category term="building"/><category term="process"/>
  </entry>
  
  <entry>
    <title>The 12,000-token message I didn&#39;t know I was sending</title>
    <link href="https://mandelson.co/blog/the-12000-token-message/"/>
    <updated>2026-04-16T00:00:00.000Z</updated>
    <id>https://mandelson.co/blog/the-12000-token-message/</id>
    <summary>Jarvis&#39;s context was ballooning every time the reviewer cycle fired. The message looked tiny. Three-line critique, a few rules. I was actually shipping about 12,000 tokens per fire and had no idea.</summary>
    <category term="ai"/><category term="building"/><category term="process"/>
  </entry>
  
  <entry>
    <title>How I&#39;d set up LM Studio today</title>
    <link href="https://mandelson.co/blog/how-id-set-up-lm-studio-today/"/>
    <updated>2026-04-10T00:00:00.000Z</updated>
    <id>https://mandelson.co/blog/how-id-set-up-lm-studio-today/</id>
    <summary>Local just crossed a line for me. The last llama.cpp update bumped my token-generation speed by about 33% on identical hardware, and the gain holds across long agent sessions instead of collapsing after a few thousand tokens. This is the setup I&#39;d copy today.</summary>
    <category term="ai"/><category term="building"/><category term="notes"/>
  </entry>
  
</feed>
