I wanted a fitness tracker that worked the way I think. Macros logged in plain English -- just type "grilled chicken, rice, and broccoli" and get numbers back. Garmin data living in my own database, not locked inside an app. A daily summary in Slack instead of another app I'd forget to open within two weeks. No SaaS product does this. So I built one. In a weekend. And I'm not a software engineer.

Personal automation used to require developer skills or expensive subscriptions that got you 80% of the way there. AI coding tools have changed that equation. This is the build log of a complete fitness tracking system -- Cloudflare Worker API, Supabase database, AI-powered macro estimation, Garmin wearable data sync, twice-daily Slack reports -- designed, built, and deployed in two days with Claude Code doing the heavy lifting.

This isn't a tutorial. It's an engineering notebook entry. What worked, what broke, what Claude Code handled on the first pass, and where I had to steer the architecture myself.

Why I Built This Instead of Using MyFitnessPal

Every fitness app assumes you want their interface, their features, their data model. I didn't.

MyFitnessPal makes you scan barcodes and search food databases. I wanted to type "two eggs, toast with butter, coffee with cream" into a text box and get macro estimates back. No barcode scanning. No selecting from a list of 47 slightly different entries for "scrambled eggs." AI makes natural language macro estimation trivial now -- send a meal description to Claude, get a JSON response with calories, protein, fat, and carbs. The whole interaction takes less than a second.

Garmin Connect has my watch data -- Body Battery, HRV, resting heart rate, sleep stages -- but there's no way to query it freely or combine it with other data I care about. The data is mine, generated by my body, recorded by my watch. But it lives in Garmin's ecosystem and stays there unless I go pull it out.

Then there's the interface problem. I live in Slack for work. A daily fitness summary that arrives where I already am -- with emoji indicators showing whether each macro is on track -- beats opening another app. I've downloaded at least five fitness apps in the last three years. I used each one for about ten days. Slack messages, I actually read.

The point isn't that fitness apps are bad. They're fine for most people. The point is that the gap between "I wish this existed" and "I can build this" has collapsed. Personal automation is no longer a developer privilege. If you can describe what you want clearly, you can build it.

Saturday Morning -- The Architecture Session

The first thing I did with Claude Code wasn't writing code. It was having a conversation about architecture.

I described what I wanted: log food in plain English with AI macro estimation, track workouts and weight, store everything in a database I control, surface a dashboard, send Slack summaries on a schedule. Claude Code proposed the architecture.

The stack: a Cloudflare Worker as the API layer -- serverless functions that run globally on Cloudflare's edge network, with a free tier and built-in cron triggers for scheduled tasks. Supabase as the database -- managed Postgres with a JavaScript SDK that works natively in serverless environments, also free tier. The Anthropic API for macro estimation. Slack webhooks for notifications.

What Claude Code decided on its own: the route structure (eight GET and POST endpoints -- three dashboard views plus five loggers), the auth pattern (token-based, with three authentication methods -- path prefix, query parameter, or Bearer header so it works in both browsers and API calls), and the Supabase schema approach (a dedicated fitness schema with four tables: food_logs, daily_summaries, training_blocks, and workout_logs).

What I decided: the data model. What fields to track, what targets to set, what "daily summary" means for my specific goals. Claude Code can scaffold a database schema, but it can't know that I care about Body Battery trends and HRV more than step counts. It can't know I have a shoulder injury that affects which exercises go into my training templates. It can't know my macro cycling strategy -- different calorie targets on training days versus rest days. The human judgment was all about what to build, not how to build it.

This worked because of something I've written about before: the CLAUDE.md advantage. My project context was already loaded -- my tools, my preferences, my infrastructure choices -- so Claude Code didn't suggest React Native or Firebase. It knew I use Cloudflare Workers. It knew I use Supabase. The architecture matched my existing infrastructure from the first suggestion. No back-and-forth about stack choices. If you're curious about this pattern, I wrote about how we set up Claude Code for GTM teams and why project context files matter so much.

The architecture session took about 45 minutes. By the end, I had a plan file -- a structured PRD with 17 phases, success criteria, and verification steps for each. That plan file became the execution contract for the rest of the weekend.

Saturday Afternoon -- Claude Code Writes the Worker

With the architecture settled, I shifted into build mode. Things moved fast.

The entry point (index.ts). Claude Code generated the full Cloudflare Worker entry point -- route matching, auth middleware with three-method token extraction, error handling, health check endpoint. Clean TypeScript. It worked on the first deploy. The router handles eight routes: three dashboard views (today, week, month) and five POST endpoints for logging food, weight, workouts, body fat, and daily notes.
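The three-method token extraction can be sketched roughly like this -- a minimal illustration, not the actual code (the function name and precedence order are assumptions; the real middleware compares the extracted value against the admin token secret):

```typescript
// Extract an auth token from any of the three supported locations:
// a Bearer header (API calls), a ?token= query parameter, or the first
// path segment (so a browser can bookmark /TOKEN/today).
function extractToken(url: URL, headers: Headers): string | null {
  // 1. Authorization: Bearer <token>
  const auth = headers.get("Authorization");
  if (auth?.startsWith("Bearer ")) return auth.slice(7);

  // 2. ?token=<token>
  const qp = url.searchParams.get("token");
  if (qp) return qp;

  // 3. /<token>/today -- first non-empty path segment
  const seg = url.pathname.split("/").filter(Boolean)[0];
  return seg ?? null;
}
```

The real middleware would then compare the result against the stored secret and return a 401 on mismatch.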

Food logging with AI macro estimation. This was the standout feature and the reason I built the whole system. You POST a plain-text meal description -- "two eggs, toast with butter, coffee with cream" -- and the Worker calls the Anthropic API with a structured prompt asking for a JSON macro estimate. Claude returns {"calories": 485, "protein_g": 28, "fat_g": 24, "carbs_g": 38}, the Worker extracts the JSON (handling the case where Claude wraps it in markdown code fences), validates that all four fields are numbers, and stores the row in Supabase. Claude Code wrote the entire Anthropic API integration -- fetch call, JSON extraction regex, validation, error handling -- in one pass. Figuring that out from the API docs alone would have taken me hours.
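The extraction-and-validation step looks roughly like this -- a hedged sketch, with `parseMacros` as an assumed name (the actual Worker code may differ), showing the fence-stripping and the four-field number check described above:

```typescript
interface Macros {
  calories: number;
  protein_g: number;
  fat_g: number;
  carbs_g: number;
}

// Pull the JSON object out of the model's reply, even if it arrives
// wrapped in ```json ... ``` markdown fences, then validate the fields.
function parseMacros(reply: string): Macros {
  const match = reply.match(/\{[\s\S]*\}/); // first "{" through last "}"
  if (!match) throw new Error("no JSON object in model reply");
  const data = JSON.parse(match[0]);
  for (const field of ["calories", "protein_g", "fat_g", "carbs_g"]) {
    if (typeof data[field] !== "number") throw new Error(`invalid ${field}`);
  }
  return data as Macros;
}
```

Only after this validation passes does the row get written to Supabase, so a malformed model reply fails loudly instead of storing garbage.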

The other logging endpoints. Four more POST handlers, each with input validation, Supabase writes, and structured responses. The weight logger upserts into the daily summary to avoid duplicate rows. The workout logger finds the active training block and associates the log entry with it. Body fat and notes endpoints followed the same pattern. Claude Code generated all of them from a description of what each should accept and store.

What needed manual work:

The wrangler.toml configuration file. Cron schedules (0 12 * * * and 0 1 * * * for 7 AM and 8 PM Eastern -- UTC offset math that I verified myself), and secret management -- running wrangler secret put five times for the Supabase URL, Supabase key, Anthropic API key, admin token, and Slack webhook URL. Claude Code proposed the configuration, but I verified the timezone math and ran the secret commands.
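The relevant pieces of that config look roughly like this (the project name and compatibility date are illustrative, not copied from the real file):

```toml
name = "fitness-tracker"
main = "src/index.ts"
compatibility_date = "2024-09-23"  # adjusted after the first deploy surfaced a compatibility issue

[triggers]
# Cloudflare cron triggers fire in UTC, so these are 7 AM and 8 PM
# Eastern Standard Time -- note they drift an hour during daylight saving.
crons = ["0 12 * * *", "0 1 * * *"]
```

Secrets never go in this file; they're set once per value with `wrangler secret put NAME` and read from `env` at runtime.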

The Supabase schema creation. Claude Code generated the SQL -- four tables with appropriate column types, UUID primary keys, date indexes, upsert-friendly constraints, row-level security policies. Clean DDL. But I ran it myself in the Supabase dashboard.

The first deploy. wrangler deploy is a one-liner, but it surfaced a TypeScript compatibility issue requiring a compatibility_date adjustment in the config. A five-minute fix, but the kind of thing that only shows up when you actually deploy.

The Honest Scorecard

Component                              Claude Code wrote it   I adjusted it     I wrote it manually
Worker entry point                     Yes                    Minor             No
Auth middleware                        Yes                    No                No
Food log + AI macros                   Yes                    No                No
Weight/workout/bodyfat/notes handlers  Yes                    No                No
Supabase client factory                Yes                    No                No
Wrangler config                        Proposed               Yes (cron math)   No
Database schema                        Proposed SQL           Ran manually      No
Environment secrets                    No                     No                Yes (CLI commands)

Claude Code generated roughly 85% of the application code. I spent most of my time on configuration, deployment, and data model decisions -- the parts that require knowing your own infrastructure and your own goals.

The pattern -- describe the problem, let Claude Code propose the architecture, generate the code, manually handle configuration and deployment -- works for any domain. I've used it for content workflows, meeting prep systems, and competitive intelligence pipelines. The fitness tracker is the example. The pattern is the point.

Sunday Morning -- The Dashboard and Slack Integration

Saturday built the API. Sunday made it useful.

The HTML dashboard. Claude Code generated a complete dark-theme dashboard -- #0a0a0a background, #e0e0e0 text, progress bars that shift from green to yellow to red based on macro targets. Bars for calories, protein, carbs, and fat. A food log history table. A workout card showing today's programmed session from the active training block. A weight input form. A Garmin recovery metrics card showing Body Battery, HRV, resting heart rate, and sleep scores. All server-rendered HTML from the Worker -- no React, no frontend framework, no build step. A Cloudflare Worker returning HTML with inline CSS and JavaScript.
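A flavor of what "server-rendered HTML from template strings" means in practice -- an illustrative sketch only (the function name and exact color thresholds are assumptions based on the green/yellow/red description above):

```typescript
// Render one macro progress bar as an HTML fragment. No framework,
// no build step -- just a template string the Worker concatenates
// into the page it returns.
function progressBar(label: string, actual: number, target: number): string {
  const pct = Math.min(100, Math.round((actual / target) * 100));
  // yellow when well under target, green near target, red when over
  const color =
    pct < 90 ? "#eab308" : actual <= target * 1.1 ? "#22c55e" : "#ef4444";
  return (
    `<div class="bar"><span>${label}</span>` +
    `<div style="width:${pct}%;background:${color}"></div></div>`
  );
}
```

Each dashboard view is just a GET handler that queries Supabase, maps rows through helpers like this, and returns one HTML string.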

Three dashboard views. Today shows the detailed daily view -- every meal logged, every macro tracked, recovery metrics from the watch. Week shows daily adherence over seven days, weight trend, training block progress. Month shows the longer picture -- a 30-day weight bar chart, body fat trend, workout adherence rate, and a daily grid. Each view is a GET endpoint that queries Supabase and renders HTML. No client-side data fetching. The page loads with everything already populated.

The Slack cron job. This is the feature that turns the system from a weekend toy into personal automation. A Cloudflare cron trigger fires twice daily -- 7 AM and 8 PM Eastern. The Worker queries today's food logs, calculates macro totals against targets, pulls Garmin recovery data from the daily summary, checks workout status, and formats everything as Slack Block Kit blocks. The morning message shows yesterday's summary and today's targets. The evening message shows progress.
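Cloudflare passes the firing cron expression into the scheduled handler, so one Worker can tell the morning report from the evening one. A minimal sketch, assuming the handler shape from Cloudflare's scheduled-event API (the `reportKind` helper is an illustrative name, not the actual code):

```typescript
// Map the cron expression that fired to the report it should send.
// "0 12 * * *" = 12:00 UTC = 7 AM Eastern; "0 1 * * *" = 8 PM Eastern.
function reportKind(cron: string): "morning" | "evening" {
  return cron === "0 12 * * *" ? "morning" : "evening";
}

export default {
  async scheduled(event: { cron: string }, _env: unknown): Promise<void> {
    const kind = reportKind(event.cron);
    console.log(`sending ${kind} report`);
    // ...query today's food logs, compute totals against targets,
    // pull Garmin recovery data, POST Slack Block Kit blocks to the webhook
  },
};
```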

The small details that matter: a macroEmoji() helper function returns a checkmark when a macro is within 10% of target, a down-triangle when under 90%, and an up-triangle when more than 10% over. At a glance in Slack, I see four emoji and know where I stand. Calories on track, protein short, carbs fine, fat slightly over. No app to open. No dashboard to check. The information finds me.
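The whole helper is a few lines. A sketch of the logic as described above (the exact emoji and thresholds in the real code may differ slightly):

```typescript
// Checkmark within 10% of target, down-triangle under 90%,
// up-triangle more than 10% over.
function macroEmoji(actual: number, target: number): string {
  const ratio = actual / target;
  if (ratio < 0.9) return "🔻";
  if (ratio <= 1.1) return "✅";
  return "🔺";
}
```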

What I'd change: The dashboard HTML is server-rendered and long -- template strings add up to several hundred lines. For a personal system, inline HTML is fine. For a team system, I'd split it into components with a proper templating layer. The point of a weekend build is proving the value before investing in polish.

The Garmin Data Problem (And How I'm Solving It)

Every honest build log has a section where something was harder than expected.

I wear a Garmin watch daily. It records Body Battery, resting heart rate, HRV, sleep hours, sleep quality scores, sleep stages (deep, REM, light), steps, intensity minutes, active calories, and stress levels. I wanted all of that in my fitness database, correlated with my food and workout data. Getting it there was the most architecturally interesting part of the whole build.

The Official API Is Not for Individual Developers

Garmin's official developer program exists, but it targets companies building multi-user products. The Garmin Connect API requires registering a formal application, going through an approval process, and implementing OAuth 1.0a -- one of the more painful auth protocols to work with. The documentation talks about "consumer keys" and "application registration" and webhooks. It's built for fitness apps with thousands of users, not for one person pulling their own data into Postgres.

I looked into this for about an hour. The approval process alone would have eaten my entire weekend. OAuth 1.0a signature generation is notoriously finicky -- request signing with HMAC-SHA1, nonce generation, timestamp management, parameter encoding order. Libraries exist, but getting the token exchange working against Garmin's endpoints is a multi-day project. The webhook infrastructure (Garmin pushes data to your endpoint rather than you pulling it) requires a publicly accessible server, adding another infrastructure layer.

For personal automation, the official path is overengineered. It's right if you're building a product. It's wrong if you want your own data in your own database.

The Community Library: python-garminconnect

The practical path: the python-garminconnect library -- an open-source Python package (PyPI: garminconnect) that uses the same authentication flow as Garmin's mobile app. Under the hood, it's backed by garth, a library that handles Garmin's SSO authentication and OAuth token management.

The sync script authenticates with your Garmin email and password -- the same credentials you use for the Garmin Connect app. garth handles the SSO flow, obtains OAuth tokens, and persists them to ~/.garminconnect/ on disk. On subsequent runs, it loads the cached tokens. The tokens auto-refresh for roughly a year via garth's refresh logic, so you don't re-authenticate each time.

The library exposes 100+ methods mapping to Garmin Connect's internal API endpoints. The ones I use daily:

  • get_stats(date) -- steps, active calories, intensity minutes
  • get_heart_rates(date) -- resting heart rate
  • get_sleep_data(date) -- sleep duration, quality score, sleep stages with deep/REM/light breakdowns in seconds
  • get_stress_data(date) -- average daily stress level
  • get_body_battery(date) -- body battery values (the charged value at day end)
  • get_hrv_data(date) -- heart rate variability (RMSSD), both nightly and weekly averages

The sync script pulls a day of wellness data, normalizes the response into a flat dict matching my daily_summaries column names, and UPSERTs into Supabase via PostgREST. It only writes Garmin-sourced columns -- it never overwrites weight, body fat, notes, or food-related data from other parts of the system. The garmin_synced_at timestamp tracks when each row was last updated from Garmin.
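The sync script itself is Python, but the UPSERT it performs is a plain PostgREST HTTP call, so the shape is the same in any language -- sketched here in TypeScript for consistency with the rest of this post. The `Content-Profile` header targets the non-public fitness schema and `Prefer: resolution=merge-duplicates` turns the POST into an upsert, per standard Supabase/PostgREST conventions; the helper name and row fields are illustrative:

```typescript
// Build the PostgREST request for upserting one daily_summaries row,
// keyed on the date column. Returning url + init (rather than calling
// fetch directly) keeps the request construction testable offline.
function buildUpsertRequest(
  supabaseUrl: string,
  serviceKey: string,
  row: Record<string, unknown>,
) {
  return {
    url: `${supabaseUrl}/rest/v1/daily_summaries?on_conflict=date`,
    init: {
      method: "POST",
      headers: {
        apikey: serviceKey,
        Authorization: `Bearer ${serviceKey}`,
        "Content-Type": "application/json",
        "Content-Profile": "fitness",          // write to the fitness schema
        Prefer: "resolution=merge-duplicates", // upsert instead of insert
      },
      body: JSON.stringify([row]),
    },
  };
}
```

At the call site it's just `const r = buildUpsertRequest(...); await fetch(r.url, r.init);`. Because the payload only ever contains Garmin-sourced columns, the merge leaves weight, body fat, notes, and food data untouched.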

The CLI supports three modes: default (sync yesterday), --backfill N (sync last N days, max 30 per Garmin's practical rate limits), and --date YYYY-MM-DD (sync a specific date). There's also --dry-run for debugging that prints the data without writing to Supabase.

The Honest Friction Points

This integration is not fully bulletproof, and I don't expect it ever will be.

2FA is the biggest annoyance. If you have multi-factor authentication enabled on your Garmin account (which you should), the first authentication requires approving a prompt on your phone. garth handles the token refresh cycle after that, but if the tokens fully expire -- roughly once a year, or if Garmin forces a re-auth -- you need your phone to approve the login again. I've had to re-authenticate manually twice in the first few weeks.

Rate limiting is real but undocumented. The library uses Garmin's internal API endpoints -- the same ones the mobile app uses. Garmin doesn't publish rate limits for these endpoints, but aggressive backfilling can trigger temporary blocks. Syncing one day at a time with a few seconds between requests works reliably. --backfill 7 is fine. --backfill 30 sometimes gets throttled.

The API surface is unofficial and can change. The python-garminconnect library is actively maintained (v0.2.38+ as of early 2026), but it reverse-engineers Garmin's internal APIs. When Garmin updates their app, endpoints occasionally shift, and the library needs an update to match. This has happened once since I built the sync script. The fix was pip install --upgrade garminconnect and re-running.

Not all metrics are available every day. Body Battery requires a compatible device (most modern Garmin watches support it). HRV requires wearing the watch during sleep. If the watch was charging overnight, those columns come back as NULL. The system handles this gracefully -- NULL values show a dash on the dashboard, and the Slack message skips that metric.

The bigger architectural point: not everything in a personal automation system needs to be bulletproof. The Garmin sync failing for a day doesn't corrupt anything. The daily summary just has NULL values for wearable metrics. Food logging, workouts, Slack reports -- all keep working independently. Resilience through isolation, not redundancy. The maintenance burden is maybe ten minutes a month, and most months it's zero.

The Compound Effect -- Why This Isn't a Weekend Toy

It's been running daily since I built it. Every morning at 7 AM, a Slack message arrives with yesterday's summary. Every evening at 8 PM, today's progress. Every meal logged in plain English. Every workout tracked with training block association.

The data compounds. After a few weeks of macro tracking, patterns emerge in the weekly view. Days when protein was consistently under target. The correlation between sleep quality and how much I ate the previous evening. Monthly weight trends that show whether the macro targets are working or need adjustment. The dashboard shows what I care about. Nothing more.

The cost: $0 per month. Cloudflare Workers free tier handles compute. Supabase free tier handles the database. The only variable cost is Anthropic API calls for macro estimation -- roughly $0.01 to $0.02 per day, depending on how many meals I log. Under $1 per month for AI-powered food tracking.

The maintenance burden: Near zero. The Worker hasn't been touched since V2 added batch food logging and the month view. The Garmin sync occasionally needs a token refresh -- a two-minute task.

What personal automation looks like in 2026. Not Zapier workflows connecting SaaS apps. Not no-code app builders with their own limitations and monthly fees. Custom infrastructure that does what you need, built with AI coding assistance, running on free tiers indefinitely. The same tools and patterns that power Knowledge OS for professional GTM work apply to building systems for your own life. The stack is the same. The skills transfer.

Who this is for: you don't need to be a developer. I'm a GTM operator who builds systems with AI tools. You need to know what you want, describe it clearly, and iterate when the first version isn't right. Claude Code handles the code. You handle the thinking.

What I'd Build Next (And What You Could Build This Weekend)

My next project: a reading log that syncs Kindle highlights to a searchable database. Same pattern -- Worker + Supabase + Slack. Highlights arrive as a daily digest. Searchable by book, topic, or date range. The kind of thing that doesn't exist as a product because the market is too small, but matters to me.

Weekend-scale projects any operator could build:

  • A meeting prep system that pulls calendar data and pre-researches attendees from LinkedIn and your CRM
  • A personal CRM that tracks relationship touchpoints and reminds you when you haven't reached out in a while
  • A content idea capture system that turns voice memos into structured outlines
  • A portfolio tracker that sends daily Slack summaries of market movements for positions you care about

The repeatable pattern: Identify a workflow you do manually or wish existed. Describe it to Claude Code. Build the API layer on Cloudflare Workers. Store data in Supabase. Surface insights where you already live -- Slack, email, a dashboard. Deploy. Forget about it until the daily messages start arriving.

The build-vs-buy tradeoff. Not everything should be a custom build. If a SaaS tool solves 90% of the problem and the missing 10% doesn't matter much, use the SaaS tool. Build when the remaining 10% is the part that matters most -- the specific data model, the specific interface, the specific integration no product offers. My fitness tracker exists because no product gives me AI macro estimation, Garmin data integration, and Slack delivery in one system. The gap was specific enough to fill.

Anthropic's own data shows non-engineering teams building with Claude Code internally. The barrier isn't technical skill. It's clarity about what you want.

Closing

Personal automation is a shift happening without much fanfare. AI coding tools don't just help developers write code faster -- they let anyone with a clear problem and a weekend build infrastructure that serves them daily.

Two days. One Cloudflare Worker. One Supabase database. Eight API endpoints. AI macro estimation. Garmin wearable data sync. Twice-daily Slack summaries. A dark-theme dashboard with three views. $0 per month.

The best fitness tracker is the one that works the way you think. The best personal automation is the one that runs without you remembering it exists.