Translated by AI
Building things has been irresistibly fun since I started using AI
Seeing an App Go Viral in the Morning, I Had a Demo Running 1 Hour Later
One morning in February, an app called "World Monitor" appeared on my X timeline. It's a so-called OSINT dashboard that displays real-time information on global conflicts, earthquakes, and transportation infrastructure on a map. Its cyberpunk look was cool, and it went quite viral, even getting summarized on Togetter.
Looking at it, I thought, "I want a Japanese version of this."
In the past, I would have stopped there. I would have just consumed it as "interesting" and scrolled to the next tweet. If I wanted to properly learn D3.js for map rendering or React for the frontend, it would take several days each. It would have gone on my "someday" list and been forgotten—a pattern I've repeated many times.
But this time was different. I opened Claude Code and started brainstorming: "I want to create a cyberpunk-style news mapper for Japan." Rendering Japan's GeoJSON with D3.js, aggregating RSS feeds with Cloudflare Workers, and mapping news to prefectures—I fleshed out the design with the AI while verbalizing the concept, then left the implementation to the agent.
An hour later, the first demo was running. It was ready to be published within that same day.
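One piece of that design, mapping headlines to prefectures, is simple enough to sketch. This is a hypothetical reconstruction, not the project's actual code: the function name and the matching strategy (substring search over prefecture names) are my illustration only.

```typescript
// Illustrative sketch: assign RSS headlines to prefectures by simple
// name matching. Abbreviated list; the real table would hold all 47.
const PREFECTURES = ["北海道", "東京都", "大阪府", "福岡県"];

function mapHeadlineToPrefectures(headline: string): string[] {
  // Substring match is naive (it misses "東京" without the 都 suffix),
  // but it is enough to get a first demo on the map.
  return PREFECTURES.filter((p) => headline.includes(p));
}
```

A real version would also need short-form aliases and a fallback bucket for national news, but the first demo does not.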

This kind of thing has been happening daily lately.
What the GitHub "Grass" Shows
Looking at my GitHub contribution graph, the change is obvious.

1,570 contributions in the last year. However, the vast majority are concentrated from June onwards. From February to May, it's almost blank. From June, "grass" began to grow as if I were a completely different person.
What happened? That was exactly when I started using coding agents—specifically Claude Code—seriously for personal projects.
Before: The "Wanting to Make but Unable to" Period
Let me share a bit of my background.
I've spent most of my career in machine learning and backend development. About 10 years. Building model training pipelines, setting up inference APIs, and integrating them into products—that's the kind of work.
The tricky part of this field is that the deliverables are hard to see. Even if the model's accuracy improves by 0.3% or I build a backend that solves complex business problems, it isn't visible to users. When asked "What did you make?", I can only show terminal logs or Grafana dashboards.
On the other hand, I always had ideas like "it would be useful to have a tool like this." A dashboard to centrally manage my health data, a system to search through scattered photos using natural language, a mechanism to automatically aggregate daily records—I had many in my head.
But I always hit the same wall: the learning cost of frontend and mobile apps.
Learning React from scratch. Setting up an Android development environment. Drawing graphs with D3.js. Building a site with Astro. Each tech stack has its own learning curve. Reading official documentation, typing out tutorial code, and finally starting to write my own code—the time it took to reach that stage killed my motivation for personal projects.
It wasn't that I was running out of ideas. The final step of implementation was always the bottleneck.
Turning Point: Encountering Coding Agents
Around June 2025, I started paying for a personal subscription to Claude Code and using it seriously.
The first thing that shocked me was the experience that "as long as I pin down the specifications, working deliverables come out."
"I want to create a REST API to receive health data with Cloudflare Workers. The endpoint is this, the schema is that, and the authentication is via API key"—when I articulate these specifications and give them to the agent, working code comes out. All I do is review and course-correct.
What I realized here was that the ability to articulate specifications is exactly the skill I've cultivated over 10 years as an ML and backend engineer. Breaking down problems into requirements. Specifying constraints. Defining expected inputs and outputs. This is what I've been doing all along, whether in model design or backend design. What I lacked was the implementation skills for specific tech stacks, not the design ability. The AI agent filled that exact gap.
Another big thing was that the order of learning was reversed.
The traditional learning flow was "read documentation → type out tutorial code → write your own code." With AI agents, this changes to "convey what you want to do → working code comes out → understand the unclear parts by asking the AI." There is a working thing first, and you learn backwards from there. You can absorb only the necessary knowledge at the necessary timing, in a way that fits your own context.
"The Ability to Write Specifications" Has Become the Main Character of Development
Recently, the concept of Spec-Driven Development (SDD) has been gaining attention. Tools like AWS Kiro and GitHub Spec Kit have emerged, and an approach of "first writing specifications and using them as the source of truth to let AI generate code" is becoming formalized.
In traditional development, the implemented code was the source of truth. Even if there were specification documents, the code was ultimately the reality. But in SDD, it is not the code, but the specifications that clearly state human intent that serve as the sole standard. The code is a deliverable derived from them, and if the specifications change, it can simply be regenerated.
When I learned about this, I thought, "This is exactly what I've been doing."
When I made the health-sync API, what I did was verbalize the specifications: "The endpoint is this, the schema is that, authentication is via API key, and the granularity of upsert is like this." I left the implementation to the agent. When I designed Claude Skills, what I did was write in SKILL.md "how to call this API and how to interpret the retrieved data." When I made cyber-japanese-news, what I did was convey the concept: "I want to map news to a cyberpunk-style map of Japan, the data source is RSS, and the backend is Cloudflare Workers."
In every project, my job was writing the specifications.
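Made explicit, the Specify-phase artifact for something like the health-sync API could look like the fragment below. The endpoint and field names are illustrative placeholders, not the real spec; only the categories (endpoint, schema, auth, upsert granularity) come from what I actually verbalized.

```markdown
# health-sync API — Specify phase (illustrative sketch)

- Endpoint: POST /records (path name invented for illustration)
- Schema: { date, metric, value }, one row per metric per day
- Auth: API key passed in a request header
- Upsert granularity: keyed on (date, metric); later writes overwrite
- Out of scope: multi-user support, bulk historical import UI
```

Everything downstream (Plan, Tasks, Implement) is derivable from a page like this, which is exactly why it is worth the human's full attention.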
The work of an ML engineer is, after all, "defining problems, specifying constraints, and designing evaluation criteria." When designing a model training pipeline, what you're doing is defining inputs and outputs, establishing preprocessing specifications, and selecting evaluation metrics—which is essentially writing specifications. This ability translated directly to personal development in the era of AI agents.
In the context of SDD, I focus all my efforts on the "Specify" phase, and leave "Plan → Tasks → Implement" to the agent. This is a workflow that tools like Kiro and Spec Kit are trying to formalize, which I was doing naturally even without the tools.
What I want to emphasize here is that the attention SDD is receiving reflects a structural change: the relative value of the "ability to write specifications" has risen. As AI lowers the barrier of implementation skill, the ability to articulate "what to make," "why make it," and "what constraints exist"—in other words, the ability to define specifications—is becoming the primary factor that determines whether development succeeds.
Because of this, the speed of prototyping has changed dramatically. The lead time from an idea to a working product has gone from days to hours. The cyber-japanese-news project mentioned at the beginning is a perfect example.
After: What I’ve Built in the Last 8 Months
Here are the things I've built since June.
Personal Healthcare Data Infrastructure. A system that aggregates weight, blood pressure, CPAP, sleep, step count, meals, and blood tests into Cloudflare Workers + D1, using Claude Skills for analysis and report generation. I can now answer "How was last month?" with data every time I visit the clinic. I also wrote the Android app (Health Connect integration) myself.
👉 Handing All My Health Data to AI Changed My Clinic Visits: A Personal Healthcare Platform with Claude Skills × Cloudflare
Location API + Spatial Search. Built an API that imports 14 years of Google Location History (300,000 entries) and uses the H3 spatial index to reverse-lookup "when did I go to this place?". Also includes real-time tracking with OwnTracks.
👉 I Built a Skill That Lets AI Stalk My Own Location Data
👉 If the Caller Is an LLM, You Don't Need Geocoding
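The H3 idea behind the spatial search can be illustrated with a simplified square grid. Real H3 uses hierarchical hexagonal cells (via a library like h3-js), so `cellId` below is a stand-in for illustration: quantize coordinates into cells and bucket visits by cell, and "when was I here?" becomes a single map lookup instead of a scan over 300,000 points.

```typescript
// Simplified stand-in for an H3-style spatial index (square grid, not hexagons).
type Visit = { timestamp: string; lat: number; lng: number };

const CELL_DEG = 0.01; // roughly 1 km cells; H3 resolutions are finer-grained

function cellId(lat: number, lng: number): string {
  return `${Math.floor(lat / CELL_DEG)}:${Math.floor(lng / CELL_DEG)}`;
}

// One-time pass: bucket every historical point by its cell.
function buildIndex(visits: Visit[]): Map<string, Visit[]> {
  const index = new Map<string, Visit[]>();
  for (const v of visits) {
    const id = cellId(v.lat, v.lng);
    const bucket = index.get(id) ?? [];
    bucket.push(v);
    index.set(id, bucket);
  }
  return index;
}

// Reverse lookup: "when did I go to this place?" is now O(1).
function visitsNear(index: Map<string, Visit[]>, lat: number, lng: number): Visit[] {
  return index.get(cellId(lat, lng)) ?? [];
}
```

A production version would also check neighboring cells to avoid boundary misses, which is one of the things hexagonal cells make cleaner.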
AI External Memory Architecture. An article discussing an architectural pattern for "giving AI read/write external memory" by abstracting the structure common to the two systems mentioned above.
👉 What Changed When I Gave AI External Memory: A Personal read/write Memory Platform, Different from RAG
Photo Mosaic Processing App (cnsr). When posting photos of horse racing or aircraft to SNS, I want to hide people's faces or license plates in the background. Existing tools are either too cumbersome or lack functionality. I built a simple app with Nuxt.js + Vue 3 where you just "open in browser, select range, apply mosaic or black-out, and download."
EXIF Information Overlay Tool (MetaMark). A tool to overlay shooting info like camera name, lens, focal length, and ISO sensitivity elegantly onto photos. I wanted to complete "photos with shooting data"—often seen on Instagram—entirely within the browser. Built with Next.js + React, processing everything from EXIF extraction to Canvas rendering on the client side, so photos are never sent to a server.
Cyberpunk-style Japan Map News Mapper. The one introduced at the beginning. Inspired by World Monitor and built with D3.js + React + Cloudflare Workers. I shaped it in a single day starting from just an idea.
Resume Web App. Built with Astro, with a built-in editor. I operate it by giving Claude Code Markdown change instructions, which it then implements.
Claude Skills (5 total). health-sync, nutrition-tracker, health-report, switchbot-env, and location-sync. These themselves are products that solve the design challenge of "connecting AI with external data."
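Stripped of the Canvas plumbing, the core of the mosaic step in a tool like cnsr reduces to block averaging: split the selected region into blocks and replace every pixel in a block with the block's mean. A grayscale sketch (in the browser, the same loop would run over the RGBA array from `getImageData`; this is an illustration, not the app's actual code):

```typescript
// Mosaic a grayscale image stored as a flat row-major array.
function mosaic(pixels: number[], width: number, block: number): number[] {
  const height = pixels.length / width;
  const out = pixels.slice();
  for (let by = 0; by < height; by += block) {
    for (let bx = 0; bx < width; bx += block) {
      // Average all pixels inside this block (clipped at image edges).
      let sum = 0, count = 0;
      for (let y = by; y < Math.min(by + block, height); y++)
        for (let x = bx; x < Math.min(bx + block, width); x++) {
          sum += pixels[y * width + x];
          count++;
        }
      const avg = Math.round(sum / count);
      // Write the average back over the whole block.
      for (let y = by; y < Math.min(by + block, height); y++)
        for (let x = bx; x < Math.min(bx + block, width); x++)
          out[y * width + x] = avg;
    }
  }
  return out;
}
```

Because everything runs on a plain array, the same function works on Canvas `ImageData` channels, which is what keeps the whole tool client-side.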
The point is that these are all things I wanted to do but couldn't touch before June. I didn't start using Astro, the Android Health Connect API, D3.js, Nuxt.js, or the Next.js Canvas API after learning them from scratch on my own. I learned by doing, working alongside the AI while building. Both cnsr and MetaMark are examples of tools I'd long wanted for my photography hobby that finally took shape.
How AI Agents Changed "Me"
The implementation bottleneck disappeared, freeing me to focus on design
This is the most significant change. The "ability to break down problems and translate them into requirements" that I cultivated as an ML and backend engineer can now be directly applied to personal development. Previously, even with that ability, I would stop at the implementation stage. Now, I can concentrate on design and requirements definition. I can spend time thinking about "what to make," "why make it," and "what constraints exist."
Ironically, this is a story of how my existing skills were unlocked by AI. I didn't acquire new abilities; rather, the bottleneck in my existing abilities was removed.
The sense of "being able to build" drives motivation
Creating one thing leads to the next idea. Building health-sync made me think, "I want location info too," leading to location-sync. Then, "I want to see the indoor environment" led to switchbot-env, and "I want a report that analyzes all of this together" led to health-report.
The "grass" doesn't stop growing not because of a sense of duty or self-improvement techniques. It's because this cycle rotates naturally. Create → Use → Want to improve → Create again. As long as the supply of ideas doesn't stop, there's a reason to keep moving.
From "Observer" to "Practitioner"
Previously, when I saw new technology or viral products, my attitude was "it looks interesting, but I probably won't use it." I was just a consumer of tech news.
Now, it's different. Seeing World Monitor made me think "I'll make a Japanese version." Seeing someone's OSS makes me think "Can I integrate this into my project?" When a new API is released, visions of "I can do this with that" come to mind. My relationship with technology has changed from passive to active.
Who Benefits from AI Agents?
Let me generalize from my own case.
The biggest beneficiaries are those who "have ideas and specification-writing skills but lack implementation skills in certain areas." Whether you can write backend but are weak at frontend, understand server-side but have no experience with mobile, or can perform data analysis but don't know how to turn it into a web app—AI agents provide a significant buff to these kinds of developers.
Conversely, those who benefit less are those who "don't know what to make." AI can take over the coding, but it cannot take over the requirements definition. The "ability to articulate specifications" is still required on the human side. Saying "anyone can make an app with AI" is half-right and half-wrong. It is true for those who have a clear idea of "what they want to make," but for those who don't, it just ends with having more tools.
Another important factor is the ability to evaluate AI output. If you don't have enough literacy to perform code reviews, you won't be able to judge whether what the AI produced is correct. You won't be able to handle errors or verify the validity of the architecture. It's not a silver bullet for a complete novice.
Are there insights that can be applied to work?
My experiences in personal development are providing feedback to my professional work as well.
Gaining a sense of "building with AI." By getting used to collaborating with AI agents in personal projects, you can naturally transition to using them in professional tasks. The knack for "how much to leave to the AI" and "where a human should review" can only be understood by actually using them extensively.
The ability to verbalize specifications is a weapon even when dealing with people. Someone who can give precise instructions to an AI can also write precise specifications for humans. Writing prompts for an agent and task descriptions for team members are essentially the same skill.
It continues because it's fun
I've written this like a technical article, but to be honest, the driving force is extremely simple. Making things is fun.
The reason my GitHub "grass" doesn't stop growing isn't because I set goals or used habit-forming techniques. It's just that the act of making things is fun, and before I know it, I'm opening Claude Code—that's all.
What the AI agent did was fill the gap between "wanting to make" and "being able to make." By reducing the friction of turning ideas into reality, I can concentrate on the joy of thinking about "what to make."