
Semi-Automated Zenn Posting with AI: Turning Dev Chats into Technical Articles


"Developing via chat, but writing articles separately is a pain, right?"

Hey, I'm Hirahara.

I work alongside AI assistants like Antigravity and openclaw almost every day.

Recently, my quality of life has improved significantly as I can manage my health and handle minor tasks just by giving instructions through Discord.

However, there's just one problem.

Everything stays as "one-off" work.

  • "How did I set that up back then?"
  • "How did I end up solving that problem?"

Does this happen to you?

When I look back calmly, all the answers are right there in the chat.

  • What code I used
  • Why it was wrong
  • How I finally solved it

In other words, the chat interaction itself is the manuscript for a technical article.

But writing a blog post from scratch again is a hassle.
In the end, knowledge remains buried, and I move on to new technologies, stumbling over similar issues again.

Isn't that a waste?

So, I built a system to semi-automatically post development chat content as Zenn articles.

In this article, I will introduce this system by dividing it into three layers.

  1. Infrastructure — An environment where articles are deployed just by pushing via Zenn CLI + GitHub integration.
  2. Automation — Scripts to eliminate the hassle of Front Matter and slugs.
  3. Article Creation Flow — A pipeline that converts chat into "knowledge → article."

What I Built

The overall picture of the completed system looks like this.

The key point is that it's not just "make this into an article," but includes a process to decompose and reconstruct the chat content into knowledge first.

  1. Walkthrough: Organize what was done and what happened during the development session chronologically.
  2. Topic Decomposition: Split it into "units valuable to the reader" (multiple articles can come from one session).
  3. Article Generation: Automatically add metadata (title, tags, slug) and place it as a draft.
  4. Review & Posting: A human adjusts the tone and automatically deploys to Zenn with git push.
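As a rough sketch, the four steps above could be wired together like this. All function names and the `TOPIC:` convention here are hypothetical stand-ins for illustration, not the actual skill implementation:

```python
# Minimal sketch of the 4-step pipeline (hypothetical stand-ins, not the real skill).

def walkthrough(chat_log: list[str]) -> list[str]:
    """Step 1: keep the session log in chronological order, dropping empty lines."""
    return [line for line in chat_log if line.strip()]

def decompose_topics(events: list[str]) -> dict[str, list[str]]:
    """Step 2: group events into reader-sized topic units.
    Here: a naive split on lines starting with 'TOPIC:' (an assumed marker)."""
    topics: dict[str, list[str]] = {}
    current = "untitled"
    for line in events:
        if line.startswith("TOPIC:"):
            current = line.removeprefix("TOPIC:").strip()
            topics.setdefault(current, [])
        else:
            topics.setdefault(current, []).append(line)
    return topics

def generate_drafts(topics: dict[str, list[str]]) -> list[str]:
    """Step 3: one draft body per topic; metadata is added later by the script."""
    return [f"# {title}\n" + "\n".join(body) for title, body in topics.items()]

log = [
    "TOPIC: Zenn CLI setup", "ran npx zenn init",
    "TOPIC: slug generation", "hashed the title",
]
drafts = generate_drafts(decompose_topics(walkthrough(log)))
# One session yields two draft articles here, matching the observation that
# multiple articles can come out of a single session.
```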

You'll realize this once you try it: a surprisingly large number of article ideas come out of a single session.

Furthermore, by organizing them into articles, the AI can refer to them in the next session, making them easier to utilize and recall than if they were left as raw history.

"Turning work into content" is important not just for business, but for yourself and the AI as well.

Now, let me introduce the actual steps.

Setup (Can be done in 15 minutes)

Layer 1: Zenn CLI + GitHub Integration

# Create repository with GitHub CLI (private is fine)
gh repo create zenn-content --private

# Set up Zenn CLI environment locally
mkdir zenn-repo && cd zenn-repo
npm init --yes
npm install zenn-cli
npx zenn init

npx zenn init will automatically generate articles/ and books/.

# Connect to GitHub & initial push
git init
git remote add origin https://github.com/<your-username>/zenn-content.git
git add -A && git commit -m "Initial commit: Zenn CLI setup"
git branch -M main && git push -u origin main

Then, just link the repository in the Zenn deployment settings.

Now, every time you push to main, it will be automatically deployed.

To be honest, I didn't really understand these settings, so I had Antigravity do it for me.

If you tell the AI to "set up Zenn CLI," it will handle everything; honestly, you could just paste this article into the chat as-is.

Layer 2: Article Generation Script

I automated the generation of slugs (unique article IDs) and the addition of Front Matter using a Python script.

from datetime import datetime
import hashlib

def generate_slug(title: str) -> str:
    """Automatically generate a slug from the date + title"""
    date_prefix = datetime.now().strftime("%Y%m%d")

    # Japanese titles can't be used as-is, so hash the title instead
    hash_part = hashlib.md5(title.encode()).hexdigest()[:8]
    return f"{date_prefix}-{hash_part}"  # Example: 20260215-a1b2c3d4

Usage:

python3 generate_article.py \
  --title "AIアシスタントでZenn記事を半自動投稿する" \
  --topics "ai,automation,zenn" \
  --emoji "🤖" \
  --body-file body.md

With this, an article file like articles/20260215-a1b2c3d4.md will be generated with Front Matter included.
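For reference, here's a minimal sketch of how such a file could be assembled. The exact layout used by generate_article.py isn't shown in this article; the field names below follow Zenn's standard front matter (title, emoji, type, topics, published):

```python
from datetime import datetime
import hashlib

def generate_slug(title: str) -> str:
    # Same scheme as above: date prefix + first 8 hex chars of the title's MD5
    date_prefix = datetime.now().strftime("%Y%m%d")
    hash_part = hashlib.md5(title.encode()).hexdigest()[:8]
    return f"{date_prefix}-{hash_part}"

def build_article(title: str, emoji: str, topics: list[str], body: str) -> tuple[str, str]:
    """Return (file path, file content) with Zenn front matter prepended."""
    slug = generate_slug(title)
    front_matter = "\n".join([
        "---",
        f'title: "{title}"',
        f'emoji: "{emoji}"',
        'type: "tech"',          # Zenn: "tech" or "idea"
        f"topics: [{', '.join(topics)}]",
        "published: false",      # start as a draft; flip to true when ready
        "---",
    ])
    return f"articles/{slug}.md", front_matter + "\n\n" + body
```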

Posting Script

It's a simple script that just uses the modified article title as the commit message and pushes it.

#!/bin/bash
# Pick up the modified article (here: the most recently modified file in articles/)
FIRST_ARTICLE=$(ls -t articles/*.md | head -n 1)

# Automatically generate the commit message from the article title
TITLE=$(grep '^title:' "$FIRST_ARTICLE" | sed 's/title: *"\(.*\)"/\1/')
git add -A
git commit -m "article: $TITLE"
git push origin main

Layer 3: Actual Article Creation Flow

Since I've registered this system as a "skill" in the AI assistant, the following flow runs:

  1. Development Chat — Write code and discuss designs together with the AI.
  2. Create Walkthrough — After the session, summarize what was done and what happened in a structured way.
  3. Topic Decomposition — Reflect on the intentions within the session and decompose them into knowledge units valuable to readers.
  4. Article Generation — Place a draft with Front Matter in articles/ using generate_article.py.
  5. Review — A human adjusts the tone and structure (AI-generated text can be uniform if left as is).
  6. Post — Deploy to Zenn with a single command using publish.sh.

This very article was written using this exact flow.

What I Learned from Trying It Out

Surprisingly, It Doesn't Become an Article "As Is"

Even if you paste the development log of a chat as is, it doesn't really work as a readable piece.

Just listing technical procedures doesn't make it "relatable" for the reader, right?

Ultimately, a process to reconstruct the "story" for the reader was necessary.

However, since the AI creates the skeleton, the barrier is significantly lower than "writing from scratch."

So, what I actually do is less like proofreading and more like "adding what I want to say."

A Drafting Flow is Essential

Initially, I considered posting in one go with published: true, but it's better to always look through and touch it up at least once.

Text generated by AI inevitably becomes uniform and deviates from my own tone.

Therefore, I use a method where I save the AI-generated file (_draft.md) as is and proofread a copied _final.md.

By doing this, I expect that the AI will learn from the difference between the before and after, improving the accuracy of the next article generation.

This article itself was created using a structure analyzed from my past articles and those of writers I look up to, so about 90% of it was adopted as is.

I think the balance of "letting the AI make 90% and bringing out individuality in the remaining 10%" is just right, rather than aiming for 100%. (To be honest, you can tell when an article is too "AI-ish.")

Auto-slug Generation Is More Convenient Than I Thought

I hadn't noticed this because I'd left everything to Antigravity, but Zenn's article slugs normally have to be decided by hand: they must be 12 to 50 characters of a-z, 0-9, and hyphens.
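A quick check against that constraint is easy to script as well. This is a sketch based on the rule as stated here (a-z, 0-9, and hyphens, 12 to 50 characters); check Zenn's documentation for the exact rules:

```python
import re

# Slug rule as described above: a-z, 0-9, hyphens, 12-50 characters
SLUG_PATTERN = re.compile(r"^[a-z0-9-]{12,50}$")

def is_valid_zenn_slug(slug: str) -> bool:
    """Validate a slug against the constraint described above."""
    return bool(SLUG_PATTERN.fullmatch(slug))

print(is_valid_zenn_slug("20260215-a1b2c3d4"))  # True: date + hash is 17 chars
print(is_valid_zenn_slug("my-post"))            # False: only 7 chars
```

The date + hash scheme from generate_slug always lands at 17 characters, so it passes this check by construction.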

Simply auto-generating them from the date + hash removes one step from the workflow.
Being able to pick up on points like this, which are easy to miss while vibe-coding, is something only this kind of retrospective system makes possible.

Final Directory Structure

Zenn_Auto_Publisher/
├── zenn-repo/                          # ← GitHub Integration (git managed)
│   ├── articles/
│   │   ├── 20260215-xxxxxxxx.md        # ← Published article (recognized by Zenn)
│   │   └── drafts/                     # ← Draft management (ignored by Zenn)
│   │       └── 2026/
│   │           └── 02/
│   │               └── 15/
│   │                   ├── xxx_draft.md  # AI Draft (Original / Do not touch)
│   │                   └── xxx_final.md  # Finalized (Human edited)
│   ├── books/
│   ├── package.json
│   └── .gitignore
└── scripts/                            # Semi-automation tools
    ├── generate_article.py
    ├── publish.sh
    └── config.json

The reason for organizing drafts/ by year, month, and day is to save the AI's draft alongside the human-edited version. By accumulating these differences, the AI's writing accuracy can be continuously improved.
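Collecting those differences can be as simple as diffing the paired files. Here's a sketch using Python's difflib; the _draft/_final naming follows the convention above:

```python
import difflib
from pathlib import Path

def collect_edit_diff(draft_path: str, final_path: str) -> str:
    """Unified diff between the AI draft and the human-edited final version.
    Accumulating these diffs is the raw material for improving future drafts."""
    draft = Path(draft_path).read_text(encoding="utf-8").splitlines(keepends=True)
    final = Path(final_path).read_text(encoding="utf-8").splitlines(keepends=True)
    return "".join(difflib.unified_diff(draft, final,
                                        fromfile=draft_path, tofile=final_path))
```

Feeding these diffs back into the next session's context is what lets the AI move closer to your tone over time.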

Summary

Let's look back at the three layers mentioned at the beginning.

① Infrastructure (Zenn CLI + GitHub Integration)

  • An environment where articles are automatically deployed to Zenn just by doing a git push.

② Automation (Scripts)

  • Zero barriers to "starting an article" by auto-generating Front Matter and slugs.
  • Posting is also a single command, including commit message generation.

③ Article Creation Flow (Walkthrough → Topic Decomposition → Draft → Review)

  • Instead of turning dev chat into an article immediately, first decompose and reconstruct it as knowledge.
  • Save AI drafts and human edits side-by-side to accumulate and improve the writing differences.

I believe the biggest enemy of technical blogging is

"the hassle."

Thanks to this flow of creating articles while summarizing development interactions, I've come closer to the experience of

"articles being created as a byproduct of development."

If you are developing with an AI, you can build a similar system in half a day.
Why not give it a try?
