GenAI Updates
AI content, AI detection removal, AI tools, AI writing, Claude Code, coding skills, content analysis, copywriting, developer tools, GitHub, humanizer, LLM tools, natural language, natural language processing, open source, open source AI, text humanization, text quality, writing quality, writing tools
Mike
GitHub – blader/humanizer: Claude Code skill that removes signs of AI-generated writing from text
If you’ve ever pasted something you wrote into a doc, leaned back, and thought, “Why does this sound like a corporate press release?”… you’re not alone. I’ve been there. Too many times. And that’s exactly where a small GitHub project called humanizer quietly earns its place.
This repository, created by blader, introduces a Claude Code skill designed to remove the usual signs of AI-generated writing. Not by making text louder or more clever, but by doing the opposite. It trims the fluff. It grounds statements. It swaps grand claims for specific facts. Suddenly, your writing reads like something a real person would say over coffee, not a keynote slide.
What’s interesting is how it approaches the problem. The skill is based on Wikipedia’s “Signs of AI writing” guide, a living document maintained by WikiProject AI Cleanup. That guide exists because thousands of AI-written texts started to show the same habits. Overexplaining. Overpromising. Sounding impressive while saying very little. You’ve seen it. We all have.
The before-and-after example in the repo says a lot. The original text talks about innovation, revolutions, and lasting impact. The humanized version simply lists what changed (batch processing, keyboard shortcuts, offline mode) and mentions early beta feedback. It’s calmer. More believable. Honestly, more useful.
If you already use Claude Code, the setup is straightforward. Drop the skill file into the skills directory and invoke it when needed. You can also just ask Claude to humanize text directly using this skill. No drama. No complicated workflow changes.
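For reference, here is a minimal install sketch. I’m assuming the conventional Claude Code skill layout (a folder containing a SKILL.md, placed under ~/.claude/skills/ for personal skills, or .claude/skills/ inside a project); the exact file names in the repo may differ, so check its README.

```shell
# Sketch, assuming the standard Claude Code skill layout:
# each skill lives in its own folder containing a SKILL.md.
git clone https://github.com/blader/humanizer.git

# Personal skills directory (project-level skills go in .claude/skills/ instead)
mkdir -p ~/.claude/skills/humanizer

# Copy the skill definition; the file name here is an assumption
cp humanizer/SKILL.md ~/.claude/skills/humanizer/
```

From there, Claude can pick the skill up on its own, or you can ask for it directly with something like “humanize this paragraph.”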
What I like most is the philosophy behind it. Large language models predict what sounds statistically likely. Tools like this gently push things back toward what sounds lived-in. Specific. Slightly imperfect. More human.
If you want to explore it yourself, the project lives here:
https://github.com/blader/humanizer
As AI assisted writing becomes normal, not special, tools like this feel less like polish and more like hygiene. And that’s probably a good sign of where we’re headed.