Tech workers across China are facing a new kind of pressure: managers are asking them to document their daily work so AI agents can learn to replicate it. The trend accelerated after a viral GitHub project called Colleague Skill surfaced, promising to “distill” a colleague’s skills and quirks into a reusable AI persona. What began as an online joke has become a sharp prompt for workers to reconsider the future of their jobs and the value of the human touch in knowledge work.
How Colleague Skill works and why it spread
Colleague Skill lets users name a coworker, add basic profile details, and then automatically harvest workplace data to create a detailed manual of that person’s tasks. The tool imports chat histories and files from popular Chinese apps like Lark and DingTalk, then generates step-by-step workflows and even notes on individual idiosyncrasies for an AI agent to emulate. Its apparent ability to capture tone, punctuation habits, and response patterns helped the project spread quickly on social platforms.
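The reported pipeline, import a coworker's messages, then extract habits and turn them into instructions for an agent, can be sketched in a few lines. This is a toy illustration only: the function name `build_persona_manual`, the specific "habits" measured, and the output format are all invented here, and the real project reportedly relies on a large language model rather than simple counting.

```python
from collections import Counter

def build_persona_manual(messages):
    """Distill a coworker's chat messages into a toy 'persona profile'.

    Hypothetical sketch: the real tool reportedly feeds imported
    Lark/DingTalk histories to an AI model; here we only count
    surface habits (favorite words, punctuation energy) to show
    the general shape of the idea.
    """
    word_counts = Counter()
    exclamations = 0
    for text in messages:
        word_counts.update(text.lower().split())
        exclamations += text.count("!")
    common = [word for word, _ in word_counts.most_common(3)]
    return {
        "signature_words": common,
        "exclamations_per_message": exclamations / max(len(messages), 1),
        "instructions": [
            f"Prefer wording that reuses: {', '.join(common)}",
            "Match the measured punctuation energy when replying.",
        ],
    }

profile = build_persona_manual([
    "Ship it today!",
    "Ship the fix, then update the doc!",
    "Doc review done, ship when ready.",
])
print(profile["signature_words"])
```

Even this crude version shows why the output can feel uncanny: a handful of surface statistics is enough to make generated text read as "yours."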
The repository was created by Tianyi Zhou, an engineer at the Shanghai Artificial Intelligence Laboratory, who says the project started as a stunt reacting to AI-related layoffs and a growing corporate push to automate. Although intended as a spoof, the tool tapped into real anxiety: many employees reported that their bosses already encourage similar documentation so automation projects can proceed. The viral moment opened a wider conversation about automation, dignity, and the limits of treating human judgment as a checklist.
What workers are experiencing on the ground
Some tech workers who saw the project online experimented with it themselves. Amber Li, a 27-year-old developer in Shanghai, used it to recreate a former coworker and was struck by how accurately the tool produced a job manual and captured small conversational quirks. The result felt useful yet unsettling: an agent that can debug code and reply instantly may boost productivity, but it also makes human habits feel commodified.

AI agents can already perform many practical, repeatable tasks that teams value, yet they remain imperfect in complex business settings:
- They can control a computer to run scripts, summarize documents, and auto-respond to common emails.
- Agents can book appointments, generate drafts, and triage routine requests.
- They struggle with messy exceptions, nuanced judgment calls, and long-term context that humans handle intuitively.
As a result, companies are asking workers to create manuals that translate tacit knowledge into codified steps an agent can follow.
Why employers are encouraging “blueprints” for work
Experts say there are pragmatic reasons firms push for these documentation projects beyond the buzz around AI. Emory assistant professor Hancheng Cao notes that companies gain both hands-on experience with agent tools and access to richer data about employee know-how, workflows, and decision patterns. That intelligence helps managers identify which tasks are standardizable and which still require human judgment, guiding smarter automation choices rather than blind replacement.
From the worker’s perspective, however, the exercise can feel reductive and alienating. An anonymous software engineer told reporters that training an AI on their workflow flattened complex judgment into modular steps, increasing the fear of replaceability. Online, many employees use gallows humor to cope: jokes and dark memes reflect a broader unease about being distilled into a set of tokens or instructions.
Pushback, sabotage tools, and legal questions
The trend has spawned creative resistance. In early April, Beijing product manager Koki Xu published an “anti-distillation” skill on GitHub that deliberately sabotages attempts to turn a person’s work into a clean agent-ready manual. Xu’s tool offers light, medium, and heavy sabotage modes that rewrite material into vague, non-actionable language, undermining the effectiveness of any resulting AI stand-in.
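The light/medium/heavy mode names come from the article, but how the rewriting actually works has not been described in detail, so the rules below are invented for illustration: a sketch of what "rewriting material into vague, non-actionable language" could look like, blurring numbers, softening imperatives, and collapsing trailing detail.

```python
import re

# Toy "anti-distillation" pass. The three mode names match the article;
# every rewriting rule below is a hypothetical stand-in, not Xu's code.
VAGUE = "handle it as appropriate"

def sabotage(text, mode="light"):
    out = text
    if mode in ("light", "medium", "heavy"):
        # Blur concrete numbers so step counts and deadlines vanish.
        out = re.sub(r"\d+", "several", out)
    if mode in ("medium", "heavy"):
        # Soften imperative openers that make steps actionable.
        out = re.sub(r"^(Run|Open|Click|Send)\b", "Consider whether to",
                     out, flags=re.MULTILINE)
    if mode == "heavy":
        # Collapse everything after the first clause into vagueness.
        out = re.sub(r",.*$", f", then {VAGUE}.", out, flags=re.MULTILINE)
    return out

print(sabotage("Run step 3, then email the client within 2 days.",
               mode="heavy"))
```

The point of such a pass is asymmetry: the text stays superficially plausible to a manager skimming it, while an agent trained on it inherits no usable steps.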
Xu’s video about the project drew massive attention online, and her intervention raised legal and ethical questions about ownership of workplace data and the protection of worker identity. While companies may claim chat logs and work files as corporate property, skills that capture tone, judgment, and personality blur the lines of authorship and privacy. Xu argues that public debate and clearer rules are needed so employees can help shape how these tools are used rather than be passively distilled by them.
For now, many companies have not succeeded in wholesale replacement: agents still require supervision, frequent correction, and maintenance, so most roles remain intact. Yet workers report a sense that their contribution is being devalued even as automation experiments proliferate, leaving employees to worry about long-term career value. The path forward will likely depend on policy, workplace norms, and collective choices about which parts of knowledge work we want to keep human.
Source: MIT Tech Review – AI