AI is reshaping how we interact with technology and with each other. In large, collaborative Open Source communities like WordPress, the shift towards AI is exciting, but it brings unique challenges when applied across a global contributor base.
There are extreme stances on both sides of the argument. On one side: “AI everything.” On the other: “AI has no place here.” I am somewhere in the middle. Watching. Listening. Experimenting.
I spend a lot of time thinking about contributor workflows and triage. This post is a reflection on how we might integrate AI into those areas in thoughtful ways, without losing the human qualities that make Open Source great.
AI Is Just the Newest Kind of Automation
Automation is not new. WordPress has embraced automation in many forms over the years: build scripts, linting tools, unit testing, continuous integration workflows, and more. AI is simply another form of automation. It only does what we configure it to do. We haven’t reached any kind of singularity just yet.
But with great power comes great responsibility. Just because we can doesn’t mean we should. Every proposed use of AI should be measured against our core values: putting the users first, backwards compatibility, democratizing publishing, and making WordPress welcoming to contributors of all skill levels.
What Should Always Be Human?
Before we introduce AI into the community, we need to answer one critical question: What must be human? If we first identify those things and declare them sacred, we can then begin to explore how to best surround and support them with AI-related tools.
Related to that is another important question: What do people expect to be human? And where do those expectations align or conflict with perception, or the tasks that actually make us more productive?

From personal experience, I hate being forced to interact with AI or a chatbot when I was expecting (or hoping for) a human. Rarely am I surprised in a good way. More often I’m frustrated and disappointed by limitations and rigid structure. Expectations matter.
The right approach could be to let each contributor configure the level of AI support they want to meet their needs and expectations.
Some common-sense boundaries might include the following:
The first and last interaction on every ticket should always be human
Creating a ticket in a large project takes courage. A human response is a sign of respect and gratitude. Every reporter is owed at least two human responses. That human interaction sets the tone, builds trust, and encourages continued participation. Tactfully sprinkling AI in the middle is acceptable.
AI should not be used for evaluating ideas or making decisions
Verifying a patch, helping adhere to coding standards, providing suggestions for improvement, or flagging potential issues based on past commit activity are all great opportunities to use AI. But using AI to actually make decisions becomes quite risky.
I think it’s safe to say that the overwhelming majority of contributors expect final decisions to be made by humans. However, AI could support the evaluation of ideas by identifying possible consequences, providing historical context, fact-checking, and validating rationale.
There could also be an option to request deeper AI insight using a keyword, like needs-ai-feedback, when necessary.
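To make the opt-in idea concrete, here is a minimal sketch of how a bot might gate deeper AI analysis behind that keyword. The ticket structure and the `generate_ai_feedback()` helper are hypothetical stand-ins, not an existing Trac feature:

```python
# Hypothetical sketch: only run AI analysis when a human has opted in
# via a workflow keyword, so humans drive the conversation by default.
OPT_IN_KEYWORD = "needs-ai-feedback"

def generate_ai_feedback(ticket):
    # Placeholder for a call to whatever model/service the project adopts.
    return f"AI analysis requested for ticket #{ticket['id']} (stub)."

def maybe_run_ai_feedback(ticket):
    """Return AI feedback only when the opt-in keyword is present."""
    if OPT_IN_KEYWORD in ticket.get("keywords", []):
        return generate_ai_feedback(ticket)
    return None  # stay silent otherwise

ticket = {"id": 12345, "keywords": ["has-patch", "needs-ai-feedback"]}
print(maybe_run_ai_feedback(ticket))
```

The important design choice is the default: silence. AI speaks only when invited.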
AI can break down cultural and language barriers
Global collaboration is a strength of Open Source, but it also introduces communication challenges. Language barriers, social norms, and tone differences can unintentionally create friction. AI has the potential to help us bridge those gaps. It can assist with translating intent, softening language, and suggesting more inclusive phrasing. By helping contributors express themselves clearly and interpret others more generously, AI can promote empathy and reduce misunderstandings.
While there are localized support resources for the WordPress project, it’s generally expected that contributions happen in English. But what if AI were used to automatically translate any message, allowing someone to contribute in their native tongue? How many more contributors could we activate with something like this?
When used thoughtfully, AI can enhance human conversations across cultures rather than replace them.
AI as a Contributor Onboarding Tool
AI is great at handling beginner-level tasks. So how can we use AI to level up contributors who are just getting involved?
This is also tricky because Open Source projects need contributors of all levels to ensure a healthy pipeline. We don’t want to replace them since the novice contributors of today are potentially future maintainers and leaders. Instead, we should focus on how to support and empower them.

AI could assist new contributors in a number of ways: explaining the history behind a change, suggesting further reading, answering questions, and setting expectations by describing what usually happens next for similar tickets.
The number one thing I hear from new contributors is that it’s too hard to find something to do. That’s ironic, given how much work there is. Unfortunately, most of it just isn’t easy to find.
With the right prompts, AI could help new contributors by doing things like:
- Asking qualifying questions and using its understanding of contributor expectations and process to suggest appropriate tickets.
- Providing detailed descriptions of what needs to be done and why it matters.
- Recommending logical next steps or related tickets based on interest and skill level.
These types of interactions could help people get started and build confidence more quickly.
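The steps above could be as simple as assembling a structured prompt from a new contributor’s answers to a few qualifying questions. This is illustrative only; the questions, fields, and wording are assumptions, not an existing tool:

```python
# Hypothetical sketch: build an onboarding prompt from a new
# contributor's answers to qualifying questions.
def build_onboarding_prompt(skills, interests, time_available):
    return (
        "You are helping a new WordPress contributor find a first ticket.\n"
        f"Skills: {', '.join(skills)}\n"
        f"Interested components: {', '.join(interests)}\n"
        f"Time available: {time_available}\n"
        "Suggest three open tickets matching these constraints. For each, "
        "explain what needs to be done, why it matters, and a logical "
        "next step after it lands."
    )

prompt = build_onboarding_prompt(["PHP", "CSS"], ["Themes"], "2 hours/week")
print(prompt)
```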
Preserving the Human Voice
Another fun idea: What if we trained AI bots in the style of long-time contributors? That kind of tool could preserve institutional knowledge while making expert guidance more accessible.
Many have joked before: “If only we could clone Sergey.” Maybe AI makes the kind of mentorship our most prolific contributors provide more scalable. With well-documented public contributions, we could build agents trained on a contributor’s feedback patterns (with clear disclaimers that they are approximations, not endorsements, of course).
It could also be component-specific. Perhaps there’s a Jonathan bot that responds to feedback requests on Build/Test Tool tickets. Or perhaps a Jorbin bot that offers a clever pun before sharing deep historical context. These bots wouldn’t replace the people behind them; they could help scale their impact and preserve bandwidth. Of course, we’d need community agreement, ethical safeguards, and consent before simulating the voice of any individual contributor.
Strengthening AI Tools Through Documentation
Cross-team collaboration and documentation could benefit greatly from AI. But AI is only as good as the data it’s trained on. That makes clear, consistent documentation essential, not just helpful.
For AI to be successfully implemented, our processes, workflows, and expectations all need to be clearly documented. Now is the time to take stock of our processes, standards, and documentation gaps. This includes teams like Performance, Accessibility, Internationalization, Design, and AI itself.
There are already areas where automation could be introduced thoughtfully. Trac, for example, uses a set of action and status keywords that are excellent candidates for AI-assisted workflows. A keyword like fixed-major could automatically trigger a pull request backport. Similarly, AI could suggest keywords, help verify whether a keyword is appropriate, or identify when something has been overlooked. But the documentation around these keywords is basic at best.
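A keyword-driven workflow like this might look something like the sketch below. Trac ships with no such hook today, and `open_backport_pr()` and the nudge logic are illustrative stand-ins under assumed keyword semantics:

```python
# Hypothetical sketch: map Trac workflow keywords to follow-up actions
# a bot might take when a ticket's keywords change.
def open_backport_pr(ticket_id, branch):
    # Stand-in for opening a real pull request against the release branch.
    return f"opened backport PR for #{ticket_id} against {branch}"

def handle_keyword_change(ticket_id, keywords, target_branch="6.8"):
    """React to keyword changes the way an automation bot might."""
    actions = []
    if "fixed-major" in keywords:
        # fixed-major: the fix is on trunk and needs backporting.
        actions.append(open_backport_pr(ticket_id, target_branch))
    if "has-patch" in keywords and "needs-testing" not in keywords:
        # Nudge: a patch without a testing flag may have been overlooked.
        actions.append(f"suggest adding needs-testing to #{ticket_id}")
    return actions
```

Notably, each action here is a suggestion or a mechanical task; none of it makes a decision a human hasn’t already signaled with a keyword.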
Building Educational Content
Another promising area is summarizing complex or repetitive concepts. AI excels at distilling large volumes of discussion and activity into summaries, tutorials, or templates. This could be especially useful for the Learn team, where AI might monitor contributor activity and auto-draft course outlines or video scripts for tasks that are frequently repeated.
Creating this content can take a considerable amount of time, and contributors who perform these tasks often don’t have the bandwidth to document them as well.
Closing Thoughts
AI holds enormous potential to make Open Source more welcoming, efficient, and inclusive. But that’s only possible if we use it with intention. We should evaluate AI the same way we evaluate any other major change to the project: carefully, collaboratively, and with the long-term health of the community in mind.
How do we avoid creating more work for our future selves?
Let’s not just ask what AI can do. Let’s ask what it should do, what it should never replace, and how we can answer those questions together as humans.
Featured image credit: CC0 licensed photo by Jennifer Bourn from the WordPress Photo Directory.