Open Source Code Review Perceptions and Expectations
A mixed-methods study of the expectations, perceptions, and challenges in the current code review process, with a focus on mitigating bias in OSS.
This project investigates whether contributors and maintainers in open-source projects truly see eye to eye during the code review process. While both roles value correctness and quality, our study found that maintainers place significantly more emphasis on aligning contributions with the broader project goals. In contrast, contributors often emphasize the novelty of their ideas, without always explaining how those ideas fit into the project’s long-term vision.
We identified several common sources of friction, including delays caused by maintainers’ limited availability and stylistic differences that are sometimes misread as bias. Notably, familiarity bias, the tendency for maintainers to favor contributors they already know, remains a barrier that discourages newcomers and limits diversity in OSS communities.
Our findings emphasize the need for improved documentation, clearer articulation of project goals, and more supportive onboarding practices to foster inclusivity and effective collaboration.
Key Results:
- Maintainers emphasize alignment with project goals far more than contributors do.
- Contributors often overemphasize novelty without articulating its relevance.
- Limited reviewer availability and unclear expectations lead to frustration and disengagement.
- Familiarity bias is a persistent challenge for new contributors.
- Better tools and documentation can help bridge these gaps.
Next Steps:
Our follow-up project will focus on building tools that:
- Help maintainers detect potential bias during the review process (see the illustrative sketch after this list).
- Automate routine checks to free maintainers for more meaningful feedback.
- Support new contributors in understanding project expectations more efficiently.
- Help developers better navigate and understand unfamiliar codebases.
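To make the first item concrete, here is a minimal, purely illustrative sketch of the kind of heuristic such a tool might start from: comparing how first-time and established contributors fare on two simple review metrics. The `Review` fields, the metrics, and the sample data are assumptions chosen for illustration; they are not part of the study and not a tool we have built.

```python
# Illustrative sketch only: a hypothetical heuristic for surfacing possible
# familiarity bias in review outcomes. Field names and metrics are assumptions.
from dataclasses import dataclass
from statistics import median

@dataclass
class Review:
    author_prior_merged_prs: int    # merged PRs the author had before this one
    hours_to_first_response: float  # time until the first maintainer response
    merged: bool                    # whether the PR was ultimately merged

def familiarity_gap(reviews: list[Review]) -> dict:
    """Compare first-time contributors with established ones on two simple metrics."""
    newcomers = [r for r in reviews if r.author_prior_merged_prs == 0]
    regulars = [r for r in reviews if r.author_prior_merged_prs > 0]
    if not newcomers or not regulars:
        return {}
    return {
        # Positive values mean newcomers wait longer / are merged less often.
        "median_response_gap_hours": (
            median(r.hours_to_first_response for r in newcomers)
            - median(r.hours_to_first_response for r in regulars)
        ),
        "merge_rate_gap": (
            sum(r.merged for r in regulars) / len(regulars)
            - sum(r.merged for r in newcomers) / len(newcomers)
        ),
    }

# Example with made-up numbers:
sample = [
    Review(0, 72.0, False),
    Review(0, 48.0, True),
    Review(5, 6.0, True),
    Review(12, 4.0, True),
]
print(familiarity_gap(sample))
# {'median_response_gap_hours': 55.0, 'merge_rate_gap': 0.5}
```

A large positive gap on either metric would only prompt a closer, qualitative look at the affected reviews; on its own it is not evidence of bias.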
Output:
A full research paper documenting our findings has been accepted at the 29th International Conference on Evaluation and Assessment in Software Engineering (EASE), June 2025, in Istanbul, Turkey.
You can read the preprint here.
Prefer a shorter summary? Check out the blog post.
If you’re interested in collaborating on these follow-up projects or have insights to share, feel free to get in touch.
Cover Image Credit: ChatGPT