Accelerate open source development with AI

In many open source communities, there is a fair amount of skepticism about using generative AI (gen AI) tools for contribution and development. There are valid reasons for concern. Our goal in this article, and in Red Hat’s own practice, is to directly address, not dismiss, those concerns. Our answers are not just advice for others – they guide our own engineers, most of whom are also open source contributors.

We’ll share with you the guidelines we’ve created for Red Hat engineers, based on our use of open source principles in practice. But first we want to put the current wave of new tools into context.

A little historical context

For the past 4 decades, we have regularly implemented new and improved tools and processes for software development. You name it: compilers, version control systems, IDEs, virtual machines (both kinds), cloud instances, agile development, containers, configuration management, and automated testing. Each set of tools was once new, and many of them sparked heated arguments about authorship, quality, and legitimacy. There was a time when both compiler flags and auto-completion in IDEs were hot-button issues.

AI-based development tools are no different, nor should they be. Over time, we will find that AI tools improve our development lives tremendously in some areas and not at all in others, and adoption will continue accordingly. We use tools to solve problems in open source, and the new tools will help us solve old problems while discovering new ones.

If there is a core problem in the world of open source, it can be expressed as “too many projects, not enough maintainers.” Today’s project leader must do more than ever before: faster releases, faster security updates, secure software supply chains, CI/CD, regulatory compliance and large-scale contributor management. These expectations are not sustainable without better tools that help maintainers do more with less effort. Through principled use of AI, Red Hat believes we can build the next generation of developer tools to meet this challenge.

Principles of AI adoption in open source

In order to bring these new tools into open source, we must adhere to the open source ethos that Red Hat and our industry have built. Accordingly, Red Hat has developed guidelines for our staff on AI-assisted open source contribution, based on 3 principles:

  1. Innovate responsibly
  2. Be transparent
  3. Respect the community

Innovate responsibly

Regardless of whether they use an AI tool, an IDE, the output of a pair programming session, or any other method to produce code and documents, each contributor is fully responsible for what they contribute. The individual contributor is the person in the loop who vouches for the quality, security and compliance of the contribution. Contributors should understand the AI-assisted code just as if they had written it entirely themselves. They should also be able to explain what it does, how it interacts with other code in the project, and why the change is needed. We don’t see AI as a replacement for developers. The goal is to automate tedious tasks to free them up for complex, creative problem solving. We believe in a future where developers are empowered, not automated.

Our principle of human accountability reframes AI as a powerful assistant and tutor, not a substitute. A newbie can use it to understand complicated boilerplate and learn best practices, allowing them to focus on the core logic of their contribution while making fewer mistakes. Senior contributors can use new tools to perform more efficient and thorough review and testing. The responsibility rests with the people – senior members should mentor the contributor, not just the code, and junior members should be responsible for what they submit and show a willingness to learn.

Be transparent

Openness breeds trust. Marking significant AI-assisted contributions, such as with an “assisted-by” line in the commit message, helps communities develop best practices together and allows for auditing if problems arise. It also enables projects to learn over time which AI tools work well for their development and which don’t.
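As a purely illustrative sketch (the exact trailer name and wording are set by each project’s own policy, not by us), such a disclosure can be as simple as an extra trailer line in the commit message:

    Fix race condition in job scheduler shutdown

    Drain in-flight jobs before stopping the worker pool so a late
    completion can no longer touch a freed queue.

    Assisted-by: <name of the AI tool used>
    Signed-off-by: Jane Developer <jane@example.com>

Because the disclosure travels with the commit itself, reviewers see it during review, and anyone auditing the history later can find it with ordinary tooling such as git log.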

Marking contributions also helps reviewers evaluate new submissions appropriately. Low quality AI-generated contributions are a serious problem for projects. Red Hat will continue to work on practices and tools that we will share with the entire ecosystem as we learn how to better address these challenges.

Respect the community

Effective collaboration in open source relies on respecting each project’s established contribution policies and social norms. Our first responsibility is to understand and engage with a community’s chosen process for adopting new technologies such as AI – or to help start a discussion about creating such a process where one does not exist. In other words, we aim to contribute to the conversation rather than dictate it.

We know that some projects will welcome new tools, some will ban them, and some will adopt specific policies about tagging and acceptable uses. Where we can, Red Hat will help communities develop and adopt policies that preserve their values, health and quality standards. The most important consideration is that projects can use AI tools in a way that works for them.

Innovation in action at Red Hat

Our use of AI-powered automation for maintaining Red Hat Enterprise Linux (RHEL) packages is a real example of responsible innovation. As detailed in this blog post by Laura Barcziová, building a reliable production system required a deep focus on accountability. The engineering team built in critical safety measures, such as dry-run modes and detailed tracking, so that humans can always understand and audit the AI’s decisions. This focus on reliability and human oversight is the key to responsible innovation.

The Fedora Project’s AI-assisted contribution policy process is a powerful example of transparency and respect for community governance. Developed through extensive public debate, it requires accountability and disclosure, serving as a model for how open source projects can create their own clear, pragmatic guidelines for AI.

Open source is about principled innovation

Red Hat believes that AI presents tremendous opportunities for open source projects and contributors. We are committed to developing our ecosystem in a way that preserves key open source principles. This commitment is rooted in a simple truth – our entire product portfolio is built on the innovation that occurs in upstream open source projects. The health, vitality and productivity of these contributing communities are not only a priority, but the foundation of our success.

Our product strategy reflects this commitment – from delivering an enterprise-grade AI platform with Red Hat AI, to embedding AI capabilities across our entire portfolio, to sharing the process innovations and discoveries we use to improve quality and security.

It is a collaborative process, and we approach it with transparency. We are tackling long-standing problems in open source that are bigger than Red Hat. We invite you to join us on this journey as we work with upstream communities to build the tools, define the standards, and shape the future of software development together.



Eva Grace
