I am the maintainer of rsyslog. This write-up is a field report from using AI coding tools in a mature C codebase of about 200k LOC.
The interesting part for us was not "can it generate code", but what had to change in the repo and workflow so the output became reliably useful: clearer contracts, better documentation near the code, stronger CI expectations, formatting consistency, and tighter task decomposition.
Happy to answer concrete questions, including where it worked, where it did not, and what still requires careful human review.
We've been tackling a massive documentation restructuring for rsyslog and are sharing the workflow we've designed. Our approach is "responsible AI in OSS," using a three-stage, human-in-the-loop system. We codify our rules in a prompt, use an AI to generate the changes, and then validate them with another AI and CI.
We'd love to hear your opinions and concerns about this process. How do we ensure human oversight remains effective? What are the biggest risks you see in a workflow like this?
We at rsyslog use it as part of our CI pipeline. It's a pretty cool tool, done by awesome folks. A big plus is the responsiveness to questions and when something needs attention. When we adopted LGTM (at that time in beta for C), we already had a pretty clean code base, because we use other static analysis tools. LGTM nevertheless found some new and interesting things, including a real vulnerability[1]. We don't use custom QL right now (time is so short), but opt in to some extra canned queries. For example, we prevent commented-out code via them. LGTM is a check REQUIRED to pass CI [2, sample PR].
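To illustrate the kind of check such a query performs (this is a simplified heuristic sketch, not LGTM's actual query, and the helper name and patterns are my own), a commented-out-code detector basically flags comment lines whose text looks like code rather than prose:

```python
import re

# Heuristic: comment text that ends in ';', '{', '}' or a function
# call is probably commented-out code, not an explanatory comment.
CODE_LIKE = re.compile(r'(;|\{|\}|\w+\s*\([^)]*\))\s*$')

def find_commented_out_code(source: str):
    """Return (line_number, comment) pairs that look like disabled code."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        # For this sketch, only inspect single-line comments.
        if stripped.startswith('//') or (
            stripped.startswith('/*') and stripped.endswith('*/')
        ):
            body = stripped.lstrip('/').lstrip('*').rstrip('*/').strip()
            if CODE_LIKE.search(body):
                hits.append((lineno, stripped))
    return hits

sample = """\
int main(void) {
    // initialize the counter
    int n = 0;
    // n = compute_start(n);
    /* dbgprintf("state=%d\\n", n); */
    return n;
}
"""
for lineno, text in find_commented_out_code(sample):
    print(lineno, text)   # flags lines 4 and 5, not the prose comment
```

A real query is considerably smarter (it parses the comment text with the language's own grammar instead of regex heuristics), but wiring even a simple version like this into CI as a required check is what keeps dead code from accumulating.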
We are really happy with both the product and the team, and I can recommend diving into it. Nevertheless, don't rely on a single static analysis tool, no matter how good it is. If you use C/C++, you should at least also run clang's static analyzer.