Linus Torvalds and the Linux kernel maintainers have officially established the first set of rules for using Artificial Intelligence in kernel development. Rather than banning the technology, the new policy treats AI as a productivity tool while emphasizing transparency and legal accountability.
Mandatory Tags and Transparency
The core change involves how patches are tagged. Automated agents may no longer add the standard Signed-off-by: tag, which carries legal weight under the kernel's Developer Certificate of Origin. Instead, any code generated or refined by tools such as ChatGPT or GitHub Copilot must carry the new Assisted-by: tag, clearly identifying the tool used.
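In practice, a patch touched by an AI tool would carry the new trailer alongside the human contributor's sign-off. The following commit-message sketch is purely illustrative (the name, email, subject line, and tool identifier are hypothetical; the exact trailer format is defined by the kernel's own documentation):

```
subsystem: one-line description of the change

Longer explanation of what the patch does and why.

Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: GitHub Copilot
```

Note that the human developer still provides the Signed-off-by: line; the Assisted-by: trailer only discloses which tool participated.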
Human Responsibility
Under the new guidelines, legal and technical responsibility remains entirely with the human developer. Contributors must:
- Thoroughly review all AI-generated code.
- Ensure full compliance with open-source licenses.
- Take accountability for any bugs or security vulnerabilities.
This decision follows recent controversies regarding undisclosed AI patches and aims to maintain honesty and rigorous standards within the Linux community.