Note: Contributions written entirely by AI are a different case that we should not allow; that could be the downfall of open source. What follows concerns AI-assisted code that is primarily human-written and declared transparently, which I argue is acceptable and the best approach to a problem that would not exist if not for the AI bubble. So, here's an essay on the latter.
People here have reacted differently to the use of AI assistance in primarily human-written contributions (as opposed to code that is entirely or primarily AI-generated):
"Ugh. Here's hoping this infection can be contained and doesn't spread."
Another reaction:
"How kind of Fedora to take Ubuntu's spot as the distro with the least amount of community trust and good will."
One of the solutions proposed was transparency: declaring the use of AI, as in the policy proposed for contributions to Fedora. Nonetheless, this still seemed unacceptable to the majority of people here; the consensus was to deny the use of AI altogether.
The problem this raises is how one can determine whether a contributor's submitted code, whether from a newcomer or a veteran, was generated or assisted by AI. AI detectors are too unreliable; AI-generated and human-written code look much the same for common functions and scripts; and policing this is simply not possible and would only create more work for the maintainers.
Suppose a contributor submits their human-written code. There is a good chance part of it was copy-pasted from GitHub, or from some deep corner of the internet, and the copied code itself may have been generated or assisted by AI. It is with great disdain that we must accept that the internet is already overwhelmed with AI and will soon be overflowing with AI-generated results; I do not know whether this will turn for the better sooner or later. This is a simple example of how the situation is unavoidable.
Furthermore, if the use of AI were prohibited, some would still use it and submit the result unbeknownst to the maintainer. Unlike a declared submission, such code might then be reviewed with less rigor, since the maintainer would treat it as human-written rather than AI-assisted or AI-generated.
It is apparent that prohibiting the submission of AI-generated or AI-assisted code is not enforceable, since such code cannot be reliably detected. The only feasible, time-efficient, and resource-conscious solution, then, is to allow it with transparency, so that it can be reviewed rigorously and treated with appropriate caution to maintain the quality of the submitted code.
Contributions written entirely by AI, however, are a different case and should not be allowed; that could be the downfall of open source. Since we are talking about AI-assisted code that is primarily written by a human and declared transparently, allowing it is acceptable and the best approach to a problem that would not exist if not for the AI bubble.