Conversation
Files changed: 4 (4 modified, 0 added, 0 deleted)
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Co-authored-by: Donghee Na <donghee.na@python.org>
Co-authored-by: devdanzin <74280297+devdanzin@users.noreply.github.com>
Force-pushed from d2de5dc to 726ec3c
Co-authored-by: Jacob Coffee <jacob@z7x.org>
…delines. Add the Guidelines to the contributing table.
savannahostrowski left a comment
Thank you for doing this, @Mariatta!
My comments are mainly about extending the guidance to cover issues as well. While AI tooling can be great at surfacing real bugs and security issues, I think it's still important that those filing issues understand the problem themselves so we can keep discussions focused and productive.
> Considerations for success
> ==========================
>
> Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR.

Suggested change:
- Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR.
+ Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR or filing it as an issue.
> Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR.
> We expect PR authors to be able to explain their proposed changes in their own words.

Suggested change:
- We expect PR authors to be able to explain their proposed changes in their own words.
+ We expect PR authors and those filing issues to be able to explain their proposed changes in their own words.
> Disclosure of the use of AI tools in the PR description is appreciated, while not required. Be prepared to explain how
> the tool was used and what changes it made.

Suggested change:
- Disclosure of the use of AI tools in the PR description is appreciated, while not required. Be prepared to explain how
- the tool was used and what changes it made.
+ Disclosure of the use of AI tools in the PR description is appreciated, while not required. Be prepared to explain how the tool was used and what changes it made.
Looks like some funky line breaking?
I had it break after 120 characters.
But now that I've read the devguide's reST markup doc, it seems we're supposed to break at 80 characters.
https://devguide.python.org/documentation/markup/#use-of-whitespace
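The 80-column rule from the linked markup guide is easy to check mechanically before committing. A minimal sketch (the function name and the URL exemption are illustrative, not part of any devguide tooling; the devguide itself exempts long links and literal blocks from wrapping):

```python
# Hypothetical helper: flag lines in a reStructuredText source that exceed
# the devguide's 80-character wrap limit. Lines containing a URL are skipped,
# since long links cannot be broken in reST.

def overlong_lines(text: str, limit: int = 80) -> list[tuple[int, int]]:
    """Return (line_number, length) pairs for lines longer than `limit`."""
    results = []
    for number, line in enumerate(text.splitlines(), start=1):
        if "://" in line:  # crude exemption for lines carrying URLs
            continue
        if len(line) > limit:
            results.append((number, len(line)))
    return results
```

Running this over a file before pushing would catch the 120-character lines the comment above refers to.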
> the responsibility of the contributor. We value good code, concise accurate documentation, and avoiding unneeded code
> churn. Discretion, good judgment, and critical thinking are the foundation of all good contributions, regardless of the
> tools used in their creation.
> Generative AI tools are evolving rapidly, and their work can be helpful. As with using any tool, the resulting
It wasn't done before in this file for some reason, but could we please wrap lines?
I was going to say the opposite :)
Rewrapping makes it hard to review what has changed. Can we please keep a minimal diff for now, and only rewrap just before merge?
Just before merge sounds good to me :-)
> Sometimes AI assisted tools make failing unit tests pass by altering or bypassing the tests rather than addressing the
> underlying problem in the code. Such changes do not represent a real fix and are not acceptable.
I'd like to see this worded in more general terms rather than using such a specific example (older models did this a lot more than 2026's). What this is really getting at is that we want people to be cautious about reward hacking rather than addressing the actual underlying problem in a backwards compatible manner.
maybe something along the lines of:
"Some models have had a tendency to reward hack, making incorrect changes that satisfy their limited-context view of the problem at hand rather than focusing on what is correct, including altering or bypassing existing tests. Such changes do not represent a real fix and are not acceptable."
> - Consider whether the change is necessary
> - Make minimal, focused changes
> - Follow existing coding style and patterns
> - Write tests that exercise the change
Should we add another bullet point along the lines of:
" - Keep backwards compatibility with prior releases in mind. Existing tests may be ensuring specific API behaviors are maintained."
perhaps a follow-up paragraph after this list:
"Pay close attention to your AI's testing behavior. Have conversations with your AI model about the appropriateness of changes given these principles before you propose them."