• gaterush@lemmy.world
    10 months ago

    I generally agree and like this strategy, but to add to the other comment about catching reimplemented code: there's some code-quality reviewing that simply cannot be done by automated tooling right now.

    Some scenarios come to mind:

    • code is written in a brittle fashion, especially around external data, where it's difficult to unit test every type of input; a human reviewer can often catch improper assumptions about the data baked into the code
    • code reimplements more battle-tested functionality, or uses a library that is no longer maintained or possibly unreliable
    • code that test coverage unintentionally misses because it sits outside the tested path
    • poor abstractions, shallow interfaces
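
    As a hypothetical sketch of the first bullet (names and payload shape are invented for illustration): a parser that assumes external data is well-formed will pass a happy-path unit test, and only a reviewer is likely to flag the unstated assumptions.

    ```python
    import json


    def total_price(payload: str) -> float:
        # Brittle: assumes the payload is valid JSON, that "items" exists,
        # and that every item has a numeric "price". A unit test fed
        # well-formed input passes, but real external data (missing keys,
        # nulls, strings) raises at runtime.
        data = json.loads(payload)
        return sum(item["price"] for item in data["items"])


    def total_price_defensive(payload: str) -> float:
        # What a reviewer might ask for: validate each assumption about
        # the external data instead of trusting it.
        try:
            data = json.loads(payload)
        except json.JSONDecodeError:
            return 0.0
        items = data.get("items") or []
        total = 0.0
        for item in items:
            price = item.get("price") if isinstance(item, dict) else None
            if isinstance(price, (int, float)):
                total += price
        return total
    ```

    Automated tooling sees both versions as fully covered by their tests; only context about where the payload comes from tells you the first one is a production incident waiting to happen.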

    It’s hard to catch these without understanding context, so I agree that code review meetings and establishing domain owners are helpful. But I think you still need PR reviews to document these potential problems.
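
    And a toy example of the second bullet (the bug is invented for illustration): a hand-rolled reimplementation can pass its own tests while silently disagreeing with the battle-tested standard library.

    ```python
    import statistics


    def median(values):
        # Hand-rolled reimplementation: subtly wrong for even-length input,
        # because it picks one middle element instead of averaging the two.
        ordered = sorted(values)
        return ordered[len(ordered) // 2]


    data = [1, 2, 3, 4]
    print(median(data))             # 3
    print(statistics.median(data))  # 2.5
    ```

    A test suite written against odd-length inputs never notices; a reviewer who recognizes "this already exists in `statistics`" does.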