• 8 Posts
  • 1.33K Comments
Joined 3 years ago
Cake day: July 7, 2023

  • It seems your team isn’t ditching AI anytime soon, but you can still use it to tame technical debt. In fact, given the higher rate of code generation, I’d consider writing the best possible code when using AI a requirement.

    Look into “skills” (as in Anthropic’s standard) and how to use them in Cursor. Use custom prompts to your advantage - the fact that you’re still getting code with lots of comments, as if it were a tutorial, tells me this can be improved. Push for rules to be applied at the project level, so your colleagues’ agents also follow them.
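    As a concrete sketch of project-level rules: Cursor reads instructions from a `.cursorrules` file at the repo root (newer versions also support a `.cursor/rules` directory). The rules below are hypothetical examples, not a recommended canon - write ones that fit your codebase:

```
# .cursorrules — project-wide instructions that every agent in this repo picks up

- Do not add tutorial-style comments; comment only non-obvious decisions.
- Match the existing code style and naming of the file being edited.
- Prefer small, focused diffs; never reformat unrelated code.
- When fixing a bug, add or update a regression test in the same change.
```

    Because the file is committed, every teammate’s agent follows the same rules without anyone having to repeat them in prompts.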

    Make heavy use of AI to write regression tests covering current application behavior: they’ll warn you about future breakage, and they’re a great tool for overcoming the limits of the AI context window (e.g. most of the time your agent won’t know you fixed a bug last week, and the change it’s suggesting now breaks it again; the test will protect you there). Occasionally use AI to refactor a small function that’s somewhat related to your changes, if that improves the codebase.
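    The cheapest form of this is a golden-file test: record the program’s current output once, then fail loudly if any future change (human or AI) alters it. A minimal sketch, where `my_tool` is a hypothetical stand-in for the program under test:

```shell
#!/bin/sh
# Golden-file regression test: capture today's behavior once, then diff
# future runs against it. Replace my_tool with the real program.

my_tool() {                 # hypothetical stand-in for the program under test
  echo "result: 7"
}

golden="expected.txt"

if [ ! -f "$golden" ]; then
  my_tool > "$golden"       # first run records current behavior as the baseline
  echo "baseline recorded"
fi

if my_tool | diff -u "$golden" -; then
  echo "PASS: behavior unchanged"
else
  echo "FAIL: output drifted from recorded behavior" >&2
  exit 1
fi
```

    Commit `expected.txt` alongside the test so the recorded behavior travels with the repo, and review the diff whenever it changes.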

    Stepping away from AI, try introducing pre-commit hooks for code quality checks. See if the tools of your choice support a “baseline”, so you don’t need to fix thousands of pre-existing warnings when introducing the hook.
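    The baseline idea itself is simple enough to sketch even when a tool doesn’t support it natively: record the current warning count, then only fail the hook when the count grows. `run_linter` below is a placeholder for your real tool (shellcheck, eslint, ruff, …), and `.lint-baseline` is an assumed file name:

```shell
#!/bin/sh
# Sketch of a baseline check for a pre-commit hook (.git/hooks/pre-commit):
# old warnings are tolerated, new ones block the commit.

BASELINE_FILE=".lint-baseline"

run_linter() {
  # Placeholder: a real hook would run the linter and count its warnings,
  # e.g.  shellcheck -f gcc ./*.sh | wc -l
  echo 42
}

current=$(run_linter)
# If no baseline is recorded yet, accept the current count as the baseline.
baseline=$(cat "$BASELINE_FILE" 2>/dev/null || echo "$current")

if [ "$current" -gt "$baseline" ]; then
  echo "lint warnings rose from $baseline to $current; fix the new ones" >&2
  exit 1
else
  echo "ok: $current warnings (baseline $baseline)"
fi
```

    Commit the baseline file, and ratchet it down as the team pays off debt.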

    AI can write code that’s good enough, but it needs a little push to minimize tech debt and follow best practices despite the rest of the codebase not being at an ideal quality.




  • Often “silent” fails are a good thing

    Silent fails have cost me many hours trying to figure out what the fuck was happening with a simple script. I’ve been using `-e` in nearly all the bash I’ve written for years - with the exception of sourced files - and wouldn’t go back.

    If an unhandled error occurs, I want my program to crash so I can evaluate whether to ignore it or actually handle it.
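    A minimal illustration of the distinction (the sourced-file exception exists because `set -e` in a sourced script can take the caller’s shell down with it):

```shell
#!/bin/sh
# With set -e, the script aborts at the first *unhandled* non-zero status
# instead of silently plowing on past it.

set -e

status="start"

# An anticipated failure can still be tolerated explicitly:
false || status="failure handled"

# An unhandled failure would abort right here; uncomment to see:
# cat /no/such/file

status="finished"
echo "$status"    # prints "finished"
```

    The `||` (or an `if`) marks a failure as deliberate, which is exactly the “evaluate whether I need to ignore it” step made explicit in code.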




  • you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code.

    Doubt it. The incentives don’t align: they benefit from open source far more than they are threatened by it. Even the “embrace, extend, extinguish” idea comes from a different era, and it’s likely less profitable than the vendor lock-in and other modern practices actually in place today. And the copyright argument could easily backfire if they just threw it into a case, given all their questionable AI training.


  • Even if you’re into AI coding, I never understood the hype around Cursor. In the beginning, they were just 3 months ahead of the alternatives. Today you can’t even say that anymore, yet they’re still “worth” billions. You can get similar prediction quality from other editors if you know how to use them, paying a fraction of the price.

    Cursor also chugs tokens like a 1978 Lincoln Continental chugs gas - that’s how they get marginally better results - so bringing your own API key isn’t even a viable option. The first time I tried it, I asked for a simple one-line edit to a markdown file, and it sent out 20k tokens before I could say “AGI is 6 months away” - and it still got the change wrong.