

ugh, no way. It might do a fine job with typesetting, but the user experience is utterly awful and that’s very unlikely to change because of design choices over 40+ years. If you don’t think so, give typst a real try.


It seems your team is not ditching AI anytime soon, but you can still use it to tame technical debt. In fact, given the higher rate of code generation, I’d consider writing the best possible code when using AI a requirement.
Look into “skills” (as in Anthropic’s standard) and how to use them in Cursor. Use custom prompts to your advantage: the fact that you’re still getting code with lots of comments, as if it were a tutorial, tells me this can be improved. Push for rules to be applied at the project level, so your colleagues’ agents also follow them.
Make heavy use of AI to write regression tests covering current application behavior: they’ll serve as regression warnings for future changes, and they’re a great tool for overcoming the limits of the AI context window (e.g. most of the time your agent won’t know you fixed a bug last week, and the change it’s suggesting now breaks it again; the test will protect you there). Occasionally use AI to refactor a small function that’s somewhat related to your changes, if that improves the codebase.
Stepping away from AI, try introducing pre-commit hooks for code quality checks. See if the tools of your choice support a “baseline”, so you don’t need to fix thousands of warnings when introducing the hook.
AI can write code that’s good enough, but it needs a little push to minimize tech debt and follow best practices, even when the rest of the codebase isn’t at an ideal quality.
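As a rough sketch of that pre-commit idea (the tool names here, like ruff, and the Python file filter are just illustrative assumptions; swap in whatever your stack uses):

```shell
#!/usr/bin/env bash
# .git/hooks/pre-commit -- a minimal quality gate (remember: chmod +x).
# Linting only the files staged for this commit acts as a crude
# "baseline": pre-existing warnings in untouched files won't block you.
set -e

# Collect staged files (Added/Copied/Modified) matching your language;
# '\.py$' is an example pattern. Assumes no spaces in filenames.
staged=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$' || true)
[ -z "$staged" ] && exit 0  # nothing relevant staged, let the commit through

# ruff is an example linter; substitute your project's tool of choice.
ruff check $staged
```

Dedicated tools (e.g. the pre-commit framework) can manage this for you, but a plain hook script like this has zero dependencies.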
I find it really hard to replace maps, because half the time I use it, it’s because of photos, reviews, or traffic information that’s just not available anywhere else.


Maybe there’s a technical difference, but at least in Brazil, publicidade and propaganda are widely used as synonyms.
It is uncommon to call propaganda (in the political sense) publicidade, so maybe in popular usage this makes publicidade a kind of propaganda, and not the other way around.
Often “silent” fails are a good thing
Silent fails have caused me to waste many hours trying to figure out what the fuck was happening with a simple script. I’ve been using -e in nearly all the bash code I’ve written for years - with the exception of sourced scripts - and wouldn’t go back.
If an unhandled error happened, I want my program to crash so I can evaluate whether I need to ignore it, or actually handle it.
Exactly: if an unhandled error happens, I want my program to terminate. -e is a better default.
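A minimal illustration of the difference, running the same failing sequence with and without -e via bash -c:

```shell
#!/usr/bin/env bash
# Without -e: the failing command is silently ignored and execution continues.
bash -c 'false; echo "kept going"'        # prints: kept going

# With -e: the script aborts at the first failing command,
# so the echo after it never runs.
bash -e -c 'false; echo "kept going"' \
  || echo "aborted with status $?"        # prints: aborted with status 1
```

Note that -e has exceptions (e.g. commands tested in if conditions or on the left of && / || don’t trigger an exit), which is exactly what makes it usable as a default.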


cool, does that mean it runs in the same event loop as the ASGI server?


idk about other languages, but in Portuguese it’s literally the same word
in undergrad we’d joke that, if you make a paper ball and hit the basket, it compiles.
it just means they’ll be a passive node, but still able to seed if they connect to the other node. It’s the setup I have, and I manage to keep an overall ratio >1, especially if the torrent is popular.


getting a stack overflow trying to expand GNU


if you use this often, you can add a keyword search (Firefox-based browsers) or a custom site search (Chromium-based) with this URL:
https://icon-sets.iconify.design/?query=%s
and a shortcut, e.g. icon
so every time you type e.g. icon person in a new tab, it’ll run the search for you
you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code.
Doubt it. The incentives don’t align. They benefit from open source much more than they are threatened by it. Even the “embrace, extend, extinguish” idea comes from a different era, and it’s likely less profitable than the vendor lock-in and other modern practices that are actually in place today. Even the copyright argument could easily backfire if they just threw it into a case, because of all this questionable AI training.


Even if you’re into AI coding, I never understood the hype around Cursor. In the beginning, they were just 3 months ahead of the alternatives. Today you can’t even say that anymore, and they’re still “worth” billions. You can get similar prediction quality from other editors if you know how to use them, paying a fraction of the price.
Cursor also chugs tokens like a 1978 Lincoln Continental chugs gas; that’s how they get marginally better results, so bringing your own API key is not even a viable option. The first time I tried it, I asked for a simple one-line edit on a markdown file, and it sent out 20k tokens before I could say “AGI is 6 months away” - and it still got the change wrong.
I don’t quite get what this is supposed to do. Is it basically software that lets Jellyfin/Plex users request media without needing a Radarr/Sonarr account?