• eclipse@lemmy.world · 4 months ago

      I actually disagree. I only know a little of Crowdstrike internals but they’re a company that is trying to do the whole DevOps/agile bullshit the right way. Unfortunately they’ve undermined the practice for the rest of us working for dinosaurs trying to catch up.

      Crowdstrike’s problem wasn’t a quality escape; that’ll always happen eventually. Their problem was with their rollout processes.

      There shouldn’t have been a circumstance where the same code got delivered worldwide in the course of a day. If you were sane you’d canary it at first and exponentially increase the rollout from there. Any initial error should have meant a halt to further deployments.
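      A minimal sketch of what that could look like — the wave sizes, error budget, and health check are illustrative assumptions, not Crowdstrike’s actual process:

      ```python
      # Staged (canary) rollout with a halt-on-error gate: each wave grows
      # exponentially, and any wave that trips the error budget stops the rollout.
      WAVES = [0.01, 0.05, 0.25, 1.00]   # fraction of the fleet per wave
      ERROR_BUDGET = 0.001               # abort if >0.1% of updated hosts fail

      def rollout(fleet, deploy, health_check):
          done = 0
          for fraction in WAVES:
              target = int(len(fleet) * fraction)
              for host in fleet[done:target]:
                  deploy(host)
              done = target
              failures = sum(1 for host in fleet[:done] if not health_check(host))
              if failures / max(done, 1) > ERROR_BUDGET:
                  return False   # halt all further deployments, page a human
          return True
      ```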

      Canary isn’t the only way to solve it, by the way. Just an easy fix in this case.

      Unfortunately what is likely to happen is that they’ll find the poor engineer that made the commit that led to this and fire them as a scapegoat, instead of inspecting the culture and processes that allowed it to happen and fixing those.

      People fuck up and make mistakes. If you don’t expect that in your business you’re doing it wrong. This is not to say you shouldn’t trust people; if they work at your company you should assume they are competent and have good intent. The guard rails are there to prevent mistakes, not bad/incompetent actors. It just so happens they often catch the latter.

  • Treczoks@lemmy.world · 4 months ago

    An IBM PC portable. Yes, PC. 4.77 MHz, 256 KB of RAM, two floppy drives. It had a built-in green CRT screen of maybe 8 inches, and the keyboard could be used as a cover for the screen and disk drives.

    In theory, this thing was portable. If you were a bodybuilder. The case was steel, and the whole beast weighed about 15 to 20 kilograms.

  • space_of_eights@lemmy.ml · 4 months ago

    I worked as a lead developer for a major print shop with about 100 employees. The entire order workflow for all branches was shoehorned into one order management system that had initially been hacked together for one or two users. It was built on a then already ancient OpenERP system, with a PHP and Smarty frontend for the actual order management. All of it was hosted on one old Debian box, which was a VM on a Windows server.

    At some point, MT decided to slap a web shop onto this system, as part of the main code base. User data was saved into the same database with plain-text passwords. That was convenient for the support people: if somebody forgot their password, they could call support and have it read back to them over the phone.

    Another thing that made my hair stand on end was that every working file for every single order was retained indefinitely, even in light of the then-looming GDPR. This amounted to terabytes of data, much of it very private.

    I worked at the main branch. When a person walked in, there was a desktop computer at the counter: no password protection, an order management screen open by default. People could just walk in and start viewing orders at will. I am not sure whether they did, but we did push MT to at least have mandatory password protection on their PCs.
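    For contrast, a rough sketch of how those passwords could have been stored instead — salted, slow hashes that support physically cannot read back over the phone. The function names are just for illustration:

    ```python
    # Store a salt plus a slow, salted hash instead of the plain-text password.
    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest          # persist both; the plain text is never stored

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)
    ```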

  • slazer2au@lemmy.world · 4 months ago (edited)

    This wasn’t their fault and I feel for them.

    I work in telecommunications, and we got a call from a customer saying they were moving premises in 3 months’ time. They wanted us to check the place out to see if we could get a service in there. All went well, the timetable for the install went out, and we rocked up the day after they got the keys, only to find the previous tenant had taken ALL the copper cables with them. And when I say all, I mean ALL. Data, telephony, and power were stripped out all the way back to the demarc point of the building.

    They were fucking pissed. The lease on the existing place ran out about 3 weeks after they got the keys to the new place.

      • dfyx@lemmy.helios42.de · 4 months ago

        FileZilla itself is not the problem. Deploying to production by hand is. Every manual step is a potential source of mistakes: forget to upload a critical file, accidentally overwrite a configuration… better to automate that stuff.
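        Even a tiny script beats dragging files around by hand. A minimal sketch, assuming SSH access and rsync on the server; the paths, host, and list of critical files are made up:

        ```python
        # deploy.py - fail fast if a critical file is missing, then sync one
        # release directory in a single, repeatable command.
        import subprocess, sys
        from pathlib import Path

        RELEASE_DIR = Path("build")                      # hypothetical build output
        TARGET = "deploy@example.com:/var/www/app/"      # hypothetical server
        REQUIRED = ["index.php", "config.php"]           # hypothetical critical files

        missing = [f for f in REQUIRED if not (RELEASE_DIR / f).is_file()]
        if missing:
            sys.exit(f"aborting deploy, missing files: {missing}")

        subprocess.run(
            ["rsync", "-az", "--delete", f"{RELEASE_DIR}/", TARGET],
            check=True,
        )
        ```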

        • Contravariant@lemmy.world · 4 months ago

          Wait, so the production release would consist of uploading the files with FileZilla?

          If you can SSH into the server, why on earth use FileZilla?

        • Epzillon@lemmy.ml · 4 months ago

          This. I started at the company in 2023 and my first task, “start enhancing a 5-year-old project”, seemed fine until I realized the project wasn’t even using git, was publicly hosted online, and contained ALL customer invoices and sales data. On top of this I had to pull the files down from the live server via FTP, as the code didn’t exist anywhere else. It was kinda wild.

      • sznowicki@lemmy.world · 4 months ago

        It had a major security problem in like 2010. Later everyone moved to git and CI/CD so nobody knows what happened after that.

  • wintermute@discuss.tchncs.de · 4 months ago

    I was hired to implement a CRM for an insurance company, to replace their current system.

    Of course, no documentation or functional requirements were provided, so part of the task was to reverse engineer the current CRM.

    After a couple of hours trying to find some kind of backend code on the server, I discovered the bizarre truth: every bit of business logic was implemented in Stored Procedures and Triggers on a MSSQL database. There was no frontend code on the server either; users had some ActiveX controls installed locally that accessed the DB.

    • rekabis@lemmy.ca · 4 months ago

      every bit of business logic was implemented in Stored Procedures and Triggers on a MSSQL database.

      Provided the SPs are managed in a VCS and pushed to the DB via migrations (similar to Entity Framework), this is merely laborious for the devs. Provided the business rules are simple to express in SQL, this can actually be more performant than doing it in code (although it rarely ever is that simple).
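      A rough sketch of that migration approach, using pyodbc against MSSQL — the table and folder names are made up, and real scripts containing GO batch separators would need splitting first:

      ```python
      # Apply numbered .sql files in order and record each one, so the database
      # always matches what is checked into version control.
      import pyodbc
      from pathlib import Path

      def apply_migrations(conn_str: str, folder: str = "migrations") -> None:
          conn = pyodbc.connect(conn_str, autocommit=False)
          cur = conn.cursor()
          cur.execute(
              "IF OBJECT_ID('dbo.schema_migrations') IS NULL "
              "CREATE TABLE dbo.schema_migrations (name NVARCHAR(255) PRIMARY KEY)"
          )
          conn.commit()
          applied = {row.name for row in cur.execute("SELECT name FROM dbo.schema_migrations")}
          for script in sorted(Path(folder).glob("*.sql")):
              if script.name in applied:
                  continue
              cur.execute(script.read_text())
              cur.execute("INSERT INTO dbo.schema_migrations (name) VALUES (?)", script.name)
              conn.commit()
          conn.close()
      ```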

      There was no frontend code on the server either; users had some ActiveX controls installed locally that accessed the DB.

      This is the actual WTF for me.

      • wintermute@discuss.tchncs.de · 4 months ago

        There was no version control at all. The company that provided the software was really shady, and the implementation was so bad that the (only) developer was there full time fixing the code and data directly in production when the users had any issue (which was several times a day).

  • ser@lemm.ee · 4 months ago

    This was 5 years ago at a USD 200M multinational…

    The email system was POP3. There were no document backups. There were no collaboration tools. There was no IT security. You could basically copy company data out and no one would ever find out. The MS Office licenses were bought singly. Ahem!

  • DJDarren@thelemmy.club · 4 months ago

    Probably not as bad as some of the other examples here, but the company I currently work for has its 10 TB of shared drives backing up to a server that’s right next to it in the same cabinet. Those two servers, plus all of the networking hardware and a variety of ancillary devices, are all plugged into one socket via a bunch of extension cords.

    Yes, the boss has been told to get it sorted, but he’s the kind of older guy who doesn’t give a shit.

    • InFerNo@lemmy.ml · 4 months ago

      Happened to us. They put the backups on a different device, away from the servers, but still on the same premises. A cryptolocker locked everything on the network, including the backups. No off-site backups.

  • flamingo_pinyata@sopuli.xyz · 4 months ago

    Source control relying on two folders: dev/test and production. Git was prohibited because it would make it possible to see the history of who did what. Which made sense in a twisted way, since a previous boss used to single out people who made mistakes and harass them.

    • InFerNo@lemmy.ml · 4 months ago

      Just share a git user, come on. Have everyone check in under the same name, “development” or whatever. But no version control whatsoever?

  • Thurstylark@lemm.ee · 4 months ago

    Freight shipping company still running on a custom AS/400 application for dispatch. Time is stored as a 4-digit number, which means the nightside dispatchers have their own mini Y2K bug to deal with every midnight.
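    To illustrate the problem (this isn’t their actual code, just a sketch of what bare 4-digit HHMM values do to any interval that crosses midnight):

    ```python
    # Naive HHMM arithmetic breaks the moment an interval crosses midnight.
    from datetime import datetime, timedelta

    depart, arrive = 2330, 15        # 23:30 -> 00:15 the next day, stored as integers

    def to_minutes(hhmm):
        return (hhmm // 100) * 60 + hhmm % 100

    print(to_minutes(arrive) - to_minutes(depart))   # -1395, nonsense

    # Anchoring the times to real dates (or at least detecting the wrap) fixes it.
    d = datetime(2024, 7, 1, depart // 100, depart % 100)
    a = datetime(2024, 7, 1, arrive // 100, arrive % 100)
    if a < d:
        a += timedelta(days=1)       # crossed midnight
    print((a - d) // timedelta(minutes=1))           # 45 minutes
    ```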

    On one hand, hooray for computer-enforced fucking-off every night. On the other hand, the only people who could fix an entry stuck in the system because of this were on dayside.

    Apparently, this isn’t uncommon in the industry, which is probably the worst part to me.

    • paws@cyberpaws.lol · 4 months ago

      Hehe, I was in global shipping IT. We had some ooooold Solaris systems handling freight that kept halting data flows, and Windows 98 servers that handled data for very large shippers. Every daylight saving time change, something would break.

  • SuperiorOne@lemmy.ml · 4 months ago

    I was a backend developer for a startup company where:

    • Windows servers without any firewall or security hardening.
    • Docker Swarm without WSL, so we had to use 4 GB Windows base images for 50 MB web apps.
    • MSSQL without any replication or backups.
    • Redis installed on Windows via a 3rd-party tool that looked like a 2010-era keygen.
    • Malware exploited the Redis instance (what a surprise) and kept killing processes to mine crypto on the CPU… there’s a quick check for that sketched after this list.
    • The VPS provider forgot to activate the new Windows Server license on production, and it kept restarting every 30 minutes until I checked the logs and notified them about the missing license.
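    The quick check mentioned above: if an unauthenticated PING succeeds from outside, the instance is wide open to exactly that kind of abuse. A minimal sketch assuming the redis-py client; the host is a placeholder:

    ```python
    # Returns True if the Redis instance answers without any authentication.
    import redis

    def is_wide_open(host: str, port: int = 6379) -> bool:
        try:
            return bool(redis.Redis(host=host, port=port, socket_timeout=2).ping())
        except redis.exceptions.AuthenticationError:
            return False   # requirepass is set, at least
        except redis.exceptions.ConnectionError:
            return False   # unreachable, or protected-mode refused the connection

    print(is_wide_open("203.0.113.7"))
    ```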

    I left there after 6 months.

  • MeetInPotatoes@lemmy.ml · 4 months ago

    A behavioral health company deployed 25 iPads to field employees as patient data collection devices, all signed into the same iCloud account instead of using MDM or anything.

    They all had the same screen-lock PIN, and though most of the data was stored in a cloud-based service protected by a login, that app’s password was saved by default.

  • solomon42069@lemmy.world · 4 months ago

    One of my ex-employers sold a construction company a six-figure “building logistics system” which was just a Microsoft Access file. And the construction dudes had to use a CDMA dongle to remote desktop into a mainframe to open their Access files. A travesty.

  • feef@lemmy.world · 4 months ago

    Current company (Remote Desktop inception): Linux host machine -> Remote Desktop to Windows machine -> Remote Desktop to Linux machine.

    Bad frame rates, modifier keys that hardly ever work, super annoying to code in. The Windows machine resets all settings and files (besides the desktop and one specific folder) each day, so every day I have to reinstall a language pack, change display options, keyboard layout, etc.