I am seeking advice regarding my ebook collection on a Linux system, which is stored on an external drive and sorted into categories. However, there are still many unsorted ebooks. I have tried using Calibre for organization, but it creates duplicate files during import on my main drive where I don’t want to keep any media. I would like to:

  • Use Calibre’s automatic organization (tags, etc.) without duplicating files
  • Maintain my existing folder structure while using Calibre
  • Automatically sort the remaining ebooks into my existing categories/folder structure

I am considering using symlinks to maintain the existing folder structure, provided there is a simple way to automate the process, given my very large collection.

Regarding automatic sorting by category, I am looking for a solution that doesn’t require manual organization or a significant time investment. I’m wondering if there’s a way to extract metadata based on file hashes or any other method that doesn’t involve manual work. Most of the files should have title and author metadata, but some won’t.

Has anyone encountered a similar problem and found a solution? I would appreciate any suggestions for tools, scripts, or workflows that might help. Thank you in advance for any advice!

  • paddirn@lemmy.world · 2 months ago

    I’ve run into exactly the same issue with my large ttrpg ebook/pdf collection (100k+ files of data hoarding… it’s not a problem, I swear) and I’ve not really found an option I’m entirely happy with. Calibre duplicates everything, and if I just delete the originals I don’t like the thought of having my collection’s organization tied to a specific piece of software.

    Zotero is the least-bad option I’ve found, but it’s geared towards scholarly journals and such, so not great, but serviceable. Not sure if it’s on Linux though.

    Jellyfin is apparently able to handle ebooks with a plugin, though I didn’t particularly care for it when I tried it months ago.

    There’s a handful of other ebook software out there, mostly geared towards comics/manga, so depending on what you have those might be worth looking for.

    I’d like to use Obsidian for it and just turn the directory into a vault and let it automatically scan the folders for files, but that doesn’t work great either.

    The best piece of software I’ve seen that could potentially handle it is an app called Stashapp… which is unfortunately geared towards adult film. But its feature set, if it could be applied to PDFs, seems like it would be ideal.

    • astro_ray@piefed.social · 2 months ago

      Zotero is on Linux, and it has a LibreOffice plugin as well. I don’t like it much, though: it’s geared towards reference management, and while it offers some PDF/EPUB handling, I find its document management too tedious. It’s just easier for me to rename files, and that has served me well for a long time.

    • conciselyverbose@sh.itjust.works · 2 months ago

      Yeah, I’ve tried, both for actual files and for tracking my reading across multiple platforms, and nothing really seems to fit my needs, especially when I want to actually read them on an Android ereader. Anything I choose seems to involve a lot of frequent manual effort, or is just a dumpster fire of an actual reading experience.

      I feel like I’m eventually going to have to make my own, which is fine, I guess, but I’m definitely not comfortable actually managing a community project or just building up the codebase or documentation to the level someone else would be enthusiastic to use as a jumping off point to manage themselves, so it will probably just stay a personal project that ends up not helping anyone else solve the same problems I have.

      • rand_alpha19@moist.catsweat.com · 2 months ago

        Have you tried Kavita? I use it to read comics and e-books on my Android tablet and my Kindle Paperwhite. It also supports OPDS, so it’s compatible with some reading apps too, like KOReader, FBReader, Mihon/Tachiyomi, Moon+ Reader, etc.

        Website

        Demo - Username: demouser Password: Demouser64

        • conciselyverbose@sh.itjust.works · 2 months ago

          I’m aware of it and explored it a little, but the folder structure requirements are the opposite of what I’m interested in. I want to dump everything in one place and use the UX of my reader to manually build series, adjust metadata, and do everything else.

          Most of the benefits of it are really only useful in its browser based reader, which is also a dealbreaker, and it doesn’t really add anything to Moon Reader because OPDS integration doesn’t actually sync anything, which is the whole reason I’d want a dedicated server over just having everything in a cloud drive.

          It’s cool if it works for you, but it doesn’t really solve any of the problems I want solved.

          • rand_alpha19@moist.catsweat.com · 2 months ago

            Hm, well, hopefully my other comment helps you then. I don’t think there’s an automated tool for this — though a shell script might do the trick, or at least get you most of the way, if you have basic scripting knowledge.
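For the scripting route: once a title and author are known for a file (from embedded metadata or elsewhere), the sorting step itself is mechanical. A minimal sketch in Python; the helper names and the Author/Title folder layout are just illustrative assumptions, not anyone's actual setup:

```python
import os
import re
import shutil

def safe_name(text, fallback="Unknown"):
    """Strip characters that are awkward in file and folder names."""
    cleaned = re.sub(r'[\\/:*?"<>|]', "", text or "").strip()
    return cleaned or fallback

def target_path(library_root, author, title, filename):
    """Build an Author/Title/filename path under the library root."""
    return os.path.join(library_root, safe_name(author), safe_name(title), filename)

def file_book(src, library_root, author, title):
    """Move one book into place, creating folders as needed."""
    dest = target_path(library_root, author, title, os.path.basename(src))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(src, dest)
    return dest
```

Mapping authors or tags onto an existing category tree would take one more lookup table, but the move-into-folders part stays this simple.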

            • conciselyverbose@sh.itjust.works · 2 months ago

              I don’t want anything automated. I just want to be able to do it manually with a database that handles all of the metadata and organization and literally no folders but the top level one containing every file. Calibre’s insistence on me either having incorrect author information or splitting everything with multiple authors into unique folders for every combination is most of the reason I can’t stand it. The actual bulk editing tools are good. The end result of a mess of folders isn’t.

              I’m not OK with folders, especially nested folders.

              • rand_alpha19@moist.catsweat.com · 2 months ago

                Okay? My other comment might help you then: you can choose in the preferences whether or not to put your library in nested folders.

                If all else fails, make a post on the MobileRead forums; there are lots of nice and knowledgeable book people there with tons of Calibre experience.

                I’m not trying to get you to do something you don’t want to, so your wall of text doesn’t really make sense to be directed at me. I didn’t make Calibre.

  • solrize@lemmy.world · 2 months ago

    If the files are literally duplicated (exact same bytes in the files, so matching md5sums) then maybe you could just delete the duplicates and maybe replace them with links.
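A minimal sketch of that idea in Python, assuming the duplicates really are byte-identical: hash every file and swap later copies for symlinks to the first one seen.

```python
import hashlib
import os

def dedupe_to_symlinks(root, dry_run=False):
    """Replace byte-identical duplicates under `root` with symlinks to
    the first copy seen. Returns the list of paths that were replaced."""
    seen = {}       # content digest -> first path with that content
    replaced = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # skip links, including ones made on this run
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                if not dry_run:
                    os.remove(path)
                    os.symlink(seen[digest], path)
                replaced.append(path)
            else:
                seen[digest] = path
    return replaced
```

Running it once with dry_run=True to eyeball the list first seems prudent, and for a truly huge collection you’d want to hash in chunks rather than read whole files into memory.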

    Automatically sorting books by category isn’t so easy. Is the metadata any good? Are there categories already? ISBNs? Even titles and authors? It starts to be kind of a project, but you could possibly import MARC records (library metadata), which have some of that info in them, if you can match the books up to library records. I expect that the openlibrary.org API still works but I haven’t used it in ages.
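As a sketch of the openlibrary.org lookup (assuming the Books API still behaves as documented; the fields pulled out here are a guess at a sensible subset):

```python
import json
import urllib.parse
import urllib.request

API = "https://openlibrary.org/api/books"

def openlibrary_url(isbn):
    """Build the Open Library Books API URL for a single ISBN."""
    query = urllib.parse.urlencode({
        "bibkeys": f"ISBN:{isbn}",
        "format": "json",
        "jscmd": "data",
    })
    return f"{API}?{query}"

def lookup_isbn(isbn):
    """Fetch title and authors for an ISBN; None if no record comes back."""
    with urllib.request.urlopen(openlibrary_url(isbn), timeout=10) as resp:
        data = json.load(resp)
    record = data.get(f"ISBN:{isbn}")
    if not record:
        return None
    return {
        "title": record.get("title"),
        "authors": [a["name"] for a in record.get("authors", [])],
    }
```

The hard part stays the matching: you need an ISBN (or a confident title/author pair) per file before any of this helps.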

    • CoderSupreme@programming.devOP · 2 months ago

      If the files are literally duplicated (exact same bytes in the files, so matching md5sums) then maybe you could just delete the duplicates and maybe replace them with links.

      If it was only a handful of ebooks I’d consider using symlinks but with a large collection that seems daunting, unless there is a simple way to automate that?

      Automatically sorting books by category isn’t so easy. Is the metadata any good? Are there categories already? ISBNs? Even titles and authors? It starts to be kind of a project, but you could possibly import MARC records (library metadata), which have some of that info in them, if you can match the books up to library records. I expect that the openlibrary.org API still works but I haven’t used it in ages.

      If there’s still no simple way to get the metadata based on file hashes, I’ll just wait until AI becomes capable enough to retrieve it. I’m looking for a solution that doesn’t require manual organization or a big time investment. I’m not in a rush to solve this, and I can still find most ebooks by their title without any organization, after all.
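For the files that do carry their own title/author metadata, no hash lookup is needed: an EPUB is just a zip whose OPF file holds Dublin Core fields, and META-INF/container.xml says where that OPF lives. A minimal sketch, assuming well-formed EPUBs:

```python
import xml.etree.ElementTree as ET
import zipfile

NS = {
    "c": "urn:oasis:names:tc:opendocument:xmlns:container",
    "opf": "http://www.idpf.org/2007/opf",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def epub_metadata(path):
    """Return {'title': ..., 'author': ...} from an EPUB's OPF,
    with None for fields that are missing."""
    with zipfile.ZipFile(path) as z:
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//c:rootfile", NS).get("full-path")
        opf = ET.fromstring(z.read(opf_path))
    title = opf.find(".//dc:title", NS)
    author = opf.find(".//dc:creator", NS)
    return {
        "title": title.text if title is not None else None,
        "author": author.text if author is not None else None,
    }
```

PDFs would need a different reader (e.g. pypdf’s document info), and the stragglers without any embedded metadata are exactly the ones no hash-based scheme can rescue.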

      • constantokra@lemmy.one · 2 months ago

        I hope someone gives you a good answer, because I’d like one myself. My method has just been to do this stuff little by little. I would also recommend Calibre-Web for interfacing instead of Calibre. You can run both in Docker and access Calibre on your server from whatever computer you happen to be on. I find centralizing collections makes the task of managing them at least more mentally manageable.

        You might want to give an idea of the size of your library. What some people consider large, others might consider nothing much. If it is exceedingly large you’re better off asking someplace with more data hoarders instead of a general Linux board.

        • Ledivin@lemmy.world · 2 months ago

          I hope someone gives you a good answer

          I honestly don’t know that there is one. What OP is looking for is effectively an AI librarian… this is literally a full-time job for some people. I’m sure OP doesn’t have quite that many books, but the point remains.

          • solrize@lemmy.world · 2 months ago

            How many ebooks are you talking about (millions)? Is it just a question of finding duplicated files? That’s easy with a shell script. For metadata, check whether the books already have it, since a lot do. After that, you can use fairly crude hacks as an initial pass at matching library records. There’s code like that around already; try some web searches, maybe code4lib (library-related programming) if that is still around. I saw your earlier comment before you deleted it and it was perfectly fine.

  • thejoker954@lemmy.world · 2 months ago

    In regards to another reply: there are multiple options for dealing with duplicates in Calibre, from merging to deleting them.

    It’s been a while since I had to do any library setup with Calibre, and I’m kinda confused by what you are saying with “but it duplicates files when importing in a different drive where I don’t want ebooks.”

    Do you have multiple libraries in calibre?

    Are you using multiple library managers?

    Why not just use Calibre’s folder structure?

    And also, if you don’t know about it: the MobileRead forums have been around a long time and have a whole section on Calibre where you could probably get more specific help.

  • rand_alpha19@moist.catsweat.com · 2 months ago
    1. Open the Preferences in Calibre

    2. Click on “Saving books to Disk” (found under Import/Export)

    3. Make sure “Save cover separately,” “Update metadata in saved copies,” and “Save metadata in separate OPF file” are all unchecked.

    4. Adjust the “Save template” to the filename format that you prefer. You can use variables as folder names so, for example, {author_sort}/{title} would put everything by Stephen King in a folder titled “King, Stephen” and each book would be inside of a self-titled folder.

    5. Select all of the books you want, then click the floppy disk icon and save them to a temporary directory.

    6. Delete the old library, then import a new library (with the new filenames) from the temporary directory.

    7. Delete the temporary directory.

    Or you can just use symlinks. :P

    • CoderSupreme@programming.devOP · 2 months ago

      I don’t like keeping duplicate files, especially in my main drive where I don’t store media. If I didn’t mind duplicate files, it wouldn’t be an issue.

      • Ledivin@lemmy.world · 2 months ago

        I guess I’m a little confused… why are there duplicates at all? What operation are you performing that ends with duplicates?

        Is this drive just where you download them to, and then move them to your organized drive? Why do the books ever touch this drive at all? If there isn’t supposed to be media on the drive, why not just delete the source folder after the organization task is complete?

  • j4k3@lemmy.world · 2 months ago

    DistilRoBERTa https://huggingface.co/distilbert/distilroberta-base

    …was set up for something like that here, but note that the repo that runs this has an “unsafe” warning that I have not looked into: https://huggingface.co/spaces/nasrin2023ripa/multilabel-book-genre-classifier

    https://huggingface.co/spaces/nasrin2023ripa/multilabel-book-genre-classifier/tree/main

    It might be fine or whatnot, I’m on mobile and can’t see the file in question. The associated Python code might be a helpful starting point.

    In my experience, most models intentionally obfuscate copyright sources. They all know the materials to various degrees, but they are not intended to replicate sources. They all have strong interference in place to obscure both their recognition and reproduction potential. If, for instance, you can identify where errors are inserted and make a few corrections, they often continue adding a few details that are from the original source. If this is done a few times in a row, they tend to gain more freedom before reverting to obfuscation again. This is the behavior I look out for. It is a strong tool too if you get creative in application.

    Perhaps someone will post an API to look up the Library of Congress classification of a work based on a few lines or something. GL

  • silkroadtraveler@lemmy.today · 2 months ago

    Migrate to Calibre and use Calibre Virtual Libraries. However, based on the comments I’m reading, it looks like you want something that is not application-based. Good luck with that.

  • linearchaos@lemmy.world · 2 months ago

    I tried to ingest a four-terabyte EPUB library once. Even getting the data ingested with the author and title in the right spots was almost impossible. If the duplicates weren’t just slightly wrong it would be a different story, but the duplicates are often misspellings or different spellings.

    Realistically the best thing you can do is get an output of file name, title, and author, and hand-dedupe, but even then you’ll have to be careful about quality, language, and all kinds of other strange issues you run into with large libraries.

    In the end I gave up and only stored what I really wanted and would realistically ever need and that was small enough to hand cull.
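The "file name, title, author" listing for hand-deduping is itself scriptable. A sketch that takes already-extracted rows (however you got them) and writes a CSV sorted by title, so near-duplicate spellings land next to each other for review:

```python
import csv

def write_inventory(rows, out_path):
    """Write (path, title, author) rows to a CSV for manual review.
    `rows` is any iterable of 3-tuples, e.g. from whatever metadata
    extractor you have. Sorting by lowercased title puts near-duplicate
    spellings on adjacent lines, which makes hand-deduping faster."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "title", "author"])
        for path, title, author in sorted(rows, key=lambda r: (r[1] or "").lower()):
            writer.writerow([path, title, author])
```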

      • linearchaos@lemmy.world · 2 months ago

        Yeah, I wrote some Python once to give me voice control over a Plex server. Distance algorithms did okay, but I had a lot better results out of fuzzywuzzy.

        Kind of a dicey prospect, though, when you’re doing deduplication.

        Then you run into problems with things like Harry Potter and the Philosopher’s Stone versus Harry Potter and the Sorcerer’s Stone. Depending on how badly your database has degenerated, you can even end up with words out of order and all kinds of crazy crap. If the database is truly large, just truing it up can be unreasonably time-consuming.
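For the simple end of that spectrum, Python's standard library difflib covers basic similarity scoring; a sketch (the normalization and the 0.75 threshold are arbitrary choices that would need tuning against a real library):

```python
from difflib import SequenceMatcher

def title_similarity(a, b):
    """Similarity ratio in [0, 1]; case- and whitespace-normalized first."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def likely_same_book(a, b, threshold=0.75):
    """Crude flag for probable duplicate titles."""
    return title_similarity(a, b) >= threshold
```

The Philosopher’s/Sorcerer’s pair scores well above the threshold while unrelated titles fall far below it; for the words-out-of-order case, token-based scorers like those in thefuzz (formerly fuzzywuzzy) handle reordering that plain sequence matching does not.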

        I was pretty amazed at all the different versions of string search and comparison algorithms.