• macniel@feddit.org
    16 days ago

    How would the machine know where the string would stop, since a string could contain literally any character?

    But yeah… a .text section would be an alternative.

    • LalSalaamComrade@lemmy.ml (OP)
      16 days ago

      I am talking about modern, or slightly dated but easy-to-implement, alternatives to C strings: for example, the pointer+length encoding used in Rust (also called the record method, I think?), or the Pascal string method.

      • Blue_Morpho@lemmy.world
        16 days ago

        You answered your own question. Strings with length are better than null-terminated. Null termination was a mistake in the original C library, and probably a hack because the PDP-11 used the ASCIZ format.

        • letsgo@lemm.ee
          15 days ago

          Lower performance, though. At each iteration through the string you need to compare the length against a counter, and if you want strings longer than 255 characters, that length has to be multibyte. With NTS you don’t need the counter or the multibyte comparison, strings can be indefinitely long, and you only need to check whether the byte you just looked at is zero, which most CPUs do for free, so you just use a branch-if-[not-]zero instruction.

          The terminating null also gives you a fairly obvious visual clue where the end of the string is when you’re debugging with a memory dump. Can you tell where the end of this string is: “ABCDEFGH”? What about now: “ABCD\0EFGH”?

      • SubArcticTundra@lemmy.ml
        16 days ago

        Another alternative I’ve seen is strings that are not null terminated but where the allocated memory actually begins at ptr[-1] and contains the length of the string. The benefit is that you still get a char array starting at ptr[0].

  • tunetardis@lemmy.ca
    15 days ago

    Better in what sense? I put some thought into this when designing an object serialization library modelled on binary JSON.

    When it came to string encoding, I had to decide between null-terminated and length + data. The former is very space-efficient, particularly when you have a huge number of short strings. And let’s face it, that’s a common enough scenario. But it’s nice to have the length up front when you are parsing the string out of a stream.

    What I did in the end was come up with a variable-length integer encoding that somewhat resembles what they do in UTF-8. It means for strings < 128 chars, the length is a single byte. Longer than that and more bytes get used as necessary.

    • LalSalaamComrade@lemmy.ml (OP)
      15 days ago

      What about data structures like gap buffer or piece table? Would they be ideal for something like, say, a TUI-interface application?

    • zarenki@lemmy.ml
      15 days ago

      > a variable-length integer encoding that somewhat resembles what they do in UTF-8. It means for strings < 128 chars, the length is a single byte. Longer than that and more bytes get used as necessary.

      What you used might be similar to unsigned LEB128, which is used in DWARF, WebAssembly, Android’s DEX format, and protobuf. Essentially, it encodes 7 bits of the number in each byte, with the high bit set to 1 in every byte except the last one.

      Though unlike UTF-8, the number’s length isn’t encoded in the first byte but is instead implied by the final byte, arguably making the number’s encoding similar to a terminated string.