A chart titled “What Kind of Data Do AI Chatbots Collect?” lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.

  • Gemini: Collects all 10 data types; highest total at 22 data points
  • Claude: Collects 7 types; 13 data points
  • CoPilot: Collects 7 types; 12 data points
  • Deepseek: Collects 6 types; 11 data points
  • ChatGPT: Collects 6 types; 10 data points
  • Perplexity: Collects 6 types; 10 data points
  • Grok: Collects 4 types; 7 data points
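The chart's totals can be captured in a small structure and sanity-checked. A minimal sketch — the dictionary below holds only the totals listed above; the per-type breakdowns are not in the chart:

```python
# Totals from the February 2025 chart: (data types collected, unique data points).
chatbot_data = {
    "Gemini":     (10, 22),
    "Claude":     (7, 13),
    "CoPilot":    (7, 12),
    "Deepseek":   (6, 11),
    "ChatGPT":    (6, 10),
    "Perplexity": (6, 10),
    "Grok":       (4, 7),
}

# Rank chatbots from most to least data points collected.
ranking = sorted(chatbot_data, key=lambda name: chatbot_data[name][1], reverse=True)
print(ranking)  # Gemini first, Grok last
```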
    • will_a113@lemmy.ml (OP)

      Not that we have any real info about who collects/uses what when you use the API

      • morrowind@lemmy.ml

        Yeah we do, they list it in privacy policies. Many of these they can’t really collect even if they wanted to

      • zr0@lemmy.dbzer0.com

        All services you see above are provided to EU citizens, which is why they also have to abide by the GDPR. The GDPR does not disallow the gathering of information. Google, for example, is GDPR compliant, yet they are number 1 on that list. That's why I would like to know whether European companies still try to build a business case on personal data or not.

        • Sips'@slrpnk.net

          If there's one thing I don't trust, it's non-EU companies following GDPR. Sure, they're legally bound to, but I mean, Meta doesn't care, so why should the rest?

          (Yes, I'm being overly dramatic about this, but I lost trust in big tech companies ages ago.)

        • Susurrus@lemm.ee

          It doesn’t mean they “have to abide by GDPR” or that they “are GDPR compliant”. All it means is they appear to be GDPR compliant and pretend to respect user privacy. The sole fact that the AI chatbots are run in US-based data centres is against GDPR. The EU has had many different personal data transfer agreements with the US, all of which were canceled shortly after signing due to US corporations breaking them repeatedly (Facebook usually being the main culprit).

          • zr0@lemmy.dbzer0.com

            I tried to say that, but you were better at explaining, so thank you. Without a court case, you will essentially never know whether they are truly GDPR compliant.

    • Gadg8eer@lemm.ee

      The Broligarchy: “Everything.”

      Me: Squints Pours glowing demon tanning lotion on ground

      Trump: “You dare dispute my rule?! And you would have these… mongrels… come here to die?”

      Open Source Metaverse online. Launching Anti-StarLink missiles…

      Warning. FOSS Metaverse alternative launch detected.

      The Broligarchy: “This was not how it was supposed to be…”

      Me: “Times change. But war, war never changes.”

      “We will never be slaves. But we WILL be online. For the Open Source Metaverse we deserve!”

      Anyway, hopefully that’s the real future in some sense. The metaverse is, technologically, in a state resembling 1995’s World Wide Web. We can stop the changes that made social media happen the first time, but that comes at a grave cost of its own… Zero tolerance for interference with the FOSS paradigm. This means no censorship even for the most vile of content, and no government authority over online activity ever again. It also means we have less than 150 years to become immortal because having children inherently puts kids at risk of sexual exploitation, so everyone - literally everyone - must be made infertile permanently to make that impossible.

      Life extension is actually plausible, and omnispermicide would make denying it a war crime. That is the only fix I can see, but all of you would never pay it. That is why I stopped writing; every goddamn story and society at large championed “anti-escapism” in 2017 and onwards, and I will NEVER forgive you all for that. Fuck reality. I Have No Truth and I Must Dream. I want to die because I hate you all.

    • TangledHyphae@lemmy.world

      +1 for Mistral; they released some of the first (or the very first) Apache-licensed open-source models. I run Mistral-7B and variant fine-tunes locally, and they've always been really high quality overall. Mistral-Medium packed a punch (mid-size, obviously), but it definitely competes with the big ones, at least.

  • krnl386@lemmy.ca

    Wow, it’s a whole new level of f*cked up when Zuck collects more data than Winnie the Pooh (DeepSeek). 😳

    • Octagon9561@lemmy.ml

      The idea that US apps are somehow better than Chinese apps when it comes to collecting and selling user data is complete utter propaganda.

      • Duamerthrax@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        8 months ago

        Don’t use either. Until Trump, I still considered CCP spyware more dangerous because they would be collecting info that could be used to blackmail US politicians and businesses. Now, it’s a coin flip. In either case, use EU or FOSS apps whenever possible.

    • serenissi@lemmy.world

      Nope, these services almost always require a user login, eventually tied to a cell number (i.e. non-disposable), and they associate user content and other data points with the account. Nonetheless, user prompts are always collected. How they’re used is a good question.

        • serenissi@lemmy.world

          Yes, it is possible to create disposable-esque API keys for different uses. The monetary cost is the price of privacy, and of not having the hardware to run things locally.

          If you have reliable, privacy-friendly API vendor suggestions, do share. While I don’t need such services now, it could be a good future reference.
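The per-use keys described above can be sketched as a small routing helper. Everything below — the profile names, environment-variable names, and `key_for` helper — is hypothetical illustration, not any vendor's actual API; the idea is simply that each use gets its own separately revocable key:

```python
import os

# Hypothetical mapping of uses to environment variables, each holding a
# separate, revocable API key. Revoking one key "disposes" of that
# identity without touching the others.
KEY_VARS = {
    "writing":  "LLM_KEY_WRITING",
    "coding":   "LLM_KEY_CODING",
    "personal": "LLM_KEY_PERSONAL",
}

def key_for(use: str) -> str:
    """Return the API key for one compartmentalized use, or fail loudly."""
    var = KEY_VARS.get(use)
    if var is None:
        raise ValueError(f"unknown use: {use}")
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"set {var} before using the '{use}' profile")
    return key

# Example: export LLM_KEY_CODING=sk-... in the shell, then key_for("coding")
# hands that key to whatever client library you use for that profile.
```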

  • Sonalder@lemmy.ml

    Does anyone have this data for Mistral, HuggingChat and MetaAI? It would be nice to add them too.

  • scintilla@lemm.ee

    Am I missing something? What do the numbers mean in relation to the type? Subtypes?

    • ohwhatfollyisman@lemmy.world

      perhaps it’s the limit of each data type?!

      gemini harvests only your first four contacts, your last two locations, and so on.

      how does one defeat that? have fewer than four friends and don’t go out!

    • caoimhinr@lemmy.world

      It’s labeled “Unique data points”. See the number 2 by Usage Data for Gemini; there’s an arrow with a label there.

  • lib1 [comrade/them]@hexbear.net

    I’m curious what data t3chat collects. They support all the models and I’m pretty sure they use Sentry and Stripe, but beyond that, who knows?

    • will_a113@lemmy.ml (OP)

      Anthropic and OpenAI both have options that let you use their API without training the system on your data (not sure if the others do as well). So if t3chat is simply using the API, it may be that they themselves are collecting your inputs (or not; you’d have to check the TOS), but their backend model providers may not be. Or, who knows, they could all be lying too.