I'm interested in seeing how this changes when using the DuckDuckGo front end at duck.ai.
There's no login, and history is stored locally (probably remotely too).
Is there a way to fake all the data they try to collect?
Pretty sure this is what they scrape from your device if you install their app. I don't know how else they would get access to contacts and location and stuff. So yeah, you can just run it on a virtual Android device and feed it garbage data, but I assume the app or their backend will detect that and throw out your data.
How about if I only use the web version?
Root your phone and install XPrivacy (or XPrivacyLua if your phone isn't 10 years old).
And what about goddamn Mistral?
It's French as far as I know, so at least it abides by the GDPR by default.
Just clarifying, does this report mean it's collected while a user is using the tools, or data that is generally scraped from the internet?
They're talking about what is being recorded while the user is using the tools (your prompts, RAG data, etc.)
Does that include generated responses?
Nobody knows! There's no specific disclosure that I'm aware of (in the US at least), and even if there was I wouldn't trust any of these guys to tell the truth about it anyway.
As always, don't do anything on the Internet that you wouldn't want the rest of the world to find out about :)
DeepSeek at home: None
How much VRAM does your machine have? Are you using Open WebUI?
Doesn't the official local app still have telemetry? I might be remembering wrong
Aren't they supposed to collect data?
Or you could use DeepSeek's workaround and run it locally. You know, open source and all.
Me when Gemini (aka google) collects more data than anyone else:
Not really shocked, we all know that google sucks
I would hazard a guess that the only reason those others aren't as high is that they don't have the same access to data. It's not that they don't want to; they simply can't (yet).
Which is good (for now). Glad I don't use that shit
I have a bridge to sell you if you think grok is collecting the least amount of info.
Ikr XD
I’m curious what data t3chat collects. They support all the models and I’m pretty sure they use Sentry and Stripe, but beyond that, who knows?
Anthropic and OpenAI both have options that let you use their API without training the system on your data (not sure if the others do as well), so if t3chat is simply using the API, it may be that they themselves are collecting your inputs (or not, you'd have to check the TOS), but maybe their backend model providers are not. Or, who knows, they could all be lying too.
Back in the day, malware makers could only dream of collecting as much data as Gemini does.
Locally run AI: 0
Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.
If you want to start playing around immediately, try Alpaca on Linux or LM Studio on Windows. See if it works for you, then move on from there.
Alpaca actually runs its own Ollama instance.
If by "more learning" you mean learning
ollama run deepseek-r1:7b
Then yeah, it's a pretty steep curve!
If you're a developer then you can also search "$MyFavDevEnv use local ai ollama" to find guides on setting up. I'm using Continue extension for VS Codium (or Code) but there's easy to use modules for Vim and Emacs and probably everything else as well.
The main problem is managing your expectations. The full DeepSeek is a 671b model (that's billions of parameters), and the model weights (the thing you download when you pull a model) are 404GB. You'd need roughly that much RAM available to run it.
They make distilled models though, which are much smaller but still useful. The 14b is 9GB and runs fine with only 16GB of RAM. They obviously aren't as impressive as the cloud-hosted big versions, though.
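If you want a rough sense of where those sizes come from: download size is basically parameter count times bits per weight. A quick back-of-envelope sketch in Python (the bits-per-weight numbers are my guesses for typical quantized releases, not official figures):

# Back-of-envelope: download size ≈ parameters × bits-per-weight ÷ 8.
# The bits-per-weight values are assumptions for common quantized builds.
def approx_size_gb(params_billion, bits_per_weight):
    # billions of params × bits each, ÷ 8 bits per byte → gigabytes
    return params_billion * bits_per_weight / 8

print(f"{approx_size_gb(671, 4.8):.0f} GB")  # ~403 GB: the full 671b weights
print(f"{approx_size_gb(14, 5.1):.0f} GB")   # ~9 GB: the 14b distill

Which lines up with the 404GB and 9GB figures above, and explains why the distills fit on normal hardware.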
Check out Ollama, it's probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.
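For example, here's a minimal Python sketch talking to that API directly (assuming Ollama is running on its default port 11434 and you've already pulled the model; nothing leaves your machine):

import requests

# Query a locally running Ollama server over its REST API.
# Assumes the model was fetched first with: ollama pull deepseek-r1:7b
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:7b",
        "messages": [{"role": "user", "content": "Hello from my own machine"}],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["message"]["content"])

That same local endpoint is what frontends like Open WebUI point at.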
If only my hardware could support it...
I can actually run some smaller models locally on my 2017 laptop (though I have increased the RAM to 16 GB).
You'd be surprised how much can be done with how little.
Who would have guessed that the advertising company collects a lot of data
And I can't possibly imagine that Grok actually collects less than ChatGPT.
Data from Surfshark, aka NordVPN, lol. Take it with a few chunks of salt.