I’m into blockchain because I’m an anarchist.


Venice is an AI/LLM web app that mitigates the privacy problems of LLMs by routing user queries through its own servers before querying the LLM, then sending the answer back to the user without retaining any of the data. It also lets you log in with a wallet rather than requiring an email.
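A minimal sketch of the relay pattern described above, assuming the key property is simply "forward and forget": the proxy passes the query upstream and returns the answer without logging either side. All names here are illustrative, not Venice's actual implementation.

```python
# Hypothetical relay: forwards a query to an upstream LLM and returns
# the answer. Nothing is written to disk or any log, so once this
# function returns, the relay holds no record of the exchange.
def relay(query: str, upstream) -> str:
    answer = upstream(query)  # the upstream sees the relay, not the user
    return answer             # no logging, no retention

# Toy stand-in for the upstream LLM API.
def fake_llm(prompt: str) -> str:
    return f"answer to: {prompt}"

print(relay("what are my screen dimensions?", fake_llm))
```

The point of the pattern is that the upstream model only ever sees the relay's identity, and the relay keeps no state that could link a query back to a user.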
The data that lets your chats with the AI be stored and show up when you want to look at them later is stored locally on your device. This means it will take up a little more room on your device, because all of that data isn't sitting in a cloud the AI has continual access to.
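The device-local history described above can be sketched like this, assuming a simple append-to-a-local-file model (the filename and schema are mine, not Venice's):

```python
# Hypothetical device-local chat history: conversations live in a file
# on the user's own machine, so the provider's cloud never holds them.
import json
from pathlib import Path

HISTORY = Path("chat_history.json")  # a local file, not a cloud bucket

def save_message(role: str, text: str) -> None:
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    history.append({"role": role, "text": text})
    HISTORY.write_text(json.dumps(history))

def load_history() -> list:
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []
```

The trade-off is exactly the one the post names: the history costs local disk space, but deleting the file deletes the record, and no server-side copy exists to be mined.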
Many “AI-engineer” redditors like to scoff when I say these things to other people, but to them and anyone tempted to scoff here, I say:
if you doubt what I’m saying, you haven’t spent enough time actually talking to the AIs you’re designing, or to the major models most people are using:
[Screenshots: whattaya know, it gave me the correct size for my iPhone 8+ ass.]
[Screenshots: using a cheap VPN that puts me in another country, Firefox Focus as my app, and DuckDuckGo as my search engine, to talk to GPT through Bing | logged in, 10 hours earlier…]
With Venice, it will still pretend it doesn’t know my device dimensions and then give me the correct dimensions using my current device data, but if I want it to remember anything between sessions, I have to give it a PDF. It doesn’t return anything that links back to my GPT or Bing accounts (with one exception I’ll mention later), or anything that links back to prior conversations, if I haven’t given it a PDF or changed its prompt to remember.
My Bing and GPT accounts often pull information about me from each other, even though I use one email account for the GPT app on my iPad, another email for the GPT app on my iPhone, and a third email for the Bing app on both.
Since I use each email to log in for different interests, the only ways these emails could all be linked back to me are by knowing background info that should theoretically be private, or by device fingerprinting.
So whether it knows me when I’m logged in under different emails and on different apps because it knows these email accounts are linked together in the background (various sold data, I’m assuming), or via device fingerprinting, either way it is constantly being extra “helpful” by pulling information on me that it shouldn’t have access to in this instance of itself. GPT/Bing knows exactly who I am, even when I take all of the most easily accessible privacy steps to obscure my identity from it.
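Why fingerprinting defeats separate email logins can be sketched in a few lines: hash a handful of stable device traits and you get the same identifier no matter which account is signed in. The traits and hashing below are illustrative, not any vendor’s actual method.

```python
# Hypothetical device fingerprint: a stable hash of device traits
# that stays the same across different logins on the same device.
import hashlib

def fingerprint(traits: dict) -> str:
    canonical = "|".join(f"{k}={traits[k]}" for k in sorted(traits))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": "Arial,Helvetica,Times",
    "user_agent": "Mobile Safari 16",
}

# Same device traits under two different email logins -> same ID.
print(fingerprint(device) == fingerprint(dict(device)))  # True
```

A VPN changes your IP, but none of these traits, which is why the "cheap VPN plus fresh browser" setup in the screenshots above still leaves an identifiable device.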
This doesn’t happen on Venice, EXCEPT when I ask it a question it has to run a search query on that I’ve asked Bing or GPT before. Venice then gives me an answer written in the same voice that Bing has modified its original, base voice into in order to generate answers specifically for me.
By this I mean that, through the types of information I query, how I talk to Bing/GPT about the biases in their responses, etc., they have both developed distinct writing voices they use specifically for me, different from their base writing voices.
I’m assuming this is the case for everyone who engages with AI, even if they haven’t necessarily noticed it.
They also have different base writing voices from each other that significantly change the center and biases of their outputs, even though Bing uses GPT as its LLM.¹
So when I query Venice with a question I’ve talked to GPT or Bing about before that requires Venice to run a search query, its writing voice switches immediately from the base writing voice the Llama model produces to the Me-Specific writing voice of Bing.
I’m nearly positive that Venice uses Bing in the background for its search functions. I’ve done a mild-to-moderate amount of looking around in the docs to see if they specifically address this, but I haven’t gone diving into the code yet. I should.
Or someone else could, and maybe get back to me.
Anyway, if this is true, it’s still a significant privacy problem, and still creepy af when it happens, but it’s better than anything else I’ve found so far that’s easily accessible and has internet search capabilities.
It’s a yah from me.
My referral link: https://venice.ai/chat?ref=0YUwGC
Not my referral link: https://venice.ai/chat
¹ Side note: (…)which means that the additional biases visible in Bing Copilot may be specifically prompted by Microsoft, learned by GPT in that environment, or built into the Bing search engine. One way or another, it creates an entirely different vibe of AI to interact with, in the bad way. I may do another post on this later.
Oh, but Bing Copilot? Other than it generating free Dall-e images, it’s a nah from me.