

@Passerby6497 my stance is that the LLM might recognize that the best way to solve the problem is to run Chromium, get the answer from there, and then pass it on.
@Passerby6497 I really don’t understand the issue here
If there is a challenge to solve, then the server has provided that to the client
There is no way around this, is there?
@rtxn I don’t understand how that isn’t client side?
Anything that is client-side can be, if not spoofed, then at least delegated to a subprocess, so my argument stands
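To make the delegation point concrete: a scraper (LLM-driven or otherwise) never has to solve a client-side challenge itself, it can just shell out to a real browser that executes the page's JavaScript and hand back the result. This is a minimal sketch assuming a `chromium` binary is on `PATH`; the flags are real Chromium flags, but the workflow and function names are hypothetical.

```python
import subprocess

def build_headless_fetch(url, timeout_ms=5000):
    """Build a headless-Chromium command that loads a page, runs its
    JavaScript (including any client-side challenge), and prints the
    resulting DOM to stdout."""
    return [
        "chromium",
        "--headless",
        "--disable-gpu",
        # Let page scripts run for up to timeout_ms of virtual time
        # before the DOM is dumped.
        f"--virtual-time-budget={timeout_ms}",
        "--dump-dom",
        url,
    ]

def fetch_rendered(url):
    # The calling process just reads the post-challenge HTML from stdout;
    # from the server's perspective, a legitimate browser visited the page.
    result = subprocess.run(build_headless_fetch(url),
                            capture_output=True, text=True, timeout=30)
    return result.stdout
```

The point of the sketch is that the challenge-solving work is entirely contained in the subprocess, so whatever the server sends, the client can always delegate it to something that behaves like a normal browser.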
@mfed1122 yeah that is my worry: what’s an acceptable wait time for users? A tenth of a second is usually not noticeable to a human, but is it useful in this context? What about half a second, and so on?
I don’t know that I want a web where everything is artificially slowed by a full second for each document
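The trade-off above can be made concrete with a quick back-of-envelope calculation. All numbers here are illustrative assumptions, not measurements: the idea is that a fixed per-document delay is barely felt by one human but compounds for a bulk scraper.

```python
def total_delay_seconds(pages, delay_s, parallelism=1):
    """Wall-clock time spent on challenge delays alone, assuming
    `parallelism` challenges are solved concurrently."""
    return pages * delay_s / parallelism

# One human browsing ~30 pages in a session, half a second each.
human = total_delay_seconds(pages=30, delay_s=0.5)

# A bulk scraper fetching a million pages, even at 100-way parallelism.
crawler = total_delay_seconds(pages=1_000_000, delay_s=0.5, parallelism=100)

print(human)    # 15.0 seconds, spread over a whole session
print(crawler)  # 5000.0 seconds (~83 minutes) of pure challenge overhead
```

Under these assumptions the asymmetry is real, but so is the worry in the comment: the human still pays half a second on every single document.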