• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 11th, 2023



  • Balder@lemmy.world to Technology@lemmy.world · aight... i'm out..
    2 upvotes · 1 downvote · edited 16 days ago

    > Always has been. Nothing has changed.

    The fact that OpenAI stores everything you type doesn’t mean that when you write a prompt, ChatGPT will use any of that prior information as context, unless you had the memory feature turned on (which let you explicitly make it “forget” whatever you chose from that context).

    What OpenAI stores and what the LLM uses as input when you start a session are totally separate things. This update is about the LLM being able to search your prior conversations and reference them (use them as input, in practice), so saying “Nothing has changed” is false.
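
    To illustrate the separation, here’s a minimal sketch using the OpenAI Python SDK (the model name is just an example): the model only “sees” what’s passed in `messages` for that call, and anything stored server-side isn’t input unless a feature like this one injects it.

    ```python
    # Minimal sketch (OpenAI Python SDK, openai>=1.0): the model only gets
    # what is passed in `messages` for this call. Whatever OpenAI stores
    # server-side is separate; it isn't input unless something (like the
    # new memory/search feature) injects it into the prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What did I ask you yesterday?"},
        ],
    )

    # Without prior conversations injected into `messages`, the model has
    # no way to answer this, no matter what's stored on your account.
    print(response.choices[0].message.content)
    ```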


  • Balder@lemmy.world to Technology@lemmy.world · aight... i'm out..
    12 upvotes · 2 downvotes · edited 17 days ago

    Maybe for training new models, which is a totally different thing. This update means everything you type will be stored and can be used as context.

    I already never share anything personal with these cloud-based LLMs, but it’s becoming more and more important to have a local, private LLM on your own computer.
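
    As a sketch of what that can look like today (using llama-cpp-python; the model file path is just an example):

    ```python
    # Minimal sketch of a fully local LLM using llama-cpp-python.
    # The model path is an example; any GGUF model you've downloaded works.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize my notes: ..."}],
        max_tokens=256,
    )

    # Everything runs on your own machine; nothing you type leaves it.
    print(out["choices"][0]["message"]["content"])
    ```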

  • I remember listening to a podcast about scientific explanations. The host is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t an expert in himself.

    There was this episode where he got into how technology only evolves with science (because you need to understand what you’re doing, and you need a theory of how something works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: although the machine is new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.

    So his point in the episode was that real innovation can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand our knowledge. Sometimes those insights are completely random; often you need a whole career in the field; and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

    Even the current wave of LLMs is simply a product of Google’s paper showing that language models could be parallelized (the one that introduced the Transformer architecture), leading to the creation of “larger language models”. That was Google doing science. But you can’t control when some new breakthrough will be discovered, and LLMs are subject to that constraint.
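
    A toy sketch of that parallelism (plain NumPy, made-up sizes, nothing like a real model): attention over a whole sequence is a few matrix multiplications computed for every token at once, instead of an RNN-style loop that has to go token by token.

    ```python
    # Toy sketch of the parallelism the Transformer enabled: attention over
    # a whole sequence is a few matrix multiplications computed for every
    # position at once, instead of an RNN loop that goes token by token.
    import numpy as np

    seq_len, d = 8, 16                     # made-up toy sizes
    X = np.random.randn(seq_len, d)        # one embedding per token
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

    Q, K, V = X @ Wq, X @ Wk, X @ Wv       # all tokens projected in parallel
    scores = Q @ K.T / np.sqrt(d)          # every token attends to every token
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    out = weights @ V                      # (seq_len, d): no sequential loop
    ```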

    In fact, the only practice we know of that actually accelerates science is collaboration among scientists around the world: publishing reproducible papers so that others can build on them and have insights you didn’t even think of, and so on.



  • This seems to me like just a semantic difference, though. People will say the LLM is “making shit up” when it outputs something that isn’t correct, and that usually happens (as far as I know) because the information you’re asking about wasn’t represented well enough in the training data to reliably steer the answer toward it.

    In any case, users expect LLMs to somehow be deterministic when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. So a model can give the expected answer to a certain question, and a very unexpected one just because some word in the input was added or changed, even one that seems irrelevant.
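
    As a toy illustration (made-up numbers, not a real model): the model produces a probability distribution over the next token and the output is sampled from it, so even the exact same input can produce different answers.

    ```python
    # Toy illustration of why outputs aren't deterministic: the model yields
    # a probability distribution over the next token, and the answer is
    # *sampled* from it (temperature controls how spread out it is).
    import numpy as np

    tokens = ["yes", "no", "maybe"]
    logits = np.array([2.0, 1.0, 0.5])     # made-up scores for 3 tokens

    def sample(logits, temperature=1.0):
        p = np.exp(logits / temperature)
        return np.random.choice(tokens, p=p / p.sum())

    # Same input, run twice: you can get different tokens back.
    print(sample(logits), sample(logits))
    ```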