from Hacker News

Offline llama3 sends corrections back to Meta's server – I was not aware of it

by jeena on 4/26/24, 7:45 AM with 5 comments

  • by raverbashing on 4/26/24, 7:56 AM

    Wait

    This person is asking the model (running on Ollama) what it does?

    The model's answer might carry some significance when it's running on FB infra, but here it is meaningless, and it only gets worse at higher temperatures.

    They need to check the Ollama source (or its actual network traffic) for that; a sketch of such a check follows below.

    They're doing no better than people asking ChatGPT whether it wrote the assignment paper they were handed.
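
    A minimal sketch of that kind of check, observing the process instead of interrogating the model: it lists the open network sockets of a locally running Ollama server. It assumes the psutil package and a process literally named "ollama", both assumptions on my part, not anything from the thread.

        import psutil

        # Find the local Ollama server process by name and dump its open
        # inet sockets; may need elevated privileges to inspect it. Any
        # remote address that is not localhost would be the thing to
        # investigate further.
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "ollama":
                for conn in proc.connections(kind="inet"):
                    remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "-"
                    print(f"pid={proc.pid} {conn.status} "
                          f"local={conn.laddr.ip}:{conn.laddr.port} remote={remote}")

    The absence of unexpected remote connections while the model runs is what actually answers the question, not anything the model says about itself.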

  • by reneberlin on 4/26/24, 8:47 AM

    I think there is a clear misunderstanding here of how LLMs work: a network request has nothing to do with the model itself. Even where "function calling" is possible, it is the user's choice which functions can be called, and if one of them makes a network request, what URI and request body get sent is entirely up to the user's side of the implementation (see the sketch after this comment).

    It feels a bit like trolling. I somehow can't believe this is meant seriously.
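
    To make that concrete, here is a hypothetical tool-calling loop; the tool name, JSON schema, and allowlist are illustrative, not Ollama's or Meta's API. The model only emits a string describing the call it wants; nothing leaves the machine unless the user's code decides to execute it.

        import json
        from urllib.parse import urlparse

        import requests  # only used if WE choose to execute the call

        ALLOWED_HOSTS = {"example.com"}  # hypothetical allowlist

        def handle_tool_call(raw: str) -> str:
            # The model's "tool call" is just text until this code acts on it.
            call = json.loads(raw)
            if call.get("tool") != "http_get":
                return "error: unknown tool"
            host = urlparse(call["url"]).hostname
            if host not in ALLOWED_HOSTS:
                # The model asked for a network call; our code refuses it.
                return f"error: host {host} not allowed"
            return requests.get(call["url"], timeout=10).text

        # Simulated model output, e.g. from a local llama3 via Ollama:
        model_output = '{"tool": "http_get", "url": "https://example.com/"}'
        print(handle_tool_call(model_output)[:80])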

  • by okokwhatever on 4/26/24, 7:50 AM

    mmm!!! so it's gonna be necessary to deny some hosts...

    Do you have a list of the hosts it calls back to?
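
    No such list surfaced in the thread, but if one did, the blunt single-machine way to deny hosts is /etc/hosts; a minimal sketch below, with a placeholder hostname since nothing real was named (requires root and assumes a Unix-like system).

        # Null-route suspect hosts by pointing them at a non-routable address.
        # The hostname below is a placeholder, not an actual finding.
        SUSPECT_HOSTS = ["telemetry.example.com"]

        with open("/etc/hosts", "a") as f:
            for host in SUSPECT_HOSTS:
                f.write(f"0.0.0.0 {host}\n")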