from Hacker News

Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs

by milliondreams on 4/2/24, 3:34 AM with 10 comments

  • by CGamesPlay on 4/2/24, 5:38 AM

    Random, off-topic observation: it seems like a lot of open-source LLM projects ship prompts that aren't proofread. For example [0]: "ounch", "he spend", a general lack of punctuation, and so on. The big model providers don't do this [1], so I'm curious how much impact proper grammar in prompts has (and whether that impact is positive or negative).

    [0] https://github.com/allenai/lumos/blob/main/data/incontext.py...

    [1] https://twitter.com/AmandaAskell/status/1765207842993434880

  • by antupis on 4/2/24, 5:37 AM

    I think that to get truly working agents we need huge context windows, on the order of 1M-100M tokens; everything before that will be kinda hackish.

  • by zzzzzzzzzz10 on 4/2/24, 6:39 AM

    How can I run this locally? Does anyone know? I can't find any instructions.

  • by dvt on 4/2/24, 10:31 PM

    The paper [1] seems promising. Is there a fine-tuned model available, or will we have to fine-tune Llama-7b or Mistral-7b ourselves?

    [1] https://arxiv.org/pdf/2311.05657.pdf

  • by jondwillis on 4/2/24, 7:06 PM

    All of these papers and projects need less jargon, and more examples and proof of performance.

  • by milliondreams on 4/2/24, 3:34 AM

    Looks like a promising approach to agentic AI systems.
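
Regarding the questions above about running Lumos locally and whether fine-tuned models are available: below is a minimal sketch, assuming the released checkpoints are published on the Hugging Face Hub. The model ID and prompt wording are illustrative assumptions, not verified names; the repository's data/incontext.py defines the actual prompts used by the planning and grounding modules.

    # Minimal sketch: load a Lumos-style checkpoint with Hugging Face transformers.
    # The model ID is an assumption for illustration; check the allenai/lumos
    # repository for the actual released checkpoints and prompt format.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "ai2lumos/lumos_web_agent_plan_iterative"  # assumed, not verified

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID).to(device)

    # Ask the planning module for a subgoal plan for a toy task
    # (prompt wording paraphrased, not the exact template from incontext.py).
    prompt = (
        "Please provide a reasonable subgoal-based plan to solve the given task.\n"
        "Task: Find a one-way flight from Seattle to Boston."
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))

In the paper's design, a separate grounding module then turns each generated subgoal into executable actions, so the same loading pattern would apply to that checkpoint as well.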