from Hacker News

Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs)

by dvrp on 6/12/25, 5:51 AM with 17 comments

  • by phildini on 6/15/25, 8:19 PM

    I got very briefly excited that this might be a new application layer on top of meshtastic.
  • by jph00 on 6/15/25, 9:44 PM

    The paper link on that site doesn't work -- here's a working link:

    https://arxiv.org/abs/2506.06105

  • by smcleod on 6/16/25, 12:48 AM

    Out of interest, why does it depend on or at least recommend such an old version of Python? (3.10)
  • by kixiQu on 6/17/25, 6:07 PM

    Can someone explain why this would be more effective than a system prompt? (Or just point me to it being tested against that, I suppose)
  • by watkinss on 6/15/25, 10:03 PM

    Interesting work to generate LoRA adapters. Similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
  • by gdiamos on 6/15/25, 7:27 PM

    An alternative to prefix caching?
  • by etaioinshrdlu on 6/16/25, 1:07 AM

    What is such a thing good for?
  • by npollock on 6/15/25, 9:47 PM

    LoRA adapters modify the model's internal weights
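
    More precisely, a LoRA adapter leaves the base weights frozen and adds a trainable low-rank update alongside them. A minimal numpy sketch of that idea (names like `d_in`, `r`, `alpha` are illustrative, not from the thread or the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_out, d_in, r, alpha = 8, 8, 2, 4  # r << min(d_out, d_in)

    W = rng.standard_normal((d_out, d_in))  # frozen base weight
    A = rng.standard_normal((r, d_in))      # trainable down-projection
    B = np.zeros((d_out, r))                # trainable up-projection, init to zero

    # Effective weight at inference: base plus scaled low-rank update.
    W_eff = W + (alpha / r) * (B @ A)

    # With B initialized to zero, the adapter starts as a no-op.
    assert np.allclose(W_eff, W)

    # After training, the update B @ A still has rank at most r.
    B_trained = rng.standard_normal((d_out, r))  # stand-in for trained values
    update = (alpha / r) * (B_trained @ A)
    assert np.linalg.matrix_rank(update) <= r
    ```

    So the base weights are never overwritten; swapping adapters just swaps which low-rank update is added, which is what makes generating them with a hypernetwork (as in the linked paper) cheap relative to full fine-tuning.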
  • by vessenes on 6/15/25, 3:59 PM

    Sounds like a good candidate for an mcp tool!