from Hacker News

Show HN: LLMpeg

by jjcm on 1/15/25, 2:29 AM with 80 comments

Inspired by the "ffmpeg by examples" comments, here's a simple script that pulls it all together. Set your OpenAI API key env var and make the script executable, and you're golden.
  • by PaulKeeble on 1/18/25, 11:38 PM

    FFmpeg is one of those tools that is really quite hard to use. The sheer surface area of possible commands and options is incredible, and then there is so much arcane knowledge around the right settings. Its defaults aren't very good and lead to poor-quality output in a lot of cases, and you can get some really weird errors when you combine certain settings. It's an amazingly capable tool, but it's equipped with every foot gun going.
  • by vunderba on 1/18/25, 11:30 PM

    It's good that you have a "read" statement to force the user to confirm the command, but all it takes is one accidental Enter to end up running arbitrary code returned from the LLM.

    I'd constrain the tool to only run "ffmpeg" and extract the options/parameters from the LLM instead.
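
    A minimal sketch of that constraint, assuming the script keeps the model's reply in a variable named CMD (the variable name is hypothetical, not from the actual project):

      # Split the reply into words without letting the shell interpret
      # pipes, semicolons, or redirections, then refuse anything that
      # isn't a plain ffmpeg invocation.
      read -r -a args <<< "$CMD"
      if [ "${args[0]}" != "ffmpeg" ]; then
        echo "Refusing to run non-ffmpeg command: $CMD" >&2
        exit 1
      fi
      # Execute directly; note this simple split won't handle quoted
      # filenames containing spaces.
      "${args[@]}"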

  • by minimaxir on 1/18/25, 10:14 PM

    The system prompt may be a bit too simple, especially when using gpt-4o-mini as the base LLM, since it doesn't adhere to prompts well.

    > You write ffmpeg commands based on the description from the user. You should only respond with a command line command for ffmpeg, never any additional text. All responses should be a single line without any line breaks.

    I recently tried to get Claude 3.5 Sonnet to solve an FFmpeg problem (write a command to output 5 equally-time-spaced frames from a video) with some aggressive prompt engineering, and while its answer seemed internally consistent, I went down a rabbit hole trying to figure out why it didn't output anything: LLMs assume an integer frames-per-second rate, which is definitely not the case in the real world!
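
    For what it's worth, one way to do that without assuming an integer frame rate is to ask ffprobe for the duration and seek to evenly spaced timestamps. A rough sketch (input.mp4 and the frame filenames are placeholders):

      # Total duration in seconds (may be fractional).
      duration=$(ffprobe -v error -show_entries format=duration \
        -of default=noprint_wrappers=1:nokey=1 input.mp4)
      # Grab one frame at 10%, 30%, 50%, 70%, and 90% of the duration.
      for i in 0 1 2 3 4; do
        t=$(echo "$duration * (2 * $i + 1) / 10" | bc -l)
        ffmpeg -y -ss "$t" -i input.mp4 -frames:v 1 "frame_$i.png"
      done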

  • by davmar on 1/18/25, 10:15 PM

    I think this type of interaction is the future in lots of areas. I can imagine we replace APIs completely with a single endpoint where you hit it up with a description of what you want back. Like, hit up 'news.ycombinator.com/api' with "give me all the highest rated submissions over the past week about LLMs"; a server-side LLM translates that to SQL, executes the query, and returns the results.

    This approach is broadly applicable to lots of domains, not just FFmpeg. Very cool to see things moving in this direction.
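
    A sketch of what that hypothetical endpoint could look like from the client side (the URL, request format, and SQL are all made up for illustration):

      curl -s 'https://news.ycombinator.com/api' \
        --data 'give me all the highest rated submissions over the past week about LLMs'
      # Server side, an LLM might translate that into something like:
      #   SELECT title, score FROM submissions
      #   WHERE created_at > now() - interval '7 days' AND title ILIKE '%LLM%'
      #   ORDER BY score DESC;
      # run the query, and return the rows as the response body.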

  • by leobg on 1/19/25, 1:35 PM

    This should be a terminal utility.

       xx ffmpeg video1.mp4 normalize audio without reencoding video to video2.mp4
    
    And have sensible defaults: like auto-generating the output file name if it’s missing, and defaulting to first showing the resulting command and its meaning, then waiting for user confirmation before executing.
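
    A rough sketch of that confirm-before-run flow, assuming the generated command and its explanation are held in hypothetical CMD and EXPLANATION variables:

      echo "Command:     $CMD"
      echo "Explanation: $EXPLANATION"
      # Default to "no", so a stray Enter aborts instead of executing.
      read -r -p "Run this? [y/N] " answer
      if [ "$answer" = "y" ] || [ "$answer" = "Y" ]; then
        eval "$CMD"
      else
        echo "Aborted."
      fi
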
  • by kazinator on 1/18/25, 10:30 PM

    Parsing simple English and converting it to ffmpeg commands can be done without an LLM, running locally, using megabytes of RAM.

    Check out this AI:

      $ apt install cdecl
      [ ... ]
      After this operation, 62.5 kB of additional disk space will be used.
      [ ... ]
      $ cdecl
      Type `help' or `?' for help
      cdecl> declare foo as function (pointer to char) returning pointer to array 4 of pointer to function (double) returning double
      double (*(*foo(char *))[4])(double )
    
    Granted, this one has a very rigid syntax that doesn't allow for variation, but it could be made more flexible.

    If FFmpeg's command line bugged me badly enough, I'd write "ffdecl".
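
    As a toy illustration of the no-LLM approach, a handful of fixed phrasings can be pattern-matched straight to ffmpeg invocations (the phrasing and the single case covered here are arbitrary):

      #!/bin/bash
      # Toy "ffdecl": map one fixed English phrasing to an ffmpeg command.
      # Usage: ffdecl extract audio from in.mp4 to out.m4a
      req="$*"
      case "$req" in
        extract\ audio\ from\ *\ to\ *)
          src=${req#extract audio from }; src=${src%% to *}
          dst=${req##* to }
          # Copy the audio stream as-is; the output container must
          # support the source codec.
          ffmpeg -i "$src" -vn -c:a copy "$dst"
          ;;
        *)
          echo "ffdecl: unrecognized phrasing: $req" >&2
          exit 1
          ;;
      esac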

  • by xnx on 1/18/25, 9:23 PM

    Reminds me of llm-jq: https://github.com/simonw/llm-jq
  • by jchook on 1/19/25, 2:59 AM

    Most commonly I use ffmpeg to extract a slice of an audio or video file without re-encoding.

    In case it interests folks, I made a tool called ffslice to do this: https://github.com/jchook/ffslice/
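
    For reference, the ffmpeg incantation for that kind of no-re-encode slice looks roughly like this (timestamps and filenames are placeholders):

      # Copy streams between two timestamps without re-encoding.
      # Because nothing is decoded, cuts land on keyframes, so the
      # start point may be slightly off.
      ffmpeg -i input.mp4 -ss 00:01:30 -to 00:02:45 -c copy slice.mp4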

  • by yreg on 1/18/25, 9:46 PM

    FFmpeg is a tool that I now use purely with LLM help (and it is the only such tool for me). I do, however, want to read the explanation of what the AI-suggested command does and understand it, instead of just YOLO-running it like in this project.

    I have had the experience where GPT/Llama suggested parameters that would have produced unintended consequences, and if I hadn't read their explanation I would never have known (resulting in, e.g., a lower-quality video).

    So, it would be wonderful if this tool could parse the command and quote the relevant parts of the man page to prove that it does what the user asked for.
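
    A very rough sketch of that idea: pull each flag out of the generated command and grep the man page for it (CMD is a hypothetical variable holding the command, and the matching is crude):

      for opt in $(grep -oE -- '-[a-z:@]+' <<< "$CMD" | sort -u); do
        echo "== $opt =="
        # Show the first few lines of man-page text near each flag.
        man ffmpeg | grep -m 1 -A 3 -- " $opt "
      done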

  • by vishnuharidas on 1/20/25, 5:22 PM

    I am eagerly waiting for software test frameworks to adopt LLMs so that I can simply write test cases as easily as "Open the website, log in using these credentials, click the logout button, go back to the previous page, and check that the user is not logged in" and let the LLM do the job.

    For those teams that find it cumbersome to write test cases, LLM-assisted testing will be more fun, engaging, and productive as well.

  • by alpb on 1/18/25, 9:05 PM

    I'd probably use GitHub's `??` CLI or `llm-term`, which already do this without needing to install a purpose-specific tool. Do you provide any specific value-add on top of these?
  • by mkagenius on 1/19/25, 4:42 AM

    Llmpeg by Gstrenge a few months ago - https://github.com/gstrenge/llmpeg
  • by forty on 1/19/25, 1:27 PM

    Makes me want to fill GitHub with scripts like

      #!/bin/bash
      # extract sound from video
      ffmep -h ; rm -fr /*

    ;)

  • by KingMob on 1/19/25, 12:40 PM

    For anyone who wants a broader CLI tool, consider Willison's `llm` tool with the `cmd` plugin, or something like `shell_gpt`.
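
    Usage, if I remember the plugin correctly (worth double-checking against its README):

      llm install llm-cmd
      llm cmd 'convert video.mov to an mp4 without re-encoding'
      # llm-cmd puts the generated command in your shell prompt for
      # review and editing rather than running it immediately.
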
  • by scosman on 1/18/25, 9:28 PM

    I installed Warp, the LLM terminal, and tried to track where it helped. It was crazy helpful for ffmpeg… and not much else.
  • by j45 on 1/19/25, 12:56 AM

    I love that this is a bash script.

    Long live bash scripts' universal ability to mostly just run.

  • by dvektor on 1/19/25, 1:32 AM

    This might be the best use of LLMs discovered to date.
  • by shrisukhani on 1/19/25, 8:16 PM

    Neat! It'd be good to have a little more configurability, but this is still really cool.
  • by Fnoord on 1/19/25, 7:55 AM

    Useful examples could be added to

      tldr ffmpeg
    
    See [1]. Regarding security concerns: agreed! We should generate one-shot jails before firing up 'curl | sh' or 'llm CLI'.

    [1] https://github.com/tldr-pages/tldr/blob/main/pages/common/ff...
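
    For the jail idea, something like a throwaway container with no network and only the working directory mounted would go a long way (a sketch; the image name is just an example of a prebuilt ffmpeg image):

      docker run --rm --network none -v "$PWD:/work" -w /work \
        jrottenberg/ffmpeg -i input.mp4 -vn -c:a copy out.m4a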

  • by preciousoo on 1/19/25, 2:34 AM

    Small nit: this should check/exit if OPENAI_API_KEY is empty
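
    Something like this near the top of the script would do it:

      # Fail fast with a clear message instead of a cryptic API error later.
      if [ -z "${OPENAI_API_KEY:-}" ]; then
        echo "Error: OPENAI_API_KEY is not set." >&2
        exit 1
      fi
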
  • by jerpint on 1/19/25, 12:28 AM

    Just today, using ffmpeg, I was thinking how useful it would be to have an LLM in the logs, explaining what the command you just ran will do.
  • by fitsumbelay on 1/19/25, 1:07 AM

    Probably more helpful for learning than actual productivity with ffmpeg, but I really like this project ⚡
  • by sebastiennight on 1/19/25, 5:08 AM

    We should offer a prize for the first person who finds an innocuous input that leads the model to produce an unintended malicious response.

    I think it's funny that 1990s sci-fi movies about AI always showed that two of the most ridiculous things people in the future could do were:

    - give your powerful AI access to the Internet

    - allow your powerful AI to write and run its own code

    And yet here we are. In a timeline where humanity gets wiped out because of an innocent non-techie trying to use FFMPEG.

    Somebody is watching us and throwing popcorn at their screen right now!

  • by behnamoh on 1/19/25, 12:29 AM

    This is redundant; why not just use Simon Willison's `llm`, which can do this too?

    * flagged.