by evtothedev on 9/20/24, 4:25 PM with 1 comment
As LLMs have become ubiquitous in web applications, I've noticed that prompts intended for Claude or GPT end up scattered throughout our codebase or buried inside objects. Often, these prompts are built inline through string manipulation. My thinking was two-fold: 1) come up with a simple pattern for organizing and rendering these prompts, and 2) make them easy to review.
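For illustration, the kind of inline prompt-building this pattern is meant to replace often looks something like the following (class and method names here are hypothetical):

# Hypothetical "before": the prompt is assembled by hand with string
# interpolation inside a service object, interleaved with application logic
# and invisible to any template or review tooling.
class SupportAnswerService
  def initialize(user, question)
    @user = user
    @question = question
  end

  def prompt_text
    "You are a support agent for #{@user.company_name}. " \
      "Answer the customer's question concisely.\n\n" \
      "Question: #{@question.strip}"
  end
end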
This draws heavy inspiration from ActionMailer::Preview.
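For context, ActionMailer::Preview works by registering a class whose methods return mail objects; Rails then renders them in the browser under /rails/mailers so they can be reviewed before shipping. A prompt equivalent would presumably follow the same shape. The mailer example below is the standard Rails pattern; the prompt preview is a hypothetical sketch, not this gem's confirmed API:

# test/mailers/previews/user_mailer_preview.rb (standard Rails mailer preview)
class UserMailerPreview < ActionMailer::Preview
  def welcome_email
    UserMailer.welcome_email(User.first)
  end
end

# A prompt preview could follow the same shape (hypothetical names and API):
# test/prompts/previews/support_agent_prompt_preview.rb
class SupportAgentPromptPreview
  def ask
    SupportAgentPrompt.new(question: "What's the best menu item?").render
  end
end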
by graypegg on 9/20/24, 5:09 PM
config.action_prompt.delivery_method = :chatgpt_api
config.action_prompt.chatgpt_api_settings = {
  api_key: ENV["OPENAI_API_KEY"]
}
...
class ApplicationPrompt < ActionPrompt::Base
  before_action :set_user

  def set_user
    @user = current_user || AnonUser.new
  end
end
class SupportAgentPrompt < ApplicationPrompt
  def ask(question)
    @question = question.strip.downcase

    # Renders `views/prompts/support_agent/ask.text.erb`; ideally handles
    # recognizing JSON and stripping out whitespace in the response.
    if result = prompt(question:)
      broadcast_append_to @user,
        target: :support_chat,
        partial: 'support_chat/message',
        locals: { message: result, from: SupportAgentUser.new }
    end
  end
end
...
SupportAgentPrompt.ask("What's the best menu item?").prompt_later
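The views/prompts/support_agent/ask.text.erb template referenced in that sketch might be a plain ERB file along these lines (contents are illustrative):

You are a support agent for a restaurant, replying to <%= @user.try(:name) || "an anonymous visitor" %>.
Answer in one short paragraph. Respond with JSON of the form {"answer": "..."} and no extra whitespace.

Question: <%= @question %>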