How to test your prompts across multiple LLMs

It only takes a couple of clicks

Written by Dan Cleary
Updated over 4 months ago

Models are constantly being updated and new ones launch all the time, which makes it important to test your prompts across a wide variety of models.

PromptHub makes this extremely easy, taking just a few clicks:

  1. On the playground page, click the "batch" icon in the "Run Test" button

  2. Select the number of times you want the prompt to run

  3. For each test run, update the model by clicking on it and selecting any model

  4. Click "Run as Batch" and you'll be able to judge outputs from different models side by side (a scripted sketch of the same idea follows below)
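
If you want to see what a batch run like this does conceptually, here is a minimal sketch that sends the same prompt to several models and prints the outputs for side-by-side review. It uses the OpenAI Python SDK directly rather than PromptHub, and the model IDs are assumptions; substitute whichever models your API keys cover.

```python
# A minimal sketch of what a batch run does conceptually: send the same
# prompt to several models and collect the outputs for side-by-side review.
# This is not PromptHub's API -- it uses only the OpenAI Python SDK, and
# the model IDs below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize the plot of Hamlet in two sentences."
models = ["gpt-4o", "gpt-4o-mini"]  # assumed model IDs

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```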

Tips

  • API Setup: Ensure you have API keys set up for all of the model providers you are testing across.

  • Experiment: Feel free to adjust parameters and variables to see how changes affect the output (see the sketch below).

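To go with the Experiment tip, here is a small sketch that re-runs one prompt at several temperatures so you can compare how a sampling parameter changes the output. The same caveats apply: this calls the OpenAI Python SDK directly, not PromptHub, and the model ID is an assumption.

```python
# Re-run the same prompt at different temperatures to see how a sampling
# parameter changes the output. Not PromptHub's API; the model ID below
# is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a one-line tagline for a coffee shop."
for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model ID
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```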