LLM and GenAI projects are versatile and can serve a wide range of use cases. Below are a few examples of what this builder can be used for. You can also download the templates for these sample use cases, then upload them and try them out yourself!
You can find all the templates here.
You can also try out these templates or a custom use case for free through the LLMs and GenAI Playground on our website.
Say you've got an LLM you'd like to test and rate. You can build a prompt and response form and equip it with the tools annotators need to rate the responses. Below, you can see the creation process using the existing template, as well as what it looks like in the builder.
In the builder, each component is laid out the way it will look in the final form. You can modify each component's properties, such as titles or placeholder values, to fine-tune your Chat Rating form the way you need it. Once you've published the form and generated the items, each item can be used to make API calls to the selected LLM. Annotators can then rate each prompt and response separately. You can configure the rating system to capture any kind of feedback, from a star rating to a set of predefined options, and you can even add a text field so annotators can enter further, more detailed information.
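As a rough sketch of what one published Chat Rating item might capture, here is a minimal Python model. All names, scales, and fields here are hypothetical illustrations; the builder defines the actual schema for you:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical feedback scales; in the builder you configure these freely.
STAR_SCALE = range(1, 6)                 # 1-5 stars
CHOICE_SCALE = ("Bad", "Okay", "Good")   # predefined options

@dataclass
class ChatRatingItem:
    """One prompt/response pair produced by the selected LLM."""
    prompt: str
    response: str
    stars: Optional[int] = None    # star-based feedback
    choice: Optional[str] = None   # multiple-choice feedback
    notes: str = ""                # free-text field for extra detail

    def rate(self, stars=None, choice=None, notes=""):
        """Record an annotator's feedback, validating it against the scales."""
        if stars is not None and stars not in STAR_SCALE:
            raise ValueError(f"stars must be in {list(STAR_SCALE)}")
        if choice is not None and choice not in CHOICE_SCALE:
            raise ValueError(f"choice must be one of {CHOICE_SCALE}")
        self.stars, self.choice, self.notes = stars, choice, notes

item = ChatRatingItem("What is RLHF?", "RLHF stands for ...")
item.rate(stars=4, notes="Mostly accurate, a bit terse.")
```

The key design point the sketch mirrors is that each rating channel (stars, choices, free text) is independent, which is why you can mix and match components in the builder.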
Another way you can make use of this builder is by creating your own form for Reinforcement Learning from Human Feedback (RLHF).
Once the form is created, annotators can start the reinforcement process by generating images. In this example, annotators can enter a prompt and choose from a list of art styles. Then, they can rate each result on how accurately it matches the prompt and how well it was generated.
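As a minimal sketch of how those two ratings could be combined into a single feedback signal for reinforcement learning: the field names, the equal weighting, and the 1–5 scale below are assumptions for illustration, not part of the builder.

```python
ART_STYLES = ("Watercolor", "Pixel art", "Photorealistic")  # hypothetical list

def feedback_score(accuracy: int, quality: int, scale: int = 5) -> float:
    """Combine two annotator ratings (1..scale) into a 0..1 reward value."""
    for r in (accuracy, quality):
        if not 1 <= r <= scale:
            raise ValueError(f"rating {r} outside 1..{scale}")
    # Equal weighting; a project might weight prompt accuracy more heavily.
    return ((accuracy - 1) + (quality - 1)) / (2 * (scale - 1))

submission = {
    "prompt": "a lighthouse at dusk",
    "style": ART_STYLES[0],
    "accuracy": 4,   # how closely the image matched the prompt
    "quality": 5,    # how well the image was generated
}
reward = feedback_score(submission["accuracy"], submission["quality"])
# reward is 0.875 on a 0..1 scale
```

Normalizing to 0..1 keeps the signal scale-independent, so changing the form's rating scale later doesn't change how downstream training code interprets the feedback.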
Using the LLMs and GenAI builder, you can create a form that'll allow annotators to simultaneously send a prompt to two different models and rate their responses accordingly.
You can choose to add any kind of component that'll allow you to build the most appropriate rating system for your form.
Once the form's ready, annotators will be able to send a prompt to two LLMs and view their responses side by side. Then, using the evaluation tools you've set up, they can rate each response and compare the two.
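The side-by-side flow can be sketched as follows. The two "models" here are stubbed-out placeholders, since in the real form the builder handles the calls to the LLMs you selected; the function names and the simple higher-rating-wins rule are assumptions for illustration.

```python
from typing import Callable, Dict

# Stub "models": placeholders for the two LLMs the builder would call.
model_a: Callable[[str], str] = lambda p: f"[model A answer to: {p}]"
model_b: Callable[[str], str] = lambda p: f"[model B answer to: {p}]"

def side_by_side(prompt: str) -> Dict[str, str]:
    """Send one prompt to both models so the responses can be shown together."""
    return {"A": model_a(prompt), "B": model_b(prompt)}

def preferred(ratings: Dict[str, int]) -> str:
    """Return which model the annotator rated higher, or 'tie'."""
    if ratings["A"] == ratings["B"]:
        return "tie"
    return max(ratings, key=ratings.get)

responses = side_by_side("Summarize the water cycle.")
verdict = preferred({"A": 4, "B": 5})   # annotator preferred model B
```

Collecting both responses for the same prompt before any rating happens is what makes the comparison fair: both models see identical input, and the annotator judges them under the same evaluation tools.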