Templates
Multimodal projects are versatile and can support a wide range of use cases. When creating your project, you can either build a form from scratch or select from a number of available templates.
Use case 1: Chat Rating
Say you've got an LLM you'd like to test out and rate. You can build a prompt-and-response form and equip it with the tools annotators need to rate the responses. Below, you can see the creation process using the existing template, as well as what the form looks like in the builder.
In the builder, each component is laid out the way it will look in the final form. You can modify each component's properties, such as titles or placeholder values, to fine-tune your Chat Rating form to your needs. Once you've published the form and generated the items, each item can then be used to make API calls to the selected LLM. Annotators can then rate each prompt and response separately. You can adjust the rating system to allow for any kind of feedback, from a number of stars to a few select options. You may even add a text field for annotators to enter further, more detailed information.
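For illustration, here is a minimal sketch of what that flow might look like in code: each item's prompt is sent to the selected model, and the annotator's rating is recorded alongside the response. The OpenAI Python client stands in for whichever LLM provider you select, and the `items` list and rating schema are hypothetical placeholders, not this platform's actual API.

```python
# Sketch: call the selected LLM for each generated item, then attach an
# annotator rating. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical items generated from the published form.
items = [
    {"prompt": "Summarize the plot of Hamlet in two sentences."},
    {"prompt": "Explain recursion to a ten-year-old."},
]

for item in items:
    response = client.chat.completions.create(
        model="gpt-4o",  # the LLM selected when the form was published
        messages=[{"role": "user", "content": item["prompt"]}],
    )
    item["response"] = response.choices[0].message.content

    # An annotator's feedback, matching the components added in the builder:
    # a star rating plus an optional free-text field for details.
    item["rating"] = {"stars": 4, "comment": "Accurate but slightly verbose."}
```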
Use case 2: RLHF for image generation
Another way to use this builder is to create your own form for Reinforcement Learning from Human Feedback (RLHF).
Once the form is created, annotators can start the reinforcement process by generating images. In this example, annotators can enter a prompt and choose from a list of art styles. They can then rate each result based on how accurately it matches the prompt and how well it was generated.
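As a rough sketch of that loop in code, the example below combines the annotator's prompt with a chosen art style, generates an image, and records the two ratings. The OpenAI images API is used as a stand-in generator; the style list and feedback schema are hypothetical, not part of this platform.

```python
# Sketch: generate an image from a prompt plus a selected art style, then
# record two ratings (prompt accuracy and generation quality).
from openai import OpenAI

client = OpenAI()

# Hypothetical style options offered by the form.
ART_STYLES = ["watercolor", "pixel art", "photorealistic", "line drawing"]

prompt = "A lighthouse on a cliff at sunset"
style = ART_STYLES[0]  # selected by the annotator in the form

result = client.images.generate(
    model="dall-e-3",
    prompt=f"{prompt}, in a {style} style",
    n=1,
    size="1024x1024",
)

# The annotator's two ratings, mirroring the evaluation described above.
feedback = {
    "image_url": result.data[0].url,
    "prompt_accuracy": 5,      # how closely the image matches the prompt
    "generation_quality": 4,   # how well the image was rendered
}
```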
Use case 3: Model comparison
Using the Multimodal builder, you can create a form that allows annotators to send the same prompt to two different models simultaneously and rate their responses accordingly.
You can add any combination of components to build the rating system that best fits your form.
Once the form is ready, annotators can submit a prompt to two LLMs and view the responses side by side. Then, using the evaluation tools you've set up, they can rate and compare the responses.
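A minimal sketch of that side-by-side comparison is shown below: the same prompt goes to two models, and the annotator's verdict is recorded. The model names and preference schema are illustrative, and the OpenAI client is a stand-in for however your two selected LLMs are actually called.

```python
# Sketch: send one prompt to two models and record which response the
# annotator prefers.
from openai import OpenAI

client = OpenAI()
MODELS = ("gpt-4o", "gpt-4o-mini")  # the two models chosen in the builder

prompt = "Write a haiku about debugging."

responses = {
    model: client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    for model in MODELS
}

# The annotator views both responses side by side and records a verdict
# using whatever evaluation components the form defines.
comparison = {
    "prompt": prompt,
    "responses": responses,
    "preferred_model": MODELS[0],
    "reason": "More vivid imagery and correct syllable count.",
}
```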