# 🧚 YiVal

Website · Producthunt · Documentation

⚡ Build any Generative AI application with evaluation and improvement ⚡

👉 Follow us: Twitter | Discord

## 🤔 What is YiVal?

YiVal is a GenAI-Ops framework that lets you iteratively tune your Generative AI application's model metadata, parameters, prompts, and retrieval configs all at once, using your preferred test dataset generation methods, evaluation algorithms, and improvement strategies.

Check out our quickstart guide! →

## 📣 What's Next?

Expected features in September:

- Add ROUGE and BERTScore evaluators
- Add support for Midjourney
- Add support for LLaMA2-70B, LLaMA2-7B, and Falcon-40B
- Support LoRA fine-tuning for open-source models

## 🚀 Features

|  | 🔧 Experiment Mode | 🤖 Agent Mode (Auto-prompting) |
| --- | --- | --- |
| **Workflow** | Define your AI/ML application ➡️ Define test dataset ➡️ Evaluate 🔄 Improve ➡️ Prompt-related artifacts built ✅ | Define your AI/ML application ➡️ Auto-prompting ➡️ Prompt-related artifacts built ✅ |

**Features**

- 🌟 Streamlined prompt development process
- 🌟 Support for multimedia and multimodal use cases
- 🌟 Support for CSV upload and GPT-4-generated test data
- 🌟 Dashboard tracking latency, cost, and evaluator results
- 🌟 Human (RLHF) and algorithm-based improvers
- 🌟 Service with detailed web view
- 🌟 Customizable evaluators and improvers
- 🌟 No-code experience for building Gen-AI applications
- 🌟 Watch your Gen-AI application be born and improve with just one click

**Demos**

- Animal story with MidJourney 🐯 (Open in Colab)
- Model comparison in QA ability 🌟 (Open in Colab)
- Startup company headline generation bot 🔥 (Open in Colab)
- Automate prompt generation with retrieval methods 🔥 (Open in Colab)

## Model support matrix

| Model | LLM-Evaluate | Human-Evaluate | Variation Generate | Custom func |
| --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ✅ |
| Azure | ✅ | ✅ | ✅ | ✅ |
| TogetherAI | ✅ | ✅ | ✅ | ✅ |
| Cohere | ✅ | ✅ | ✅ | ✅ |
| Huggingface | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ✅ | ✅ |
| MidJourney |  | ✅ |  | ✅ |

To support different models in custom functions (e.g., model comparison), follow our example.

To support different models in evaluators and generators, check our config.
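As a rough illustration of the custom-function idea, the sketch below runs one prompt through several models so their outputs can be compared side by side. The `call_model` helper and all names here are hypothetical stand-ins rather than YiVal's actual API; follow the linked example for the real interface.

```python
# Hypothetical sketch: comparing several models on the same prompt.
# `call_model` is a stand-in for a real provider call (OpenAI, TogetherAI, ...),
# not part of YiVal's API.

def call_model(model_name: str, prompt: str) -> str:
    """Stand-in for a provider call; a real version would hit an LLM API."""
    return f"[{model_name}] response to: {prompt}"

def compare_models(prompt: str, models: list[str]) -> dict[str, str]:
    """Run the same prompt through each model so evaluators can score
    the outputs side by side."""
    return {name: call_model(name, prompt) for name in models}

outputs = compare_models("Summarize the plot of Hamlet.", ["gpt-4", "llama-2-70b"])
for name, text in outputs.items():
    print(f"{name}: {text}")
```

In a real setup, each model's output would then be passed to the configured evaluators, which is what produces the comparison dashboard.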

## Installation

```shell
pip install yival
```

## Demo

### Multimodal Mode

YiVal has multimodal capabilities and handles generated images (AIGC) well.

Find more information in the Animal story demo we provided:

```shell
yival run demo/configs/animal_story.yml
```
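For orientation, a YiVal run config is a YAML file that names the function under test, the dataset source, and the evaluators. The fragment below is only an illustrative sketch of that general shape; the keys and values shown are assumptions, so consult `demo/configs/animal_story.yml` for the real schema.

```yaml
# Illustrative shape only -- not the actual demo config.
description: Generate short animal stories and render them with MidJourney
custom_function: demo.animal_story.animal_story   # hypothetical module path
dataset:
  source_type: machine_generated   # or a CSV upload
evaluators:
  - name: openai_prompt_based_evaluator   # hypothetical evaluator name
```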


### Basic Interactive Mode

To get started with a demo of YiVal's basic interactive mode, run the following command:

```shell
yival demo --basic_interactive
```

Once started, navigate to the following address in your web browser:

http://127.0.0.1:8073/interactive


For more details on this demo, check out the Basic Interactive Mode Demo.

### Question Answering with expected result evaluator

```shell
yival demo --qa_expected_results
```

Once started, navigate to the following address in your web browser: http://127.0.0.1:8073/


For more details, check out the Question Answering with expected result evaluator.
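The idea behind an expected-result evaluator can be sketched in a few lines: compare the model's answer against a known expected answer after light normalization. This is a conceptual illustration with made-up function names, not YiVal's evaluator implementation (which is configured through YAML), and the matching rule here (case- and whitespace-insensitive exact match) is an assumption.

```python
def matches_expected(answer: str, expected: str) -> bool:
    """Return True when the answer equals the expected result,
    ignoring case and surrounding/extra whitespace."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    return normalize(answer) == normalize(expected)

print(matches_expected("  Paris ", "paris"))   # normalization makes these equal
print(matches_expected("London", "paris"))     # different answers fail the check
```

Real evaluators typically go beyond exact matching (e.g., ROUGE or LLM-based scoring), but the pass/fail-against-expected pattern is the same.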
