Giskard logo

The testing framework dedicated to ML models, from tabular to LLMs

Scan AI models to detect risks of bias, performance issues and errors, in 4 lines of code.


Documentation • Blog • Website • Discord Community • Advisors


Install Giskard 🐒

You can install the latest version of Giskard from PyPI using pip:

pip install giskard -U

We officially support Python 3.9, 3.10 and 3.11.
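
To verify the installation, you can print the installed version from Python. A minimal sketch, assuming the package exposes the standard __version__ attribute:

import giskard

print(giskard.__version__)  # prints the installed Giskard version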

Try in Colab πŸ“™

Open Colab notebook


Giskard Architecture

Giskard is a Python library that automatically detects vulnerabilities in AI models, from tabular models to LLMs, including performance biases, data leakage, spurious correlations, hallucination, toxicity, security issues and many more.

It's a powerful tool that helps data scientists save time and effort drilling down on model issues, and produce more reliable and trustworthy models.

Scan Example

Instantaneously generate test suites for your models ⬇️

Test Suite Example

Giskard works with any model, in any environment, and integrates seamlessly with your favorite tools ⬇️


Contents

  1. πŸ€Έβ€β™€οΈ Quickstart
  2. ⭐️ Premium features
  3. ❓ FAQ
  4. πŸ‘‹ Community

πŸ€Έβ€β™€οΈ Quickstart

1. πŸ”Ž Scan your model

Here's an example of a Giskard scan on the famous Titanic survival prediction dataset:

import giskard

# Replace this with your own data & model creation.
df = giskard.demo.titanic_df()
demo_data_processing_function, demo_sklearn_model = giskard.demo.titanic_pipeline()

# Wrap your pandas DataFrame with giskard.Dataset (validation or test set, a golden dataset, etc.).
giskard_dataset = giskard.Dataset(
    df=df,  # A pandas.DataFrame that contains the raw data (before all the pre-processing steps) and the actual ground truth variable (target).
    target="Survived",  # Ground truth variable
    name="Titanic dataset", # Optional
    cat_columns=['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']  # List of categorical columns. Optional, but strongly recommended when known; inferred automatically if not provided.
)

# Wrap your model with giskard.Model. Check the dedicated doc page: https://docs.giskard.ai/en/latest/guides/wrap_model/index.html
# You can use any tabular, text or LLM model (PyTorch, HuggingFace, LangChain, etc.),
# for classification, regression & text generation.
def prediction_function(df):
    # The pre-processor can be a pipeline of one-hot encoding, imputer, scaler, etc.
    preprocessed_df = demo_data_processing_function(df)
    return demo_sklearn_model.predict_proba(preprocessed_df)

giskard_model = giskard.Model(
    model=prediction_function,  # A prediction function that encapsulates all the data pre-processing steps and that can be executed on the dataset used by the scan.
    model_type="classification",  # Either regression, classification or text_generation.
    name="Titanic model",  # Optional
    classification_labels=demo_sklearn_model.classes_,  # Their order MUST be identical to the prediction_function's output order
    feature_names=['PassengerId', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked'],  # Default: all columns of your dataset
)

✨✨✨ Then run Giskard's magical scan ✨✨✨

scan_results = giskard.scan(giskard_model, giskard_dataset)

Once the scan completes, you can display the results directly in your notebook:

display(scan_results)

If you're facing issues, check out our wrapping model & dataset docs for more information.
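
Outside of a notebook, you can also export the scan report to a standalone file. A minimal sketch, assuming your Giskard version provides the report's to_html export (the file name is illustrative):

# Write the scan report to an HTML file you can open in a browser or share.
scan_results.to_html("titanic_scan_report.html")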

2. πŸͺ„ Automatically generate a test suite

If the scan found potential issues in your model, you can automatically generate a test suite based on the vulnerabilities found:

test_suite = scan_results.generate_test_suite("My first test suite")

You can then run the test suite locally to verify that it reproduces the issues:

test_suite.run()
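
Because the suite is a regular Python object, you can also extend it with your own tests before re-running it. A minimal sketch, assuming the built-in giskard.testing.test_f1 test and the suite's add_test method behave as in recent Giskard versions (the threshold is illustrative):

import giskard.testing

# Add an F1-score test on the same model and dataset used for the scan.
test_suite.add_test(
    giskard.testing.test_f1(
        model=giskard_model,
        dataset=giskard_dataset,
        threshold=0.7,
    )
)

results = test_suite.run()
print(results.passed)  # True only if every test in the suite passed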

Test suites are reusable objects that provide a way to apply consistent checks on your models. To drill down on failing tests and get even more out of the Giskard library, we recommend heading over to the Giskard hub ⬇️

⭐️ Premium Features

The Giskard hub is Giskard's premium offering. It provides a number of additional capabilities that are not available in the open-source version of Giskard, including:

  • Advanced test generation: diagnose failing tests, debug your models and create more domain-specific tests.
  • Model comparison: compare models side by side to decide which one to promote.
  • Test hub: gather all of your team's tests in one place to collaborate more efficiently.
  • Business feedback: share your results and collect business feedback from your team.

If you are interested in learning more about Giskard's premium offering, please contact us.


1. Start the Giskard hub

To start the Giskard hub, run the following command:

pip install "giskard[hub]" -U
giskard hub start

πŸš€ That's it! Access it at http://localhost:19000

2. Upload your test suite to the Giskard hub

You can then upload the test suite created using the giskard Python library to the Giskard hub. This will enable you to:

  • Compare the quality of different models to decide which one to promote
  • Debug your tests to diagnose identified vulnerabilities
  • Create more domain-specific tests relevant to your use case
  • Share results, and collaborate with your team to integrate business feedback
  1. First, make sure the Giskard hub is installed and running.
  2. Then, start the ML worker from your notebook:

    !giskard worker start -d -k YOUR_KEY
  3. Finally, upload your test suite to the Giskard hub using the following code:

    key = "API_KEY"  # Find it in Settings in the Giskard hub
    client = giskard.GiskardClient(
        url="http://localhost:19000", key=key  # URL of your Giskard instance
    )
    
    my_project = client.create_project("my_project", "PROJECT_NAME", "DESCRIPTION")
    
    # Upload to the current project
    test_suite.upload(client, "my_project")
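
To debug failing tests in the hub, you will usually want the wrapped model and dataset available there as well. A minimal sketch, assuming they expose the same upload(client, project_key) method as the test suite:

# Upload the wrapped model and dataset to the same project.
giskard_model.upload(client, "my_project")
giskard_dataset.upload(client, "my_project")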

The Giskard hub is installed on your infrastructure.

Giskard as a company does not have access to your datasets and models, so you can keep everything private.

❓ Where can I get more help?

What is an ML worker?

Giskard executes your model using a worker that runs the model directly in your Python environment containing all the dependencies required by your model. You can execute the ML worker either from a local notebook, a Colab notebook or a terminal.

How to get the API key

Access the API key in the Settings tab of the Giskard hub.

If Giskard hub/ML worker is not installed

Go to the Install the Giskard Hub page.

If the Giskard hub is installed on an external server

    !giskard worker start -d -k YOUR_KEY -u http://ec2-13-50-XXXX.compute.amazonaws.com:19000/

For more information on uploading to your local Giskard hub

Go to the Upload an object to the Giskard hub page.

For any other questions and doubts, head over to our Discord.

πŸ‘‹ Community

We welcome contributions from the Machine Learning community! Read this guide to get started.

Join our thriving community on our Discord server.

🌟 Leave us a star, it helps the project to get discovered by others and keeps us motivated to build awesome open-source tools! 🌟

❀️ You can also sponsor us on GitHub. With a monthly sponsor subscription, you can get a sponsor badge and get your bug reports prioritized. We also offer one-time sponsoring if you want us to get involved in a consulting project, run a workshop, or give a talk at your company.
