Hindi Image Captioning Model

A demo is available on Hugging Face Spaces.

This is an encoder-decoder image captioning model that uses ViT as the encoder and GPT2-Hindi as the decoder. It is a first attempt at using ViT + GPT2-Hindi for a Hindi image captioning task. We trained the model on the Flickr8k Hindi Dataset available on Kaggle.
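As a rough illustration (not how the released checkpoint itself was produced), such an encoder-decoder can be assembled from the two pretrained checkpoints with the transformers library:

from transformers import VisionEncoderDecoderModel

# Pair the ViT image encoder with the GPT2-Hindi decoder; the cross-attention
# layers that connect them are newly initialised and must be fine-tuned.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    'google/vit-base-patch16-224',   # image encoder
    'surajp/gpt2-hindi',             # Hindi GPT-2 decoder
)
# Before training, generation-related settings (e.g. decoder_start_token_id,
# pad_token_id) would also need to be set on model.config.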

This model was trained during the Hugging Face course community week, organized by Hugging Face. The pretrained weights are available on the Hugging Face Hub under team-indain-image-caption/hindi-image-captioning.

How to use

Here is how to use this model to caption an image from the Flickr8k dataset:

import torch
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, \
                         VisionEncoderDecoderModel

# Run on GPU if one is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the image to caption
image_path = 'sample.jpg'
image = Image.open(image_path)

# Encoder/decoder checkpoints plus the fine-tuned captioning weights
encoder_checkpoint = 'google/vit-base-patch16-224'
decoder_checkpoint = 'surajp/gpt2-hindi'
model_checkpoint = 'team-indain-image-caption/hindi-image-captioning'
feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(model_checkpoint).to(device)

# Inference
sample = feature_extractor(image, return_tensors='pt').pixel_values.to(device)

# Strip the end-of-text token and keep only the first line of the decoded caption
clean_text = lambda x: x.replace('<|endoftext|>', '').split('\n')[0]

caption_ids = model.generate(sample, max_length=50)[0]
caption_text = clean_text(tokenizer.decode(caption_ids))
print(caption_text)

Training data

We trained the model on the Flickr8k Hindi Dataset available on Kaggle, a translated version of the original Flickr8k Dataset.
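The preprocessing code is not included in this README, but a minimal sketch of how one (image, Hindi caption) pair could be turned into training inputs, assuming a local copy of the dataset, might look like this (the file path and caption below are placeholders):

from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
tokenizer = AutoTokenizer.from_pretrained('surajp/gpt2-hindi')
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default

image = Image.open('flickr8k/example.jpg')      # placeholder image path
caption = 'गुलाबी पोशाक में एक छोटी लड़की'            # placeholder Hindi caption

pixel_values = feature_extractor(image, return_tensors='pt').pixel_values
labels = tokenizer(caption, return_tensors='pt',
                   padding='max_length', max_length=50).input_ids
# pixel_values feed the ViT encoder; labels are the target token ids for the
# decoder (pad positions are usually replaced with -100 so the loss ignores them).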

Training procedure

This model was trained during the Hugging Face course community week, organized by Hugging Face. Training was done on a Kaggle GPU.
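The exact training script and hyperparameters are not documented here; a minimal fine-tuning sketch with transformers' Seq2SeqTrainer, assuming the model from the assembly sketch above and a train_dataset yielding {'pixel_values', 'labels'} examples, could look like this (all hyperparameter values are illustrative):

from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments, default_data_collator

training_args = Seq2SeqTrainingArguments(
    output_dir='vit-gpt2-hindi-captioning',   # placeholder output directory
    per_device_train_batch_size=8,            # illustrative value
    num_train_epochs=3,                       # illustrative value
    fp16=True,                                # mixed precision on the Kaggle GPU
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,                  # VisionEncoderDecoderModel to fine-tune
    args=training_args,
    train_dataset=train_dataset,  # assumed dataset of pixel_values/labels pairs
    data_collator=default_data_collator,
)
trainer.train()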

Evaluation Results

Due to the long inference time, we sampled around 3,000 captions from the test set and computed METEOR and BLEU scores; a sketch of how these metrics can be computed follows the results below.

  • BLEU - 0.137
  • METEOR - 0.320
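
The scoring script is not part of this README; a minimal sketch, assuming the Hugging Face evaluate library and placeholder Hindi captions, would be (the actual evaluation setup used by the authors may differ):

import evaluate

bleu = evaluate.load('bleu')
meteor = evaluate.load('meteor')

predictions = ['एक कुत्ता घास में दौड़ रहा है']        # model-generated captions (placeholders)
references = [['एक भूरा कुत्ता घास पर दौड़ता है']]     # reference Hindi captions (placeholders)

print(bleu.compute(predictions=predictions, references=references)['bleu'])
print(meteor.compute(predictions=predictions, references=references)['meteor'])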

Team Members