Transformer-based models have gained immense popularity in recent years, and many state-of-the-art algorithms in natural language processing are built on transformers. BERT is an attention-based model that performs very well on text classification. To apply such models to the Indian judiciary, incorporating domain knowledge is necessary. Legal-BERT was developed using the domain knowledge of US and EU legal documents. In this paper, we perform a thorough comparison of BERT and Legal-BERT to evaluate their effectiveness in the Indian legal system. We obtain text embeddings from both models and use them to predict the judgements of Indian legal cases with several machine learning and deep learning algorithms. We also propose a solution to the limited input token size of BERT models. Across all algorithms, Legal-BERT outperforms the base BERT model, indicating that it can be used in the Indian legal context as well.
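The abstract mentions a workaround for BERT's 512-token input limit but does not spell it out here. A common approach for long legal judgements is to split the token sequence into overlapping chunks that each fit the model, embed every chunk, and pool the per-chunk embeddings into one document vector. A minimal sketch of that idea, where the chunk size, stride, and mean-pooling choice are illustrative assumptions rather than the paper's exact method:

```python
from typing import List

def chunk_tokens(tokens: List[int], max_len: int = 510,
                 stride: int = 128) -> List[List[int]]:
    """Split a token-ID sequence into overlapping chunks.

    max_len defaults to 510 so that, after adding the [CLS] and [SEP]
    special tokens, each chunk fits BERT's 512-token limit. The overlap
    (stride) preserves context across chunk boundaries.
    """
    if len(tokens) <= max_len:
        return [tokens]
    chunks = []
    start = 0
    step = max_len - stride  # advance by less than max_len to overlap
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # the last chunk already reaches the end
        start += step
    return chunks

def mean_pool(vectors: List[List[float]]) -> List[float]:
    """Average per-chunk embeddings into a single document embedding."""
    n = len(vectors)
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]
```

Each chunk would then be passed through BERT or Legal-BERT to obtain an embedding (e.g. the [CLS] vector), and `mean_pool` would combine them into one fixed-size feature vector for the downstream classifiers. Mean pooling is one simple choice; max pooling or a small recurrent layer over chunk embeddings are common alternatives.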
stelios357/Predicting-Indian-Legal-Case-Judgements