This project aims to detect potential deception through facial recognition and machine learning techniques. By analyzing facial expressions and employing advanced models, it seeks to provide insights into a person's truthfulness.
The project uses transfer learning and an LSTM network to analyze facial expressions and detect potential lies. The system is trained on a dataset of facial images and expressions to recognize patterns associated with deception, with the InceptionV3 model serving as the feature extractor.
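As a rough sketch of the feature-extraction step, the pre-trained InceptionV3 backbone can be loaded without its classification head so that each frame is reduced to a fixed-length feature vector (the frame data and `weights=None` below are placeholders to keep the example self-contained and offline; in practice you would pass `weights="imagenet"`):

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# Load InceptionV3 without its classification head; pooling="avg" yields a
# 2048-dim feature vector per frame. weights=None keeps this sketch offline;
# for actual transfer learning you would use weights="imagenet".
base = InceptionV3(weights=None, include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone

# A batch of dummy frames stands in for cropped face images.
frames = np.random.rand(4, 299, 299, 3).astype("float32")
features = base.predict(preprocess_input(frames * 255.0), verbose=0)
print(features.shape)  # (4, 2048)
```

These per-frame vectors are what a downstream sequence model consumes.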
The dataset comes from the Real-Life Deception Detection project by Mohamed Abouelenien at the University of Michigan. It consists of real-life trial video clips featuring statements made by defendants and exonerees in crime-related TV episodes. Each clip is labeled as deceptive or truthful based on the trial outcome: a guilty verdict, a not-guilty verdict, or an exoneration. Because the recordings were captured in natural settings, the dataset supports multimodal approaches to deception detection and provides valuable visual and linguistic patterns for discriminating between liars and truth-tellers.
- Python
- TensorFlow and Keras
- OpenCV for image processing
- Transfer learning with InceptionV3 for feature extraction
- Facial expression recognition
- Analysis of deception patterns
- Utilization of advanced neural network architectures
- Transfer Learning: The pre-trained InceptionV3 network is used as a feature extractor, so the model benefits from large-scale visual training without learning low-level features from scratch.
- LSTM (Long Short-Term Memory): Models the temporal sequence of per-frame facial features, capturing how expressions evolve over the course of a clip.
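The two components above can be combined in a sequence classifier along these lines; the sequence length (20 frames), layer sizes, and random stand-in data below are assumptions for illustration, with each frame already reduced to a 2048-dim InceptionV3 feature vector:

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical sequence model: each video clip is a sequence of 20 frames,
# each frame represented by a 2048-dim InceptionV3 feature vector.
model = models.Sequential([
    layers.Input(shape=(20, 2048)),
    layers.LSTM(64),                        # summarize the frame sequence
    layers.Dense(1, activation="sigmoid"),  # deceptive (1) vs. truthful (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for extracted features and verdict-based labels.
x = np.random.rand(8, 20, 2048).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
preds = model.predict(x, verbose=0)
print(preds.shape)  # (8, 1) probabilities in [0, 1]
```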
- The project evaluates how effectively the model detects deception from facial expressions, reporting its accuracy and highlighting areas for improvement.
- Feel free to open issues or submit pull requests if you would like to contribute to this project.