Welcome to the BionicScholar repository! BionicScholar is a platform designed to simplify and enhance the research reading experience by summarizing research papers and applying Bionic Reading techniques for easier comprehension.
The BionicScholar platform consists of both the frontend (user interface) and the backend (API services). The frontend allows users to upload research papers (PDFs), view concise summaries, and read using Bionic Reading, while the backend provides the summarization service and text processing via a Large Language Model (LLM).
Bionic Reading is a technique that enhances reading speed and retention by bolding key letters within words, improving focus and comprehension.
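As a rough illustration of the technique (not BionicScholar's actual implementation), the sketch below bolds a leading portion of each word; the real fixation ratio and markup used by the platform may differ:

```python
import re

def bionic_markup(text: str, ratio: float = 0.5) -> str:
    """Wrap the leading part of each word in <b> tags, Bionic Reading style.

    Illustrative sketch only; BionicScholar's actual algorithm may use a
    different fixation ratio or output format.
    """
    def bold_prefix(match: re.Match) -> str:
        word = match.group(0)
        split = max(1, round(len(word) * ratio))
        return f"<b>{word[:split]}</b>{word[split:]}"

    # Only transform alphabetic runs; punctuation and digits are left untouched.
    return re.sub(r"[A-Za-z]+", bold_prefix, text)

# Prints the sentence with the leading part of every word wrapped in <b> tags.
print(bionic_markup("Bionic Reading bolds the first letters of each word."))
```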
- Paper Summarization: Upload a research paper (PDF) and the platform generates a concise summary of its content.
- Bionic Reading View: Users can switch to Bionic Reading mode, which highlights essential portions of the text to enhance readability and focus.
- LLM-Powered Summarization: The backend uses a Large Language Model (LLM) to summarize research papers accurately.
- Responsive UI: The frontend is fully responsive and optimized for both desktop and mobile devices.
- RESTful API: The backend exposes a RESTful API for tasks such as summarization and PDF parsing (an example request is sketched just below).
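For illustration, a client could call the summarization API roughly like this. The endpoint path, field name, and response shape below are assumptions for the sake of the example, not the repository's documented contract:

```python
import requests

API_BASE = "http://localhost:8000"  # backend dev server (see setup below)

with open("paper.pdf", "rb") as pdf:
    response = requests.post(
        f"{API_BASE}/api/summarize/",                      # assumed route
        files={"file": ("paper.pdf", pdf, "application/pdf")},
        timeout=120,                                       # LLM summarization can be slow
    )

response.raise_for_status()
print(response.json())  # e.g. {"summary": "..."} -- assumed response shape
```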
Frontend:
- React.js: Core library for building the user interface.
- Tailwind CSS: Utility-first CSS framework for styling.
- TypeScript: Provides static type checking and improved code maintainability.
- Axios: Used for making HTTP requests to the backend services.

Backend:
- Django: High-level Python web framework for building the backend.
- Django REST Framework (DRF): For building RESTful APIs (a hypothetical view sketch follows this list).
- Large Language Model (LLM): A model integrated into the backend for summarizing research papers.
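To make the backend stack concrete, here is a minimal, hypothetical sketch of how a DRF view wrapping an LLM summarizer could look. The class name, route, and helper are illustrative only and not taken from this codebase:

```python
from rest_framework.parsers import MultiPartParser
from rest_framework.response import Response
from rest_framework.views import APIView


def summarize_text(text: str) -> str:
    """Placeholder for the LLM call (e.g. a hosted model); assumed, not the repo's code."""
    return text[:500]  # a real implementation would return a generated summary


class SummarizeView(APIView):
    parser_classes = [MultiPartParser]  # accept multipart PDF uploads

    def post(self, request):
        uploaded_pdf = request.FILES["file"]
        # Real code would parse the PDF properly; this keeps the sketch self-contained.
        raw_text = uploaded_pdf.read().decode("utf-8", errors="ignore")
        return Response({"summary": summarize_text(raw_text)})
```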
Prerequisites:
- Node.js (v14 or above) for the frontend.
- npm or yarn as the package manager for the frontend.
- Python (v3.8 or above) for the backend.
Clone the repository and navigate to the BionicScholar directory:
```bash
git clone https://github.com/tamasvencel/BionicScholar.git
cd BionicScholar
```
Using Docker:
- Build the frontend-dev Docker image: `docker-compose build frontend-dev`
- Run the frontend-dev Docker container: `docker-compose up frontend-dev`

Now you can access the frontend via: http://localhost:80/
Without using Docker:
- Go into the frontend directory: `cd frontend`
- Install dependencies: `yarn install`
- Start the development server: `yarn dev`

Now you can access the frontend via: http://localhost:80/
- Go into the backend directory: `cd backend`
- Create a `.env` file for storing environment variables (a sketch of how these values might be loaded follows this list):

  ```
  SECRET_KEY = 'your_django_secret_key'
  HUGGINGFACE_API_KEY = your_hugging_face_api_key
  ```

- Go back to the root directory: `cd ..`
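As a sketch of how the backend's `settings.py` might consume these variables, assuming a loader such as python-dotenv is used (the project may wire this up differently):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file into the process environment

SECRET_KEY = os.environ["SECRET_KEY"]
HUGGINGFACE_API_KEY = os.getenv("HUGGINGFACE_API_KEY")  # used to authenticate LLM calls
```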
Using Docker:
- Build the backend-dev Docker image: `docker-compose build backend-dev`
- Run the backend-dev Docker container: `docker-compose up backend-dev`

Now your backend is running on http://localhost:8000
Without using Docker:
- Navigate to the backend directory: `cd backend`
- Create a virtual environment and activate it:

  ```
  python -m venv venv
  venv\Scripts\activate      # on Windows
  source venv/bin/activate   # on macOS/Linux
  ```

- Install the required dependencies: `pip install -r requirements.txt`
- (macOS only) If you have difficulty importing libmagic for the python-magic package, see this Stack Overflow Q&A: https://stackoverflow.com/questions/73398716/difficulty-importing-module-in-python-that-was-installed-via-homebrew-on-m1-pro
- Run migrations: `python manage.py migrate`
- (Only needed if you are not using the Windows operating system) Tesseract OCR: Tesseract is required for Optical Character Recognition (OCR) so the backend can extract text from scanned PDFs. To install it, download the executable from the official Tesseract repository and set it up in the `backend/app/` folder (see the OCR sketch at the end of this section).
- Start the server: `python manage.py runserver`

Now your backend is running on http://localhost:8000
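For reference, here is a minimal sketch of how the bundled Tesseract binary could be invoked for OCR, assuming pytesseract and Pillow are used; the paths and helper name are assumptions, not the project's actual code:

```python
from pathlib import Path

import pytesseract
from PIL import Image

# Point pytesseract at the executable placed under backend/app/ (location assumed).
TESSERACT_DIR = Path(__file__).resolve().parent / "app"
pytesseract.pytesseract.tesseract_cmd = str(TESSERACT_DIR / "tesseract")  # assumed binary name


def ocr_page(image_path: str) -> str:
    """Run OCR on a single page image rendered from a scanned PDF."""
    return pytesseract.image_to_string(Image.open(image_path))
```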