CoDSPy is an intelligent code optimization system that combines AI analysis, automated refactoring, and test generation into a single workflow. Built with DSPy and Gradio, it transforms raw code into optimized, test-covered implementations through Chain-of-Thought (CoT) and ReAct reasoning.
- AI-Powered Code Analysis: Detects potential issues using CoT and ReAct reasoning
- Smart Optimization: Suggests and implements code improvements
- Test Generation: Creates comprehensive test cases and test code
- Interactive Interface: Gradio-based web UI with real-time results
- Local AI Integration: Runs on Ollama with custom LLM models
- Multiple Implementations: Supports both CoT and ReAct approaches
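The wiring between DSPy and the local model lives in each version script. As a rough sketch (not the repository's actual code), pointing DSPy at a locally served Ollama model might look like this; the model tag and API base are assumptions:

```python
import dspy

# Assumed local Ollama endpoint and model tag.
# v1 uses CodeLlama 7B, while v2 and v3 use Llama 3.2:3B.
lm = dspy.LM("ollama_chat/llama3.2:3b", api_base="http://localhost:11434", api_key="")
dspy.configure(lm=lm)
```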
.
├── README.md # Project documentation
├── testcodes.py # Sample test codes for evaluation
├── v1_CoT_CodeLlama.py # CoT implementation with CodeLlama 7B
├── v2_CoT_Llama.py # CoT implementation with Llama 3.2:3B
└── v3_ReAct.py # ReAct implementation with Llama 3.2:3B
- Python 3.8+
- Ollama installed locally
- A compatible LLM model pulled in Ollama (e.g., CodeLlama 7B or Llama 3.2:3B)
git clone https://github.com/yourusername/CoDSPy.git
cd CoDSPy
pip install -r requirements.txt
- Start Ollama service:
ollama serve
- Run the desired implementation:
# For CoT with CodeLlama:7b
python v1_CoT_CodeLlama.py
# For CoT with Llama 3.2:3b
python v2_CoT_Llama.py
# For ReAct with Llama 3.2:3b
python v3_ReAct.py
- Access the interface at http://localhost:7860
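Each script builds its own Gradio app; the sketch below only illustrates how such a code-in, results-out interface can be wired. The function name, labels, and outputs are illustrative, not the project's actual ones:

```python
import gradio as gr

def optimize(code: str):
    # Placeholder for the analyze -> optimize -> generate-tests pipeline.
    return "analysis...", "optimized code...", "generated tests..."

demo = gr.Interface(
    fn=optimize,
    inputs=gr.Code(language="python", label="Input code"),
    outputs=[
        gr.Textbox(label="Analysis"),
        gr.Code(language="python", label="Optimized code"),
        gr.Code(language="python", label="Generated tests"),
    ],
)
demo.launch(server_port=7860)  # serves the UI at http://localhost:7860
```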
- Code Analysis Phase:
  - Syntax inspection
  - Performance evaluation
  - Best practices verification
- Optimization Phase:
  - Code refactoring
  - Efficiency improvements
  - Readability enhancements
- Test Generation:
  - Edge case identification
  - Test case creation
  - Unit test generation
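In DSPy terms, these three phases map naturally onto chained modules. The sketch below is only an illustration of that structure under assumed signature and field names, not the repository's actual code:

```python
import dspy

class AnalyzeCode(dspy.Signature):
    """Identify syntax, performance, and best-practice issues in the code."""
    code = dspy.InputField()
    analysis = dspy.OutputField()

class OptimizeCode(dspy.Signature):
    """Refactor the code for efficiency and readability, guided by the analysis."""
    code = dspy.InputField()
    analysis = dspy.InputField()
    optimized_code = dspy.OutputField()

class GenerateTests(dspy.Signature):
    """Write unit tests covering normal and edge cases for the optimized code."""
    optimized_code = dspy.InputField()
    tests = dspy.OutputField()

class CodeOptimizer(dspy.Module):
    def __init__(self):
        super().__init__()
        # Chain-of-Thought prompting for each phase (the ReAct variant swaps the module type).
        self.analyze = dspy.ChainOfThought(AnalyzeCode)
        self.improve = dspy.ChainOfThought(OptimizeCode)
        self.test = dspy.ChainOfThought(GenerateTests)

    def forward(self, code):
        analysis = self.analyze(code=code).analysis
        optimized = self.improve(code=code, analysis=analysis).optimized_code
        tests = self.test(optimized_code=optimized).tests
        return dspy.Prediction(analysis=analysis, optimized_code=optimized, tests=tests)
```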
| Component | Technology | Description |
|---|---|---|
| AI Framework | DSPy | CoT and ReAct reasoning |
| Language Model | Ollama/Llama3 | Local LLM operations |
| Web Interface | Gradio | User-friendly code editor |
| Processing | CoT/ReAct | Reasoning approaches |
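For the ReAct variant (v3), the same pipeline idea applies, but the reasoning module can interleave thought steps with tool calls. A hedged sketch, assuming a hypothetical `run_snippet` tool that is not part of the repository:

```python
import dspy

def run_snippet(code: str) -> str:
    """Hypothetical tool: try executing a snippet and report the outcome."""
    try:
        exec(code, {})
        return "executed without errors"
    except Exception as exc:
        return f"raised {type(exc).__name__}: {exc}"

# ReAct interleaves reasoning with tool calls instead of a single chain of thought.
# (Assumes dspy.configure(...) has already pointed DSPy at a model, as in the earlier sketch.)
react_optimizer = dspy.ReAct("code -> optimized_code, tests", tools=[run_snippet])
result = react_optimizer(code="def add(a, b): return a + b")
```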