A code documentation toolkit for analyzing code structure and generating documentation that is LLM-friendly yet human-readable and lovable.
- Code structure analysis using Python's AST (sketched below)
- Configurable documentation generation (MyST/Markdown)
- Automatic type annotation resolution
- Cross-component relationship mapping
- Example extraction from docstrings and tests
- Flexible configuration via `pyproject.toml`
- LLM-assisted documentation analysis & generation
- LLM-assisted documentation testing
- LLM-assisted documentation validation
- LLM-assisted documentation deployment
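
The AST-based structure analysis can be illustrated in isolation. The sketch below uses only Python's standard `ast` module to pull out top-level classes, functions, and their docstrings; it shows the general technique, not chewed's internal implementation.

```python
# Minimal sketch of AST-based structure extraction using only the standard
# library. Illustrates the general technique, not chewed's internal API.
import ast


def summarize_module(path: str) -> dict:
    """Collect top-level classes and functions together with their docstrings."""
    with open(path, "r", encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)

    summary = {"classes": [], "functions": []}
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            summary["classes"].append({"name": node.name, "doc": ast.get_docstring(node)})
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            summary["functions"].append({"name": node.name, "doc": ast.get_docstring(node)})
    return summary


if __name__ == "__main__":
    # "example.py" is a placeholder path; point it at any module you like.
    print(summarize_module("example.py"))
```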
```bash
# Install with uv (a pip substitute)
python3 -m pip install uv
uv pip install git+https://github.com/puroman/chewed.git

# Editable install for contributors
git clone https://github.com/puroman/chewed.git
cd chewed && uv pip install -e .
```
```bash
# Generate docs for the current project (default output dir)
chew . --output docs/

# Analyze a specific package
chew ./my_module --output docs/ --verbose
chew requests --output docs/requests --local
```
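
The same CLI invocations can be driven from a build script. Below is a minimal sketch using Python's `subprocess`; it assumes `chew` is on your `PATH` and reuses the exact command shown above, with illustrative paths.

```python
# Minimal sketch: regenerate docs from a build script by shelling out to the
# chew CLI. Assumes `chew` is installed and on PATH; paths are illustrative.
import subprocess
import sys


def build_docs(package_path: str = ".", output_dir: str = "docs/") -> None:
    result = subprocess.run(
        ["chew", package_path, "--output", output_dir],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise SystemExit(result.returncode)
    print(f"Documentation written to {output_dir}")


if __name__ == "__main__":
    build_docs()
```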
```python
from chewed import analyze_package, generate_docs

# Experimental analysis pipeline
results = analyze_package("mypackage")
generate_docs(results, output_format="myst")
```
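
For repeatable runs, the two calls above can be wrapped in a small script. This is a minimal sketch that only assumes the `analyze_package`/`generate_docs` usage shown above; the argument parsing is illustrative.

```python
# Minimal sketch wrapping the experimental pipeline in a script.
# Only the two calls shown above are assumed; CLI arguments are illustrative.
import argparse

from chewed import analyze_package, generate_docs


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate docs for a package")
    parser.add_argument("package", help="Package name or path to analyze")
    parser.add_argument("--format", default="myst", choices=["myst", "markdown"])
    args = parser.parse_args()

    results = analyze_package(args.package)
    generate_docs(results, output_format=args.format)


if __name__ == "__main__":
    main()
```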
Note: chewed is a research prototype - interfaces may evolve as we explore new documentation paradigms, LLM-assisted workflows, and agentic automation.
Add to `pyproject.toml`:
```toml
[tool.chewed]
output_format = "myst"
exclude_patterns = ["tests/*"]
known_types = { "DataFrame" = "pandas.DataFrame" }
```
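
To double-check what lives under `[tool.chewed]`, you can read the table back with the standard library. The sketch below uses Python 3.11+'s `tomllib` and is purely illustrative; it is not chewed's own configuration loader.

```python
# Minimal sketch: inspect the [tool.chewed] table in pyproject.toml with the
# standard library (Python 3.11+). Illustrative only, not chewed's loader.
import tomllib

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

config = pyproject.get("tool", {}).get("chewed", {})
print(config.get("output_format"))     # "myst" for the example above
print(config.get("exclude_patterns"))  # ["tests/*"]
print(config.get("known_types"))       # {"DataFrame": "pandas.DataFrame"}
```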
| Setting | Description |
|---|---|
| `output_format` | Documentation format (`myst`/`markdown`) |
| `exclude_patterns` | File patterns to ignore |
| `known_types` | Type annotation simplifications |
```
chewed/
├── src/
│   └── chewed/        # Core research implementation
│       ├── analysis/  # AST processing components
│       └── formats/   # Output format handlers
├── tests/             # Experimental validation
└── pyproject.toml
```
We welcome collaborations, especially from research teams! We would like to work on the following topics:
- Benchmarking methodologies
- Documentation patterns research
- Experimental design principles
- LLM-assisted and LLM-led documentation workflows
- Natural language documentation generation
- On-demand documentation generation
- Usage examples generation
Please see our contribution guidelines for more details.
MIT Licensed | Part of ongoing research into API documentation systems and agentic workflow automation