JIT Completion

An implementation of just-in-time completion of Python functions, powered by LLMs and metaprogramming.

You have probably heard of JIT compilation, where computer code is compiled at runtime. But what if the computer code is also generated at runtime?

Sounds like a horrible idea. Let's do that!

How to use

To use JIT Completion, clone this repository and copy the jit_completion/ directory into your project, as I didn't bother submitting a PyPI package...

openai>=1 is required. Tested on Python >= 3.10.

The jit_completion module provides two decorators: llm_complete() and llm_mock().

  • llm_complete autocompletes the function definition based on the existing code. The LLM is called exactly once, when the function is defined/imported.
  • llm_mock tries to mimic the behaviour of the function as described in its docstring and annotations. The LLM is called every time the decorated function is called.

Some usage examples are given below.

llm_complete

from jit_completion import llm_complete
import openai
openai.api_key = "sk-..."
# alternatively, set OPENAI_API_KEY environment variable

@llm_complete()
def draw_cows(nums: int) -> str:
    """
    Draw a herd of cows that looks like what you get from cowsay as ASCII art.
    Args:
    - nums (int): specify how many cows should stand side-by-side. 
    """
    many_cows = 'lots of cows, please implement'
    # Cows stand side by side. 
    # Pay attention to spaces and padding!
    # Remember to ADD PADDING to each line of the cow ASCII art, 
    # so that len(line) for each line is equal! Otherwise it looks wrong!
    return many_cows

if __name__ == '__main__':
    print(draw_cows(2))

# Output:
#  ^__^                   ^__^             
#  (oo)\_______           (oo)\_______     
#  (__)\       )\/\       (__)\       )\/\ 
#      ||----w |              ||----w |    
#      ||     ||              ||     ||    

llm_complete sets a raw attribute on the decorated function, containing the original LLM output. You can print the LLM-generated code with print(func.raw).
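
For example, to inspect what the LLM generated for draw_cows above:

print(draw_cows.raw)  # prints the original LLM output for the completed function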

llm_mock

import time
from jit_completion import llm_mock

@llm_mock()
def on_this_day(date: str) -> str:
    """
    Return one (and only one) short description
    of a random historical event that happened on this date.
    """
    return "A random incident from the history"

print(on_this_day("March 14th"))
print(on_this_day(time.strftime('%m-%d')))

Check out examples/ for more (silly) use cases. You may also change the model and the prompt template by passing kwargs into llm_complete()/llm_mock(); read jit_completion/prompts.py for sample prompts. A rough sketch is given below.
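
The keyword names in the following sketch (model, prompt_template) are assumptions made for illustration, not confirmed parameter names; check the actual decorator signatures and jit_completion/prompts.py for the real ones. An override might look something like:

from jit_completion import llm_complete

# NOTE: "model" and "prompt_template" are hypothetical keyword names,
# used here purely for illustration; consult the decorator signatures
# in jit_completion/ for the actual parameters.
MY_TEMPLATE = "Complete the body of the following Python function:\n{code}"

@llm_complete(model="gpt-4o-mini", prompt_template=MY_TEMPLATE)
def greet(name: str) -> str:
    """Return a short, friendly greeting for the given name."""
    return "please implement"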

Limitations

This module is written purely for fun, as an exercise to get familiar with Python decorators. You should probably not use it in production.

Several known limitations/issues:

  • Only the OpenAI API is supported.
  • Chaining several decorators together might break.
  • The decorators are not context-aware: they only have access to the decorated function, so they will not know which modules are imported or what the surrounding class definition looks like.
  • Anonymous functions are not supported. For llm_complete, the code has to be stored in a file, so using it in an interactive terminal won't work.
  • Output is extremely unstable. Tuning the seed and temperature helps, but not by much.
  • The implementation uses eval() and exec() to run code generated by an LLM. Use at your own risk (a sketch of the general pattern follows below).
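
To make that last point concrete, here is a minimal, self-contained sketch of the exec()-based pattern, as an illustration of the general technique only, not this module's actual implementation:

import textwrap

def exec_generated(source: str, func_name: str):
    # Compile and run LLM-generated source in a fresh namespace, then
    # hand back the named function. This executes arbitrary code,
    # which is exactly why the warning above exists.
    namespace = {}
    exec(textwrap.dedent(source), namespace)
    return namespace[func_name]

# Pretend this string came back from an LLM call:
generated = """
def add(a: int, b: int) -> int:
    return a + b
"""

add = exec_generated(generated, "add")
print(add(2, 3))  # 5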

Acknowledgement

Cow art from the wonderful command line tool that is cowsay.
