DB logging crashes when response is empty #960

Open
thedch opened this issue Apr 26, 2025 · 0 comments
thedch commented Apr 26, 2025

While adding some features to the llm-grok plugin, I noticed that if the max tokens limit is low enough that the model never finishes its reasoning phase, the response is empty and the DB logging crashes.

❯ llm -m grok-3-mini-latest "what color is the sun?" -o max_completion_tokens 5

Traceback (most recent call last):
  File "/Users/daniel/.local/bin/llm", line 10, in <module>
    sys.exit(cli())
             ~~~^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/llm/cli.py", line 781, in prompt
    response.log_to_db(db)
    ~~~~~~~~~~~~~~~~~~^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/llm/models.py", line 456, in log_to_db
    "prompt_json": condense_json(self._prompt_json, replacements),
                   ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/condense_json/__init__.py", line 81, in condense_json
    return process(obj)
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/condense_json/__init__.py", line 53, in process
    return {key: process(val) for key, val in value.items()}
                 ~~~~~~~^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/condense_json/__init__.py", line 55, in process
    return [process(item) for item in value]
            ~~~~~~~^^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/condense_json/__init__.py", line 53, in process
    return {key: process(val) for key, val in value.items()}
                 ~~~~~~~^^^^^
  File "/Users/daniel/.local/share/uv/tools/llm/lib/python3.13/site-packages/condense_json/__init__.py", line 67, in process
    replacement_id: str = substr_to_id[matched_text]
                          ~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: ''

This raises a few broader questions about how to control reasoning tokens versus visible tokens, which each provider handles differently. Should reasoning tokens be visible to the user? (Should llm have a flag for that? I am not aware of one currently.) And if reasoning tokens are hidden from the user, but the max tokens limit is low enough that the response is empty, how should that be indicated to the user?

Regardless of those design questions, though, the DB logging crash itself seems like a rough edge case that should be handled gracefully.
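One way to sidestep the KeyError, independent of condense_json's internals, would be to drop empty replacement strings before they reach the substring-condensing step. A minimal sketch — the helper name and the replacements shape are illustrative assumptions, not the actual llm or condense_json code:

```python
def safe_replacements(replacements: dict[str, str]) -> dict[str, str]:
    """Filter out empty-string values from a replacements mapping.

    An empty model response has nothing worth condensing, and an empty
    string in the candidate substrings is what appears to trigger the
    KeyError: '' seen in the traceback above.
    """
    return {rid: text for rid, text in replacements.items() if text}


# With an empty response (as in the report), nothing is passed along:
assert safe_replacements({"r_abc": ""}) == {}

# Non-empty responses are unaffected:
assert safe_replacements({"r_abc": "The sun is white."}) == {
    "r_abc": "The sun is white."
}
```

The filtered dict could then be passed to condense_json in place of the raw one, so an empty response simply logs without any condensing rather than crashing.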
