
Prompts with inline JSON throw an error when the response is to be deserialized to an entity #2347

Open

pradu2 opened this issue Feb 28, 2025 · 0 comments

Bug description
When a prompt contains inline JSON and call().entity(clazz) is used to retrieve the LLM response, the call to the LLM fails with java.lang.IllegalArgumentException: The template string is not valid.

Environment
Spring AI version: 1.0.0-M6
Java: 21
Models: likely all of them; confirmed with Azure OpenAI.

Steps to reproduce

Execute on any model:

chatClient.prompt().user(<<text containing some json inline>>).call().entity(clazz)

and it will throw The template string is not valid.

See the full example in Minimal Complete Reproducible example.

Expected behavior
Prompts with inline JSON must not cause an exception when they get decorated with the JSON format instruction. More generally, no decoration of a prompt should make the prompt's content invalid.

Minimal Complete Reproducible example

chatClient.prompt().user("""
What is the current weather in {"city":"Bucharest"}?
""").call().entity(ChatResponse.class);

will throw:

Request processing failed: java.lang.IllegalArgumentException: The template string is not valid.] with root cause

org.stringtemplate.v4.compiler.STException: null
	at org.stringtemplate.v4.compiler.Compiler.reportMessageAndThrowSTException(Compiler.java:224) ~[ST4-4.3.4.jar:na]
	at org.stringtemplate.v4.compiler.Compiler.compile(Compiler.java:154) ~[ST4-4.3.4.jar:na]
	at org.stringtemplate.v4.STGroup.compile(STGroup.java:514) ~[ST4-4.3.4.jar:na]
	at org.stringtemplate.v4.ST.<init>(ST.java:162) ~[ST4-4.3.4.jar:na]
	at org.stringtemplate.v4.ST.<init>(ST.java:156) ~[ST4-4.3.4.jar:na]
	at org.springframework.ai.chat.prompt.PromptTemplate.<init>(PromptTemplate.java:80) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	at org.springframework.ai.chat.client.advisor.api.AdvisedRequest.toPrompt(AdvisedRequest.java:171) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	at org.springframework.ai.chat.client.DefaultChatClient$DefaultChatClientRequestSpec$1.aroundCall(DefaultChatClient.java:680) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	at org.springframework.ai.chat.client.advisor.DefaultAroundAdvisorChain.lambda$nextAroundCall$1(DefaultAroundAdvisorChain.java:98) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	at io.micrometer.observation.Observation.observe(Observation.java:564) ~[micrometer-observation-1.14.4.jar:1.14.4]
	at org.springframework.ai.chat.client.advisor.DefaultAroundAdvisorChain.nextAroundCall(DefaultAroundAdvisorChain.java:98) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	at org.springframework.ai.chat.client.DefaultChatClient$DefaultCallResponseSpec.doGetChatResponse(DefaultChatClient.java:493) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
	

Additional output in the log:

1:39: '"Bucharest"' came as a complete surprise to me

(This is ST4's template compiler reporting a syntax error at line 1, character 39 of the template, i.e. right at the inline JSON value.)

This works fine:

chatClient.prompt().user("""
What is the current weather in Bucharest?
""").call().entity(ChatResponse.class);

This also works fine:

chatClient.prompt().user("""
What is the current weather in {"city":"Bucharest"}?
""").call().content();

Additional info / hints

From my investigation it looks like it is caused by AdvisedRequest, where the prompt is decorated with text instructing the model to respond in JSON, plus the schema derived from the class passed to the .entity() builder method. The decoration works by appending the {spring_ai_soc_format} placeholder to the raw prompt text and then passing everything to the template formatter.
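To illustrate, here is a minimal sketch that reproduces the failure directly against ST4, outside Spring AI. It assumes (as the stack trace and the error message above suggest) that PromptTemplate builds its ST with '{' and '}' as delimiters, so the inline JSON gets compiled as a template expression:

import org.stringtemplate.v4.ST;

// The decorated user text: the raw prompt plus the appended format placeholder.
String decorated = "What is the current weather in {\"city\":\"Bucharest\"}? {spring_ai_soc_format}";

// With '{' and '}' as delimiters, ST tries to compile "city":"Bucharest"
// as an attribute expression and throws at construction time with
// org.stringtemplate.v4.compiler.STException, matching the trace above.
ST st = new ST(decorated, '{', '}');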

While automatically adding the schema and the JSON instruction is a very nice feature, it shouldn't break the input prompt.

It could also be an injection point, since a prompt could then use ST syntax to pull additional information out of the advised context.

Maybe a better solution would be to use plain string concatenation instead of compiling the whole input prompt into a template.
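In the meantime, that concatenation can be done by hand as a workaround: skip .entity(), append the format instructions from a BeanOutputConverter yourself, and convert the raw content afterwards. Since no template parameters are in play, the inline JSON never reaches the ST compiler, just like in the working .content() example above. A sketch only, assuming BeanOutputConverter.getFormat() and convert() behave in spring-ai-core 1.0.0-M6 as documented:

import org.springframework.ai.converter.BeanOutputConverter;

BeanOutputConverter<ChatResponse> converter = new BeanOutputConverter<>(ChatResponse.class);

// Append the JSON-format instructions manually and call .content(),
// avoiding the .entity() decoration that triggers template compilation.
String content = chatClient.prompt()
        .user("""
                What is the current weather in {"city":"Bucharest"}?
                """ + converter.getFormat())
        .call()
        .content();

ChatResponse result = converter.convert(content);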
