Bug description
When a prompt contains JSON and call().entity(clazz) is used to retrieve the LLM response, the call to the LLM fails with java.lang.IllegalArgumentException: The template string is not valid.
Environment
Spring AI version: 1.0.0-M6
Java: 21
Models: likely all of them; it definitely happens with Azure OpenAI.
Steps to reproduce
Execute the snippet from the Minimal Complete Reproducible example below on any model; it will throw The template string is not valid.
Expected behavior
Prompts with inline JSON must not cause an exception when they are decorated with the JSON instruction. More generally, no decoration of a prompt should make the prompt's content invalid.
Minimal Complete Reproducible example
chatClient.prompt().user("What is the current weather in {\"city\":\"Bucharest\"}?").call().entity(ChatResponse.class);
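For reference, ChatResponse here is assumed to be a plain application record used as the structured-output target; the fields below are made up for illustration, and it is not Spring AI's own org.springframework.ai.chat.model.ChatResponse:

// Hypothetical DTO for .entity(); any bean or record triggers the same behavior.
record ChatResponse(String city, String description, double temperature) {}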
Running this will throw:
Request processing failed: java.lang.IllegalArgumentException: The template string is not valid.] with root cause
org.stringtemplate.v4.compiler.STException: null
at org.stringtemplate.v4.compiler.Compiler.reportMessageAndThrowSTException(Compiler.java:224) ~[ST4-4.3.4.jar:na]
at org.stringtemplate.v4.compiler.Compiler.compile(Compiler.java:154) ~[ST4-4.3.4.jar:na]
at org.stringtemplate.v4.STGroup.compile(STGroup.java:514) ~[ST4-4.3.4.jar:na]
at org.stringtemplate.v4.ST.<init>(ST.java:162) ~[ST4-4.3.4.jar:na]
at org.stringtemplate.v4.ST.<init>(ST.java:156) ~[ST4-4.3.4.jar:na]
at org.springframework.ai.chat.prompt.PromptTemplate.<init>(PromptTemplate.java:80) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
at org.springframework.ai.chat.client.advisor.api.AdvisedRequest.toPrompt(AdvisedRequest.java:171) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
at org.springframework.ai.chat.client.DefaultChatClient$DefaultChatClientRequestSpec$1.aroundCall(DefaultChatClient.java:680) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
at org.springframework.ai.chat.client.advisor.DefaultAroundAdvisorChain.lambda$nextAroundCall$1(DefaultAroundAdvisorChain.java:98) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
at io.micrometer.observation.Observation.observe(Observation.java:564) ~[micrometer-observation-1.14.4.jar:1.14.4]
at org.springframework.ai.chat.client.advisor.DefaultAroundAdvisorChain.nextAroundCall(DefaultAroundAdvisorChain.java:98) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
at org.springframework.ai.chat.client.DefaultChatClient$DefaultCallResponseSpec.doGetChatResponse(DefaultChatClient.java:493) ~[spring-ai-core-1.0.0-M6.jar:1.0.0-M6]
Additional output in the log:
1:39: '"Bucharest"' came as a complete surprise to me
This works fine:
chatClient.prompt().user("What is the current weather in Bucharest?").call().entity(ChatResponse.class);
This also works fine:
chatClient.prompt().user("What is the current weather in {\"city\":\"Bucharest\"}?").call().content();
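Until this is fixed, one possible workaround (a sketch, untested against M6; "payload" is a variable name made up for the example) is to keep the JSON out of the template text and pass it as a template parameter, so StringTemplate only ever compiles a valid placeholder and the substituted value is never re-parsed:

chatClient.prompt()
        .user(u -> u.text("What is the current weather in {payload}?") // valid ST template
                .param("payload", "{\"city\":\"Bucharest\"}"))         // substituted value is not re-parsed
        .call()
        .entity(ChatResponse.class);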
Additional info / hints:
From my investigation it looks like it is caused by AdvisedRequest, where the prompt is decorated with text instructing the model to respond in JSON, plus the schema derived from the class passed to the .entity() builder method. The decoration happens by appending the {spring_ai_soc_format} placeholder to the raw prompt text and then passing everything to the formatter.
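For illustration, a minimal sketch of the underlying failure, assuming ST4 4.3.4 on the classpath (PromptTemplate constructs the ST with '{' and '}' as delimiters, as the stack trace above shows):

import org.stringtemplate.v4.ST;

// The decorated prompt as described above: raw user text plus the placeholder.
String decorated = "What is the current weather in {\"city\":\"Bucharest\"}?"
        + System.lineSeparator()
        + "{spring_ai_soc_format}";
// ST compiles the template eagerly and throws STException at construction time,
// which PromptTemplate surfaces as IllegalArgumentException("The template string is not valid").
new ST(decorated, '{', '}');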
While automatically adding the schema and the JSON instruction is a very nice feature, it shouldn't break the input prompt.
It could also be an injection point, since the prompt might use ST syntax to pull additional information from the advise context.
Maybe a better solution would be to use concatenation instead of creating a template out of the input prompt.
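A rough sketch of that suggestion (hypothetical, not the actual Spring AI code; BeanOutputConverter.getFormat() is the existing way to obtain the JSON-schema instructions):

// Hypothetical fix: append the format instructions by plain concatenation,
// so the user's text never goes through the ST4 compiler.
String format = new BeanOutputConverter<>(ChatResponse.class).getFormat();
String finalPrompt = userText + System.lineSeparator() + format;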