# .Net: docs(dotnet): supply maxTokens to avoid response truncation (#2249)
### Motivation and Context
1. Why is this change required?
   When working through the examples in the dotnet/README.md file, I noticed that [OpenAI defaults max_tokens to 16](https://platform.openai.com/docs/api-reference/completions/create) for the completions endpoint, which is far too low for these examples and led to truncated output and failing prompt-chaining examples.
2. What problem does it solve?
   It makes the README.md examples work again by supplying an explicit maxTokens value (see the sketch after this list).
3. What scenario does it contribute to?
   Onboarding/experimentation.
4. If it fixes an open issue, please link to the issue here.
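
For context, here is a minimal sketch of supplying an explicit maxTokens when invoking a prompt with the Semantic Kernel .NET API. The model id (`gpt-3.5-turbo`), the 2000-token limit, the `OPENAI_API_KEY` environment variable, and the prompt are illustrative assumptions, not the exact snippet this PR edits in dotnet/README.md; the `OpenAIPromptExecutionSettings` / `InvokePromptAsync` surface shown here may also differ from the README version the PR targeted.

```csharp
// Program.cs (top-level statements, .NET 8)
// Sketch: cap the completion length explicitly so responses are not truncated
// at a low default. Model id, token limit, and prompt are placeholders.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
             ?? throw new InvalidOperationException("Set OPENAI_API_KEY first.");

// Build a kernel backed by an OpenAI chat-completion service.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-3.5-turbo", apiKey: apiKey)
    .Build();

// Supply maxTokens explicitly; without it, a short limit can cut the response
// off mid-sentence and break prompt-chaining examples.
var settings = new OpenAIPromptExecutionSettings { MaxTokens = 2000 };

var result = await kernel.InvokePromptAsync(
    "Write a short poem about {{$topic}}.",
    new KernelArguments(settings) { ["topic"] = "Semantic Kernel" });

Console.WriteLine(result.GetValue<string>());
```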
### Contribution Checklist

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
Co-authored-by: Shawn Callegari <[email protected]>