Description
Is your feature request related to a problem? Please describe.
It is not clear to generator users which Swagger and OpenAPI features are supported by our generators.
Describe the solution you'd like
I propose adding a feature matrix like the one shown below.
We can generate this feature matrix from unit tests run in CI, where each test either passes or fails.
A passing test fills in an X (or true) to show that the feature is supported.
A failing test leaves the cell blank (or false) to show that the feature is not supported.
We could also include hyperlinks to the tests where they exist; a rough sketch of one such per-cell test is shown below.
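As a rough sketch, one matrix cell could map to one unit test like the following. The test class name, the `generateModelSource` helper, and the assertion are hypothetical placeholders to illustrate the idea, not existing openapi-generator APIs:

```java
// Sketch only: one matrix cell == one unit test.
// generateModelSource(...) is a hypothetical placeholder for
// whatever generation harness the real tests would use.
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class JavaClientTypeSupportTest {

    @Test
    void modelParamSupportsInt64() {
        // Generate a model from a minimal schema declaring an
        // integer/int64 property (helper is a placeholder) ...
        String source = generateModelSource("type: integer\nformat: int64");
        // ... and assert the generated Java model uses Long.
        // Pass -> "X" in the Model.param / Int64 cell; fail -> blank.
        assertTrue(source.contains("private Long"));
    }

    // Placeholder for the real generation harness.
    private String generateModelSource(String propertySchema) {
        throw new UnsupportedOperationException("sketch only");
    }
}
```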
Type Support
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#dataTypes
Location | Int64 | Int32 | string | bool |
---|---|---|---|---|
Model.param | X | X | X | X |
Request.query_param | X | X | X | |
Request.form_param | X | X | | |
Response.query_param | X | X | X | |
Response.form_param | X | X | | |
Schema.additionalProperties
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#schema-object
Location | Supported |
---|---|
Model | X |
Request.body | X |
Response.data | X |
These matrices could be generated from the unit tests we run in CI or locally on dev machines; a sketch of the markdown emitter follows below.
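Here is a minimal sketch of what that emitter could look like, assuming some upstream step (e.g. parsing JUnit reports) has already produced a pass/fail set per location and type. The class name and the hard-coded data are illustrative only:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FeatureMatrixEmitter {
    public static void main(String[] args) {
        // Columns of the matrix.
        List<String> types = List.of("Int64", "Int32", "string", "bool");

        // In the real pipeline this would be parsed from JUnit XML
        // reports; hard-coded here so the sketch runs on its own.
        // Each entry: location -> set of types whose test passed.
        Map<String, Set<String>> passed = new LinkedHashMap<>();
        passed.put("Model.param", Set.of("Int64", "Int32", "string", "bool"));
        passed.put("Request.query_param", Set.of("Int64", "Int32", "string"));
        passed.put("Request.form_param", Set.of("Int64", "Int32"));

        // Emit the header and separator in GitHub-flavored markdown.
        System.out.println("Location | " + String.join(" | ", types) + " |");
        System.out.println("---|" + "---|".repeat(types.size()));

        // One row per location; X for a passing test, blank otherwise.
        for (var row : passed.entrySet()) {
            StringBuilder line = new StringBuilder(row.getKey());
            for (String t : types) {
                line.append(" | ").append(row.getValue().contains(t) ? "X" : "");
            }
            System.out.println(line.append(" |"));
        }
    }
}
```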
Describe alternatives you've considered
- Manual addition of this info (very error-prone and can go stale)
- Automated generation of this data using unit tests (table kept up to date)
- Add feature metadata in Java classes
  - How do we ensure that this metadata stays consistent with what the generators actually do, and that each feature keeps working?
Additional context
This adds a potentially large number of new tests, but it would help devs understand the differences between generators and help us focus our efforts on bug fixes and feature additions.