Description
Validations
- I believe this is a way to improve. I'll try to join the Continue Discord for questions
- I'm not able to find an open issue that requests the same enhancement
Problem
I have two main models configured in Continue that I regularly switch between:
- A larger model running on a remote Mac Studio with an M2 Ultra, which can handle large context windows and delivers fast autocompletions.
- A low-parameter model running locally on my MacBook Pro with an M2 Pro, which I use when I’m on the go and don’t have a stable internet connection (a common issue in Germany).
The problem is that my MacBook Pro cannot handle the same context window size (`maxPromptTokens`) as the Mac Studio. As a result, I need to manually adjust the `maxPromptTokens` setting in `TabAutocompleteOptions` every time I switch between the models.
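For context, my setup today looks roughly like the sketch below (model names, the `apiBase` host, and token counts are just examples from my machines). As I understand it, `tabAutocompleteOptions` is a single top-level block in `config.json`, so it applies to whichever autocomplete model is currently configured, and switching means editing both blocks by hand:

```json
{
  "tabAutocompleteModel": {
    "title": "MacBook Pro (local)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  },
  "tabAutocompleteOptions": {
    "maxPromptTokens": 1024
  }
}
```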
Solution
It would be great to configure `TabAutocompleteOptions` (e.g., `maxPromptTokens`) on a per-model basis. This would allow users to tailor settings like `maxPromptTokens` to the capabilities of the hardware or the size of the model being used. For example:
- The Mac Studio could use a larger `maxPromptTokens` value for more comprehensive autocompletions.
- The MacBook Pro could use a smaller `maxPromptTokens` value to ensure good performance and lower battery usage.
This feature would eliminate the need for manual adjustments when switching models and improve the overall user experience.
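For illustration only, per-model options could look something like the following in `config.json`. The nested `tabAutocompleteOptions` under each model and the list form of `tabAutocompleteModel` are just a sketch of the idea, not an existing Continue API:

```json
{
  "tabAutocompleteModel": [
    {
      "title": "Mac Studio (remote)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "apiBase": "http://mac-studio.local:11434",
      "tabAutocompleteOptions": {
        "maxPromptTokens": 4096
      }
    },
    {
      "title": "MacBook Pro (local)",
      "provider": "ollama",
      "model": "qwen2.5-coder:1.5b",
      "tabAutocompleteOptions": {
        "maxPromptTokens": 1024
      }
    }
  ]
}
```

With something along these lines, Continue could pick up the right `maxPromptTokens` automatically whenever the active autocomplete model changes.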