Per-Model Configuration for TabAutocompleteOptions #4020

Open
@Hinkolas

Description

Validations

  • I believe this is a way to improve. I'll try to join the Continue Discord for questions
  • I'm not able to find an open issue that requests the same enhancement

Problem

I have two main models configured in Continue that I regularly switch between:

  1. A larger model running on a remote Mac Studio with an M2 Ultra, which handles large context windows and delivers fast autocompletions.
  2. A low-parameter model running locally on my MacBook Pro with an M2 Pro, which I use when I'm on the go and don't have a stable internet connection (a common issue in Germany).

The problem is that my MacBook Pro cannot handle the same context window size (maxPromptTokens) as the Mac Studio. As a result, I need to manually adjust the maxPromptTokens setting in TabAutocompleteOptions every time I switch between the models.
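For reference, these options currently live in a single global block in `config.json`, separate from the model definitions. A sketch of a setup like the one described above might look as follows (the model names, `apiBase`, and token value are illustrative, not taken from my actual config):

```json
{
  "tabAutocompleteModel": {
    "title": "Remote Model (Mac Studio)",
    "provider": "ollama",
    "apiBase": "http://mac-studio.local:11434",
    "model": "qwen2.5-coder:14b"
  },
  "tabAutocompleteOptions": {
    "maxPromptTokens": 2048
  }
}
```

Because `tabAutocompleteOptions` is global, the `maxPromptTokens` value here applies no matter which autocomplete model is active.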

Solution

It would be great to configure TabAutocompleteOptions (e.g., maxPromptTokens) on a per-model basis. This would allow users to tailor settings like maxPromptTokens to the capabilities of the hardware or the size of the model being used. For example:

  • The Mac Studio could use a larger maxPromptTokens value for more comprehensive autocompletions.
  • The MacBook Pro could use a smaller maxPromptTokens value to ensure good performance and lower battery usage.

This feature would eliminate the need for manual adjustments when switching models and improve the overall user experience.
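One possible shape for this (purely hypothetical, not an existing Continue option) would be to allow a `tabAutocompleteOptions` override inside each model entry, taking precedence over the global block:

```json
{
  "tabAutocompleteModel": [
    {
      "title": "Remote Model (Mac Studio)",
      "provider": "ollama",
      "apiBase": "http://mac-studio.local:11434",
      "model": "qwen2.5-coder:14b",
      "tabAutocompleteOptions": { "maxPromptTokens": 2048 }
    },
    {
      "title": "Local Model (MacBook Pro)",
      "provider": "ollama",
      "model": "qwen2.5-coder:1.5b",
      "tabAutocompleteOptions": { "maxPromptTokens": 512 }
    }
  ]
}
```

With per-model overrides like these, switching models would automatically pick up the matching prompt-size limit, and the global block would remain as a fallback for models that don't specify one.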

Metadata

Labels

  • area:autocomplete — Relates to the autocomplete feature
  • kind:enhancement — Indicates a new feature request, improvement, or extension
  • needs-triage
  • os:mac — Happening specifically on Mac
  • priority:medium — Indicates medium priority
