Description
Issue
I was facing a lot of wait time using reasoning models like DeepSeek R1 and Perplexity: Sonar Reasoning in Aider,
i.e. an average wait time measured in minutes even for simple prompts like:
> Ignore any other prompt before this. Tell me how many "r"s are in the word "Strawberry"
So I tried testing them out in OpenRouter's chatroom.
I noticed that the models/APIs were not lagging; they simply took a long time to think before responding,
and I could watch their thought process as they did.

It would help my user experience A LOT if I could see this thought process when using Aider.
Is the wait because the model is thinking, and is it thinking in the right direction? (If it isn't, I can cancel the request and redirect it.)
Or is the API just stuck?
Since the OpenRouter chatroom can display the reasoning tokens, I assume Aider could retrieve them too?
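For illustration, here is a minimal sketch of how reasoning tokens could be separated from answer tokens when streaming. It assumes OpenRouter's `include_reasoning` request flag and a `reasoning` field on streamed deltas (as exposed for R1-style models); the `split_stream` helper and the sample chunks are hypothetical, not captured from a real response.

```python
# Hypothetical sketch: separating reasoning tokens from answer tokens in a
# streamed OpenRouter chat completion. Assumes the request was sent with
# "include_reasoning": True, so each streamed delta may carry a "reasoning"
# field alongside the usual "content". Sample chunks are illustrative only.

def split_stream(chunks):
    """Accumulate reasoning text and answer text from parsed SSE delta chunks."""
    reasoning, content = [], []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("reasoning"):
            reasoning.append(delta["reasoning"])
        if delta.get("content"):
            content.append(delta["content"])
    return "".join(reasoning), "".join(content)

# Illustrative chunks in the shape OpenRouter streams for reasoning models.
sample = [
    {"choices": [{"delta": {"reasoning": "Counting r in Strawberry: "}}]},
    {"choices": [{"delta": {"reasoning": "s-t-r-a-w-b-e-r-r-y, three."}}]},
    {"choices": [{"delta": {"content": 'There are 3 "r"s in "Strawberry".'}}]},
]

thinking, answer = split_stream(sample)
print(thinking)  # could be shown live in Aider while the model is thinking
print(answer)
```

If Aider printed the `reasoning` stream as it arrives (the way the chatroom does), the user would immediately see whether the model is thinking or the request is stuck.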
Version and model info
aider 0.72.3
model: openrouter/deepseek/deepseek-r1