Update to GitHub Copilot Consumptive Billing Experience #163114
Replies: 40 comments 32 replies
-
300 premium requests are not enough for the Pro plan.
-
The
-
It feels like instead of making Pro+ a better plan, you’ve simply made the Pro plan worse.
-
Nothing exciting about this change. The Pro plan looks pretty useless from now on. I can watch the premium requests being consumed in real time, and it's not cheap.
-
I have the feeling that someone just clipped my wings!
-
Which model should I set so that I never have to worry about this limit again (until a similar announcement)?
-
Crazy how, right when you set that 300-request limit, Cursor updated their pricing plan from 500 requests to unlimited, to better align with Claude Code and other LLM providers' pricing.
-
I don't like it. VS Code has now become less effective for non-devs. If you are going to charge to use an LLM, then users should be able to bring their own.
-
I run a GitHub Enterprise instance with Copilot for Business enabled. As far as I can tell, none of the usage monitoring features are available. Is this by design? This is feeling pretty half-baked at this point. What am I supposed to tell my user base? They just have to guess how many requests they have left? Or are you seriously expecting my team to run a centralised report every time someone asks what their usage is like? If you at least had an API that'd be something, but I see you couldn't even work up the energy to implement that. In case my feedback is not clear here: this is dire. You need to try harder. Right now you're just signalling that you are not investing in enterprise users.

EDIT: I see there is the ability to view this in the IDE. It was not appearing for us straight away this morning, but after a restart or two it started to appear. However, the lack of usage tracking on github.com is still poor, as is the lack of notice before switching this on.
-
Well, that's a shame. The unlimited nature of Copilot made it very useful. I've been with Copilot since before it launched, but now I'm looking for alternatives.
-
It seems completely unfair to put such severe limitations on us when we are paying for it.
-
I can see the Quota Usage in the IDE. It currently says 1%, but I don't understand why I need to wait 30 minutes to see how many requests I made this month. That makes it hard to get a feel for what a typical premium request uses compared to the monthly included requests. I would love an absolute count in the IDE, plus a real-time overview for the last hour, today, and last week. I also had many requests aborted due to errors (mostly with Gemini 2.5 Pro, which is the best model in my opinion); do these still count as premium requests?
-
This made my GitHub Copilot subscription completely useless. I won't renew my subscription and will switch to something else. I'm sorry GitHub: I was a huge fan and promoter of the GitHub Copilot Pro plan and got you a few customers too. I'll make sure all of them cancel their subscriptions, as it's clearly not worth the money anymore.
-
Of course GPT-4.1 is the included model... You always have to watch it in agent mode, otherwise it will stop. At least Claude Sonnet 4, Gemini, etc. act as TRUE agents that can work autonomously. I ask it to analyse my project and make detailed documentation, and it gives me a markdown file with 40 lines... I'm cancelling my GitHub Copilot Pro subscription to get Claude Max. 👋
-
Oh, what perfect timing! Just when I needed to renew my education verification, a truly delightfully streamlined process featuring their brilliantly designed auto-rejection model that clearly has my best interests at heart. So here I am, 3 days later, still waiting for this mythical "developer pack" (complete with the oh-so-generous Copilot Pro), while my "free" premium quota evaporated after a whopping 5 or 6 requests. Truly spectacular work there. Despite being a loyal beta user since day one (because apparently that counts for absolutely nothing), I'm graciously taking my business to Cursor. But hey, I'm sure everyone will be thrilled with those incredibly generous 300 requests per month. What a bargain!
-
300 messages is a joke. I really hope it will be reverted; otherwise they will lose another sub.
-
300 a month is just too low. I'm likely switching to Cursor.
-
The problem is, half the time the code doesn't even work and you have to correct it again. For actions that actually advance the code base, you can go through many iterations just to get the result you want. It's the same with every LLM interaction, whether through the subscriptions of ChatGPT, Claude, or Gemini. When it comes to code, random stuff constantly gets put in; Claude 3.7 would randomly change button colours and layout for no reason. In the past you would just re-prompt it to fix the errors the models create and continue; at this rate limit, that's basically useless. Even 1,500 requests on Pro+ is basically useless, because you're paying for the LLM to make mistakes.
-
Documentation: provide examples of typical monthly usage patterns by developer type.
-
Maybe add slow premium requests after fast premium requests are depleted? 🙄
-
At least add Claude 3.5 to the included models. The Pro plan is useless now.
-
Done, like I said I would. Vote with your wallets, people. It's the only language they understand.
-
Well… here is your feedback! Please take it seriously. I have been using Copilot with Sonnet and it was really a dream. Not perfect by any means, but it was a really helpful tool. Since the change rolled out I have apparently run out of premium requests (I seriously doubt it, unless they were counted retroactively, in which case shame on you; but I am going on holiday now and will simply wait for the month to finish at this point). I then gave the unlimited alternatives a go, and I suppose I have to adjust my expectations and accept an AI that consistently hallucinates about the code I am working on (tells me to include files that don't exist, explains the code with references to functions that don't exist, etc.). I join the chorus here: you want to cap the usage? Then every request needs to give a perfect answer. Obviously that's not possible, so:
-
My premium requests have exceeded the quota, but I cannot continue using GPT-4o as a substitute; I can only use GPT-4.1. Cursor just announced that Pro plan users can make unlimited requests to the premium model. I suggest that Microsoft, as an official first-party partner of OpenAI, always provide the strongest OpenAI programming model to Pro plan users without any limit. And model selection should follow Cursor's approach: simple tasks automatically assigned to the standard OpenAI programming model, and complex tasks automatically assigned to the strongest programming models such as Claude 4.
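The auto-routing idea in the comment above could be sketched roughly as follows. This is a minimal illustration, not anything Copilot or Cursor actually implements; the model names and the complexity heuristic are purely hypothetical assumptions.

```python
# Hypothetical router: send simple prompts to a cheaper "included" model
# and complex ones to a stronger premium model. Both model names and the
# length/file-count heuristic are illustrative assumptions only.

CHEAP_MODEL = "gpt-4.1"           # assumed included model
STRONG_MODEL = "claude-sonnet-4"  # assumed premium model

def pick_model(prompt: str, files_touched: int = 0) -> str:
    """Route long prompts or multi-file edits to the stronger model,
    everything else to the cheaper one."""
    is_complex = len(prompt) > 400 or files_touched > 2
    return STRONG_MODEL if is_complex else CHEAP_MODEL

pick_model("rename this variable")                      # -> "gpt-4.1"
pick_model("refactor the auth flow", files_touched=5)   # -> "claude-sonnet-4"
```

A real implementation would presumably classify intent with a lightweight model rather than a length heuristic, but the dispatch shape would be similar.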
-
fuk you
-
HAHAHA, vibe-coder idiots! You thought they would just give you their expensive gigantic model outputs for free? Welcome to being nickel-and-dimed, cloud style! Maybe we'll even go back to competent programmers.
-
Using Cursor now; it's much better. Even their base model is useful. It doesn't work like GPT-4.1 in Copilot, which is only good for basic things.
-
I'm still trying to understand how this new change is in any possible way better, or at least comparable, or a new and exciting thing (which is the main feeling I've been getting from the blog posts and the various announcements in general). I can totally see myself having missed some detail that makes this whole thing make sense, so please let me know if that's the case. Because otherwise, my customer experience just went from practically unlimited access to a big array of diverse LLMs to work with and swap as necessary, to that access suddenly becoming majorly restricted: still "unlimited", but only for pretty much the two worst LLMs from the previous array.

That's what I've understood from the whole thing, so I'm wondering what the reason was for this change. Did GitHub determine that the Pro subscription had access to more value than what it paid for? Or maybe those extra models needed to be less congested, so access to them got restricted? I'm not being sarcastic here or anything similar, I'm just genuinely curious. And moreover, I can't see how this is supposed to be even remotely positive for me as the customer, let alone exciting. This feels more like GitHub either decided that $10 is too low an amount to charge for the Pro subscription, or that it could be making more money from the new models, or both. I'm not sure what to think about it; I get that it's a company that must always make more money than yesterday. But this whole change feels a bit dishonest? Exploitative? Can't really pinpoint it, but if so, that could also explain why it's being marketed as such a hyped, positive change for the customer.

Feels kinda bad, because I've been a supporter of Copilot since quite early on, and together with GitHub they've acquired a special, familiar place in my heart (mostly in comparison to other similar ecosystems; I know it's a company). All this time I haven't considered using another code assistant, and I've generally been supportive of Copilot/GitHub in various forums and posts, but I think I'll start looking at other coding-assistant subscriptions from now on 🤷🏼 But yeah, thanks for coming to my TED talk.
-
Hello Copilot Community,
We’re excited to announce an update to your GitHub Copilot experience: monthly premium request allowances are now in effect for all paid Copilot plans! This change is designed to give you more transparency and control over your premium Copilot usage.
What’s changed?
The monthly allowance of premium requests per user is now enforced for Copilot Pro, Pro+, Business, and Enterprise plans. Premium requests unlock a selection of advanced AI models and features—usage varies by model.
What’s unchanged?
All paid plans still include unlimited use of GPT-4.1 and GPT-4o for agent mode and chat interactions, plus unlimited code completions! Please note that rate limits may still apply across all models.
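To make the allowance model concrete, here is a minimal sketch of how a monthly premium-request budget with per-model consumption might be tallied. The multiplier values and model identifiers below are illustrative assumptions, not GitHub's published rates; the announcement only says that "usage varies by model" and that GPT-4.1 and GPT-4o remain unlimited.

```python
# Hedged sketch of premium-request accounting: included models consume no
# premium requests, while premium models draw down the monthly allowance.
# All multipliers and model names here are hypothetical.

MONTHLY_ALLOWANCE = 300  # e.g. the Copilot Pro allowance discussed in this thread

MODEL_MULTIPLIER = {
    "gpt-4.1": 0.0,          # included model: does not consume premium requests
    "gpt-4o": 0.0,           # included model
    "claude-sonnet-4": 1.0,  # assumed premium multiplier
    "gemini-2.5-pro": 1.0,   # assumed premium multiplier
}

def requests_used(calls):
    """Total premium-request consumption for a list of (model, count) pairs."""
    return sum(MODEL_MULTIPLIER[model] * count for model, count in calls)

usage = requests_used([("gpt-4.1", 500), ("claude-sonnet-4", 120)])
remaining = MONTHLY_ALLOWANCE - usage  # 300 - 120 = 180 under these assumptions
```

Under this (assumed) scheme, heavy use of the included models never touches the allowance, which is why the choice of default model matters so much to the commenters above.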
Managing Your Premium Requests
For all the details, check out our documentation.
We Want Your Feedback!
Have questions, comments, or suggestions? Your input helps shape the future of Copilot, so please comment below.
Thank you for being a part of our community and helping us make Copilot even better!