
[BUG]: KUBECTL Azure Devops TASK kubernetes@1, cannot fetch access token for Azure on version 1.241.5 #20080


Closed
4 of 7 tasks
RSoares82 opened this issue Jun 27, 2024 · 23 comments

Comments

@RSoares82

New issue checklist

Task name

kubernetes@1

Task version

1.241.5

Issue Description

Task started to fail on version 1.241.5 with the message:

Reason Could not fetch access token for Azure. Status code: network_error, status message: Network request failed.

It works on prior versions

Environment type (Please select at least one environment where you face this issue)

  • Self-Hosted
  • Microsoft Hosted
  • VMSS Pool
  • Container

Azure DevOps Server type

dev.azure.com (formerly visualstudio.com)

Azure DevOps Server Version (if applicable)

No response

Operation system

Ubuntu 20.04

Relevant log output

2024-06-27T11:46:04.1785075Z ##[error]Cannot download access profile/kube config file for the cluster xpto. Reason Could not fetch access token for Azure. Status code: network_error, status message: Network request failed.

Full task logs with system.debug enabled

 [REPLACE THIS WITH YOUR INFORMATION] 

Repro steps

No response

@merlynomsft
Contributor

We appreciate your report. It seems there's a compatibility issue between the updated MSAL dependency in 1.241.5 and Node 10. Migrating to the Node 16 or Node 20 runtime is likely to rectify this problem. We are actively seeking a permanent fix.

@RSoares82
Author

Another thing... I tried to revert the task in the Azure Release Pipeline (classic) to a prior version by setting versionSpec, but it is not working; it always defaults to the latest version.

Not sure if it's another issue.

Thanks

@Ajeit8055

@RSoares82 You can try using task [email protected] instead of Kubernetes@1

@RSoares82
Author

@Ajeit8055 Yes, I did that in the YAML pipelines, but in classic releases I don't have that option and versionSpec didn't work.

@archertango

archertango commented Jul 8, 2024

Is there any update / progress on this? Release and build pipelines have been broken for us for almost two weeks now. We do regular releases, and this has significantly hindered our ability to ship updates.

@Ajeit8055 we've found a workaround using regular command line tasks calling kubectl. It's not perfect though.
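For anyone who needs something similar in the meantime, here is a minimal sketch of that kind of command-line workaround. The service connection, resource group and cluster names are placeholders, and it assumes the Azure CLI and kubectl are available on the agent:

```yaml
# Workaround sketch: bypass Kubernetes@1 and drive kubectl from a plain Azure CLI step.
# 'my-arm-service-connection', 'my-resource-group' and 'my-aks-cluster' are placeholders.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-arm-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Fetch the kubeconfig for the cluster directly via the Azure CLI
      az aks get-credentials --resource-group my-resource-group --name my-aks-cluster --overwrite-existing
      # Then call kubectl as usual
      kubectl apply -f manifests/
```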

@v-schhabra
Contributor

Hi @RSoares82 @archertango
Could you please share the complete debug logs of the failed pipeline at [email protected]?
While we work on the permanent fix, we will provide a temporary mitigation in case you are blocked.

@jappenzesr

We are facing the same issue on our privately hosted VMSS pools.

Please note that these agents are configured to run behind a corporate proxy. Could it be that the task is not properly picking up proxy settings from the system environment? Analysis with tcpdump reveals that the request towards login.microsoftonline.com is not being sent (no trace of it at the network level).
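One thing we might try to test this theory is to force the standard proxy variables onto the failing step and see whether the request to login.microsoftonline.com then shows up in the trace. Rough sketch only; the proxy URL and all input values below are placeholders:

```yaml
# Diagnostic sketch: explicitly hand the corporate proxy to the task via environment variables.
# 'http://proxy.corp.example:3128' and the input values are placeholders.
- task: Kubernetes@1
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: 'my-arm-service-connection'
    azureResourceGroup: 'my-resource-group'
    kubernetesCluster: 'my-aks-cluster'
    command: 'apply'
    arguments: '-f manifests/'
  env:
    HTTP_PROXY: 'http://proxy.corp.example:3128'
    HTTPS_PROXY: 'http://proxy.corp.example:3128'
    NO_PROXY: 'localhost,127.0.0.1'
```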

@v-schhabra v-schhabra added the Area:RM RM task team label Jul 9, 2024
@v-schhabra
Contributor

Hi @RSoares82
Thanks for sharing the logs. We can see this issue is thrown from one of the dependent packages. We are collaborating with the dependent package (MSAL node) team to solve this issue. We will keep you posted on updates.

@v-schhabra
Contributor

Hi @archertango @jappenzesr
If you do not have any workarounds, please share the complete pipeline logs at [email protected]. We can provide you a temporary workaround in case you are blocked.

@archertango

@v-schhabra I've emailed the build and release logs

@v-schhabra
Contributor

v-schhabra commented Jul 18, 2024

For the permanent fix we are collaborating with the MSAL owners' team and will update here once it is ready.

@v-schhabra v-schhabra added Area:RM RM task team and removed Area:RM RM task team labels Jul 25, 2024
@yolomaniac

yolomaniac commented Jul 30, 2024

Hello, I'm hitting exactly the same situation on AzureRmWebAppDeployment v4.242.0 too.

We appreciate your report. It seems there's a compatibility issue between the updated MSAL dependency in 1.241.5 and Node 10. Migrating to the Node 16 or Node 20 runtime is likely to rectify this problem. We are actively seeking a permanent fix.

The agent version is pipelines-agents-*, so it should not be a Node problem.

@yolomaniac

@v-schhabra and @archertango would you kindly share generic workaround steps with the community? Thanks

@v-schhabra
Contributor

Hi @yolomaniac
If you are using a YAML pipeline, you can specify the last working version,
for example like this: - task: [email protected]
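In context, the pinned task simply replaces the floating major version in your steps list, and the inputs stay the same. A small sketch; the version spec and all input values here are placeholders, use whatever last worked for you:

```yaml
steps:
# Pin to the last known-good task version instead of floating on the latest 1.x.
# '<major>.<minor>.<patch>' and the input values are placeholders.
- task: Kubernetes@<major>.<minor>.<patch>
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: 'my-arm-service-connection'
    azureResourceGroup: 'my-resource-group'
    kubernetesCluster: 'my-aks-cluster'
    command: 'apply'
    arguments: '-f manifests/'
```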

@yolomaniac

yolomaniac commented Jul 30, 2024

Is there a way to specify it using the classic pipeline interface?

I see this...
[screenshot attached]

@v-schhabra
Contributor

Hi @yolomaniac
Could you please share your org details at [email protected]?
The fix has been deployed to a few rings, so we just want to check whether it is already on your ring or not.

@yolomaniac

Hi @yolomaniac Could you please share your org details at [email protected]? The fix has been deployed to a few rings, so we just want to check whether it is already on your ring or not.

Thanks, someone from the organization will contact you soon.

@KristofKuli

Hi All,
We are experiencing the same issue with the "Package and deploy Helm charts" task, version 0.243.1. Is it possible to change the version for classic pipelines?

@v-schhabra
Contributor

Hi @KristofKuli,
We cannot change the version in a classic release pipeline. If possible, please use a YAML pipeline and specify the last working version until the fix is deployed to your ring.

@KristofKuli

Hi @v-schhabra ,

Thank you for the answer. Do you have any ETA for when the fix will be released?

@jappenzesr

Now that this issue has spread to the Helm task as well, we have a major production release pipeline blocked. Please provide us with an ETA for the push of this fix to ADO ring 5. This has the potential to quickly escalate into a major issue on our side.

@v-schhabra
Contributor

Hi @jappenzesr
The fixes will be rolled out to all the rings by 6th August. If you are blocked, we can perform a task override for your org specifically.

@v-schhabra
Contributor

v-schhabra commented Aug 2, 2024

Hi, the fixes have been rolled out on all the rings, as a large number of customers were impacted by this issue.
#20170
