LOG-4782: Workload Identity through Microsoft Entra ID For Azure Monitor Logs #1786
Conversation
@Clee2691: This pull request references LOG-4782 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.19.0" version, but no target version was set. In response to this:
@@ -0,0 +1,264 @@
---
title: workload_identity_support_for_azure_monitor_logs
nit: hyphens, rather than snakecase, are the standard for filenames in this directory.
suggested title change: "Workload Identity Auth for Azure Monitor Logs" as I don't think the "Entra ID" part is necessary to mention in the PR or anywhere else. Entra is their entire suite of IAM and access solutions. It's implied.
It used to be called Azure Active Directory (AD) so that’s why I put that distinction
## Release Sign-off Checklist

- [ ] Enhancement is `implementable`
Can some of these be checked off?
Possibly. Probably after a consensus on the work and its feasibility?
### Non-Goals

- Supporting authentication other than short lived federated token credentials.
One other: shared_key. We need to ensure long-lived credentials continue to be supported as well.
We can keep the shared_key, but will need to migrate to another long-lived credential solution with the Log Ingestion API.
- See #1 in [implementation details](#implementation-detailsnotesconstraints) section.
- Update Vector's rust [Azure Identity](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/identity/azure_identity) client library to `v0.23.0`.
- See #2 in [implementation details](#implementation-detailsnotesconstraints) section.
- Extend Vector's Azure Log Ingestion sink to accept additional configuration for workload identity authentication.
suggestion: explicitly call out either ViaQ/Vector or vectordotdev (upstream) or some other way to clarify.
- Switch over to the new Azure Log Ingestion vector sink when implemented.
- See #1 in [implementation details](#implementation-detailsnotesconstraints) section.
- Update Vector's rust [Azure Identity](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/identity/azure_identity) client library to `v0.23.0`.
Do you know what version of Vector we will need to include this? Or are we vendoring it ourselves either way?
This is addressed in point 2 of implementation details. See link right below this line to jump to it.
The Vector collector will:

1. Determine the authentication type using a configurable field (`credential_kind`).
Our plan for future (gcp, azure, alibaba) is to have auth type determined by the type of credentials you are pointing to. This is how local config and credential files work in both gcp and aws (even though we already implemented cw.auth.type). Does azure require any 'type' when you auth with their cli? If not, then I'd suggest we align as closely as possible to standard cloud auth and (wip) gcp auth.
The way that the upstream PR is trying to implement it is using `azure_identity::create_credential()` from v0.21.0, which depends on the `AZURE_CREDENTIAL_KIND` environment variable in order to use the right credential flow.

Even if we use `default_credentials()` for a credential chain, it does not support WIF, and the credential types are created based on the available environment variables.

In v0.23.0 of the `azure_identity` SDK, there is an option to use `ChainedTokenCredential`, which provides a user-configurable `TokenCredential` authentication flow for applications that will be deployed to Azure. So this is an option, and we will not need to specify any type.

Admittedly, I have not thought much about https://issues.redhat.com/browse/LOG-6857; aligning with the other clouds will need to be a joint venture.
The ClusterLogForwarder will:

1. Determine which authentication method to use based on a configurable field on the `azureMonitorAuthentication`.
related to comment above. I don't think we want to add auth.type
### Open Questions

1. Do we also want to implement long-lived credential support using the Log Ingestion API?
Yes, because we need to deprecate the other.
Should this be done in this scope or in another enhancement proposal?
### Open Questions

1. Do we also want to implement long-lived credential support using the Log Ingestion API?
2. Do we want to start deprecating the fields for the HTTP data collector API?
We can delay this for now. Possibly a v6.4 item.
It's worth considering if we can deprecate the entire output, which makes a cut over cleaner. Can we identify a good, distinguishing name?
See above comment
### Test Plan

- Manual E2E tests: Need access to Azure accounts along with an Openshift cluster configured to use Azure's workload identity.
refer to the Azure links in this: https://devservices.dpp.openshift.com/support/
We will be migrating to our own account shortly, but this will work for now.
This was a more blanket statement for E2E testing. I have access to Azure and was able to test all this out.
[Azure Monitor Logs](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs) is a comprehensive service provided by Microsoft Azure that enables the collection, analysis, and actioning of telemetry data across various Azure and on-premises resources.

This proposal enhances the Azure Monitor Logs integration by implementing secure, short-lived authentication with [Microsoft Entra Workload Identity (WID)](https://learn.microsoft.com/en-us/entra/workload-id/workload-identities-overview) through federated tokens. The update will leverage a pending, upstream Vector PR, [azure_logs_ingestion feature](https://github.com/vectordotdev/vector/pull/22912), that will utilize the new Log Ingestion API.
Monitor Logs log collection integration ...
... that will implement the new Azure? Log Ingestion API (do we have a link to ref?)
[Azure Monitor Logs](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs) is a comprehensive service provided by Microsoft Azure that enables the collection, analysis, and actioning of telemetry data across various Azure and on-premises resources.

This proposal enhances the Azure Monitor Logs integration by implementing secure, short-lived authentication with [Microsoft Entra Workload Identity (WID)](https://learn.microsoft.com/en-us/entra/workload-id/workload-identities-overview) through federated tokens. The update will leverage a pending, upstream Vector PR, [azure_logs_ingestion feature](https://github.com/vectordotdev/vector/pull/22912), that will utilize the new Log Ingestion API.
It's WID not WIF?
It is Workload Identity Federation (WIF). Entra Workload ID (WID) is the authentication suite that includes other types of authentication.
### User Stories

- As an administrator, I want to be able to forward logs from my OpenShift cluster to Azure Monitor Logs using federated tokens, removing the need for long-lived, static credentials.
Is "Azure Monitor Logs" a product name that is agnostic of the API (e.g. Log Ingestion API)?
From the Log Ingestion API docs:

> The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace.

From the Log Analytics workspace docs:

> A Log Analytics workspace is a data store into which you can collect any type of log data from all of your Azure and non-Azure resources and applications.

From the Azure Monitor Logs docs:

> Azure Monitor Logs is a centralized software as a service (SaaS) platform for collecting, analyzing, and acting on telemetry data generated by Azure and non-Azure resources and applications.
> You can collect logs, manage log data and costs, and consume different types of data in one Log Analytics workspace, the primary Azure Monitor Logs resource.
> Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is Azure Monitor Metrics, which stores numeric data in a time-series database and has its own metric collection API.

In short, no, it is not agnostic of the Log Ingestion API, as that API is specific to the Azure Monitor Logs platform.
// +kubebuilder:validation:Optional
// +kubebuilder:validation:XValidation:rule="isURL(self)", message="invalid URL"
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Log Ingestion Endpoint",xDescriptors={"urn:alm:descriptor:com.tectonic.ui:text"}
LogIngestionEndpoint string `json:"logIngestionEndpoint,omitempty"`
Consider renaming to URL to be consistent with other output types
Does this need to be in the azure-specific section? The general pattern is to use the outputs.url unless there is output-specific endpoint data that can't be expressed in a URL.
Yes this will be renamed to be more in line with existing outputs.
2. Conditionally project the service account token if the type is `workloadIdentity`.
3. Create the collector configuration with required fields for the Log Ingestion API along with the path to the projected service account token.

### Proposed API
We should identify the fields which are deprecated. Additionally, if we have not, we need to figure out what the 'cut over' behavior is. Depending upon when we take this feature up, it may mean we are supporting both Azure APIs for a short period of time.
Basically ALL fields will be deprecated. Specifically we don't need:

- `customerId`
- `azureResourceId`
- `host`
- `logType`

We could reuse `LogType`, but we should probably align the field name with the Log Ingestion API's stream name. With the upstream PR, we can support both APIs until the data collector API is retired. We could also reuse `host` as the endpoint.
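To make the cut-over concrete, here is a rough, hypothetical sketch of the existing output next to a possible replacement. The new output type name and its fields are purely illustrative, pieced together from the proposals discussed in this PR, not a settled API:

```yaml
# Hypothetical comparison only; "azureLogIngestion" and its field names
# are placeholders for whatever the final design settles on.
outputs:
  - name: azure-old                 # existing output (HTTP data collector API)
    type: azureMonitor
    azureMonitor:
      customerId: "11111111-2222-3333-4444-555555555555"
      logType: myLogType
  - name: azure-new                 # sketch of a Log Ingestion API output
    type: azureLogIngestion
    azureLogIngestion:
      logIngestionEndpoint: https://my-dce.westus-1.ingest.monitor.azure.com
      streamName: Custom-MyTable_CL
```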
> Basically ALL fields will be deprecated.

Given the output is fundamentally deprecated, then IMO we should plan to design and implement a completely different output type.
- `TenantId` and `ClientId` can be found from the generated [CCO utility secret](#cco-utility-secret) when Openshift is set up for [workload identity for Azure](https://github.com/openshift/cloud-credential-operator/blob/9c3346aea5a7f9a38713c09d11605b8ee825446c/docs/azure_workload_identity.md).

#### Additional configuration fields for the Azure Logs Ingestion sink to `Vector` API
Are these in addition to those for the upstream PR?
Yes. But we might not need `credential_kind` if we update to the latest `azure_identity` SDK, `v0.23.0`.
azure_tenant_id: 11111111-2222-3333-4444-555555555555
azure_region: westus
azure_subscription_id: 11111111-2222-3333-4444-555555555555
azure_federated_token_file: /path/to/serviceaccount/token
Similar to what is required for cloudwatch, I believe we need to dictate this path. I believe we are stating it as a hard requirement of an explicit path and that it CAN NOT BE anything else. I think that is where we landed in our discussions.
Yes in the API proposal, we can use the projected service account token which is mounted differently from the above. This is just showing the credential secret that is created from the CCO utility
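For illustration, the projected token mentioned here would use the standard Kubernetes `serviceAccountToken` volume projection. A minimal sketch (the audience is the conventional one for Azure workload identity federation; the mount path and expiry are assumptions, not the exact values CLO uses):

```yaml
# Sketch of a projected service account token volume; values are illustrative.
volumes:
  - name: azure-federated-token
    projected:
      sources:
        - serviceAccountToken:
            audience: api://AzureADTokenExchange   # standard Azure WIF audience
            expirationSeconds: 3600
            path: token
volumeMounts:
  - name: azure-federated-token
    mountPath: /var/run/ocp-collector/serviceaccount
    readOnly: true
```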
- Add a patch to `azure_identity` crate to allow setting `client_id`, `tenant_id`, etc. instead of relying on environment variables for workload identity credentials until Vector updates the crate version. See [implementation details](#implementation-detailsnotesconstraints).

### Risks and Mitigations
We have a risk that is more about schedule: the availability of the Rust runtime. We can only update our base Vector version once we have the runtime available for productization.
## Design Details

### Graduation Criteria
Do we have concerns about stability or usability which would require us to consider initial release as non-GA?
No. It is straightforward and clear cut. As long as the user sets up azure logs correctly, it is just a RESTful API call. The call will only fail because of incorrect information which would fall on the user.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
//
// +kubebuilder:validation:Required
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Stream Name",xDescriptors={"urn:alm:descriptor:com.tectonic.ui:text"}
StreamName string `json:"streamName"`
Should this allow templating? This sounds like it is the "tenant" which could be a dynamic value? Maybe there is a concern with "grouping" here and the way batches of records are submitted that would make it not feasible to support a template...
You could template this but it wouldn't be ideal. The stream isn't something that is created on the fly. You can think of the `streamName` as an existing table within Azure. It must be defined and exist in order for logs to be able to flow into it.

If we template this, users must ensure the table exists within Azure, otherwise they wouldn't get the logs.

So if they want to template `{.log_type}-{.namespace_name}-my-azure-stream`, they must ensure all possible renditions of `log_type` and `namespace_name`, depending on what they want to collect, exist within Azure. E.g. `application-ns1-my-azure-stream`, `application-ns2-my-azure-stream`, `infrastructure-default-my-azure-stream`, etc.
This can be a future enhancement if needed. I wonder if we need a validation to restrict templating? I guess it depends what precedence we have
I don't think we need validation. We can just tell users that templating is not supported. I also think it depends on upstream Vector's sink and their configurable fields if it allows templating. If it doesn't it would just throw an error.
type AzureLogIngestionAuth struct {
// ClientId points to the secret containing the client ID used for authentication.
//
// This is the application ID that's assigned to your app. You can find this information in the portal where you registered your app.
Stray comment?
No, this is the definition of the `ClientId`.
// TenantId points to the secret containing the tenant ID used for authentication.
//
// The directory tenant the application plans to operate against, in GUID or domain-name format.
Might be a place where we can add validation?
What kind of validation should we add here? This is information obtained from Azure.
> GUID or domain-name format

I think we can validate these, though if we don't have precedence then I think it is reasonable to skip. If you are not getting your logs, maybe you need to look closer at auth.
I think this is more like, did you copy/paste the right information. It's like putting in a password, if you type it wrong it won't be valid. If the GUID or domain-name is wrong, it doesn't matter if it conforms to the right pattern, it is still wrong.
Authentication

```Go
type AzureLogIngestionAuth struct {
```
Pattern elsewhere is to include an auth type enum so we can ensure there is only one defined or is it possible to not define secret or client?
I initially had an enum for the type of auth, but this is where @cahartma had some comments about not including an auth type and having the client figure it out based on the provided information (unification of cloud output auths).

Here the `ClientSecret` is the long-lived "password" while the `Token` is the path to a short-lived token. So you can either provide a `ClientSecret` or a `Token`. It could be possible to do both and have the `azure_identity` crate create a credential flow where it tries credentials in order, e.g. try `ClientSecret` first, fall back on `Token`, etc.

I was thinking you can define either one and have Vector use the correct credential flow without providing an explicit type.
> having the client figure it out based on the provided information.

Is this a valid scenario where an admin could/would define both? In other auth scenarios, do you usually define token and username/password? I would argue you do not. If we are not going to define a type enum, then we should at least specify an order of precedence, if that is something we can control and enforce, assuming this is a valid use case.
Agreed. I think precedence is a good way to restrict it. Maybe token first, then clientSecret.
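As a sketch of that precedence idea (purely illustrative; none of these field names are final, and the secret references are placeholders), a type-less auth block might look like:

```yaml
# Illustrative only: no explicit auth type. If both credentials are set,
# the collector would try the short-lived token first, then the client secret.
azureLogIngestion:
  authentication:
    clientId:
      secretName: azure-credentials
      key: azure_client_id
    tenantId:
      secretName: azure-credentials
      key: azure_tenant_id
    token:                       # short-lived: projected SA token (preferred)
      from: serviceAccount
    clientSecret:                # long-lived fallback
      secretName: azure-credentials
      key: azure_client_secret
```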
### Implementation Details/Notes/Constraints

1. Relies on [this upstream vector PR](https://github.com/vectordotdev/vector/pull/22912) to implement the Azure Log Ingestion sink utilizing the Log Ingestion API. This is a separate sink from the data collector API. We can support both APIs while transitioning.
2. Current `master` branch of [upstream Vector](https://github.com/vectordotdev/vector), `>=v0.46.1`, as of 04/29/2025, utilizes `[email protected]` which relies solely on environment variables for workload identity credentials and will not be sufficient when forwarding logs to multiple different Azure sinks. The aforementioned PR relies on `[email protected]`.
It looks like we identify the same dependency version
- `[email protected]` allows for setting `client_id`, `tenant_id`, etc. for authentication. | ||
- [Workload Identity Credentials [email protected] SDK Ref](https://github.com/Azure/azure-sdk-for-rust/blob/azure_identity%400.23.0/sdk/identity/azure_identity/src/credentials/workload_identity_credentials.rs) | ||
- `v0.23.0` can also utilize the [ChainedTokenCredential](https://github.com/Azure/azure-sdk-for-rust/blob/azure_identity%400.23.0/sdk/identity/azure_identity/src/chained_token_credential.rs) struct which provides a user-configurable `TokenCredential` authentication flow for applications. | ||
3. Additional fields will be required in sink configuration in upstream Vector's API. See [proposed API](#proposed-api) above. |
Have you considered mentioning the dependency and additional config upstream to see if it can be added as part of the initial impl? It may be worth trying.
I can ask about it in the upstream to see if they can update it.
#### Constraints

- As of `v6.2`, CLO relies on `v0.37.0` of OpenShift Vector. OpenShift's Vector will have to be upgraded; however, the upgrade is currently blocked by the Rust version for RHEL.
I believe it's 0.37.1, but it may not really matter.
```
azure_subscription_id: 11111111-2222-3333-4444-555555555555
azure_federated_token_file: /var/run/secrets/openshift/serviceaccount/token
```
- The `azure_federated_token_file` cannot be used because the CLO projects the service account token to a custom path (e.g. `/var/run/ocp-collector/serviceaccount/token`).
I wonder if there is something we can do to make these the same so we can use it..
We would have to project the token to the path specified in the secret since this key/value is automatically generated by the CCO util
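In other words (a sketch only, assuming a standard serviceAccountToken volume mount), the collector's projection would have to land the token at the exact path the CCO-generated secret specifies, so the generated value can be used unchanged:

```yaml
# Sketch: mount the projected token at the path the CCO-generated secret
# dictates, instead of CLO's current custom path, so that
#   azure_federated_token_file: /var/run/secrets/openshift/serviceaccount/token
# can be consumed as-is.
volumeMounts:
  - name: azure-federated-token
    mountPath: /var/run/secrets/openshift/serviceaccount
    readOnly: true
```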
1. Do we also want to implement long-lived credential support using the Log Ingestion API?
   - We will most likely implement long-lived tokens at the same time in the form of client secrets.
2. Do we want to start deprecating the fields for the HTTP data collector API?
If we don't already then maybe it is worth adding a note to the output type and a link to the feature request?
Do you mean adding a note to the output type above? I think we can add a note about its deprecation to the existing output and let users know to use the new `AzureLogIngestion` output.
No I meant create a PR against CLO and add the deprecated hints and a link to the epic or feature JIRA issue
Description
This PR adds an enhancement proposal to enable Workload Identity authentication for Azure Monitor Logs using Microsoft Entra ID.
/cc @cahartma @vparfonov
/assign @jcantrill @alanconway
Links
JIRA: https://issues.redhat.com/browse/LOG-4782