Replace deprecated methods to support Kafka 4.0.0 #2254
Conversation
Signed-off-by: ShubhamRwt <[email protected]>
Hey @CCisGG, could you take a look at this one? It is a small change that future-proofs the Cruise Control metrics reporter so it can run on brokers using Kafka 4.0.
Looks good. Thanks! One minor question: will people using an older Kafka version be affected by this change?
The replaced methods have been deprecated since Kafka 3.1.0, and the new methods were also introduced in Kafka 3.1.0. So it would only be a problem if brokers are running a Kafka version older than 3.1.0.
+1 Here is a reference to the deprecated methods in the Kafka 3.1 API docs [1]
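For context, the change amounts to swapping one DescribeTopicsResult accessor for the other; a minimal sketch of the two calls (not the exact PR diff, names are illustrative):

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;

class ApiSwapSketch {
  static Map<String, KafkaFuture<TopicDescription>> describe(AdminClient adminClient, String topic) {
    DescribeTopicsResult result = adminClient.describeTopics(Collections.singletonList(topic));
    // Deprecated since Kafka 3.1.0 and removed in 4.0.0:
    // return result.values();
    // Replacement, available since Kafka 3.1.0:
    return result.topicNameValues();
  }
}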
Oops. We are internally still using Kafka 3.0, so this will break our internal build. Can we choose the API based on the Kafka version?
We are internally still using Kafka 3.0, so this will break our internal build. Can we choose the API based on the Kafka version?
Thanks for the info @CCisGG, I will add the corresponding changes to the PR
Signed-off-by: ShubhamRwt <[email protected]>
Hi @ShubhamRwt thanks for the change!
However, I think this won't work, because the compiler still sees both method calls at compile time and one of the methods won't exist. I think it will cause build errors.
I think you need to use reflection and "invoke" the method calls instead of calling these methods directly. Both the old method and the new method might need to be called via "invoke".
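Roughly, a reflective call of that sort could look like the following (a minimal sketch, assuming the unchecked cast is acceptable; error handling is elided and names are illustrative):

import java.lang.reflect.Method;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.KafkaFuture;

class ReflectiveDescribeSketch {
  @SuppressWarnings("unchecked")
  static Map<String, KafkaFuture<TopicDescription>> describe(AdminClient adminClient, String topic) throws Exception {
    DescribeTopicsResult result = adminClient.describeTopics(Collections.singletonList(topic));
    Method method;
    try {
      // Kafka 3.1 and newer (including 4.x): topicNameValues()
      method = DescribeTopicsResult.class.getMethod("topicNameValues");
    } catch (NoSuchMethodException e) {
      // Kafka 3.0 and older: fall back to values()
      method = DescribeTopicsResult.class.getMethod("values");
    }
    // Neither accessor is referenced directly, so the class compiles and links against old and new clients alike.
    return (Map<String, KafkaFuture<TopicDescription>>) method.invoke(result);
  }
}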
Signed-off-by: ShubhamRwt <[email protected]>
Nice work @ShubhamRwt, I just took a quick pass. I was thinking it might be easier/neater if we organize the DescribeTopicsResult method detection logic into its own method in the CruiseControlMetricsReporter class; we could call it something like "topicNameValuesMethod()".
I left more specific details in the code comments, let me know what you think!
// Starting with Kafka 4.0.0, the deprecated method "values()" is completely removed from "org.apache.kafka.clients.admin.DescribeTopicsResult",
// so we have to use the new method "topicNameValues()".
// To make sure that the internal build passes, this logic is introduced to choose the API based on the Kafka version.
Method topicDescriptionMethod = null;
I would do the same for the tests: we can separate the DescribeTopicsResult method detection logic into its own method, topicNameValuesMethod(), in the CruiseControlMetricsReporter class, then call topicNameValuesMethod() in our tests to save us the extra lines.
- Method topicDescriptionMethod = null;
+ Method topicDescriptionMethod = topicNameValuesMethod();
if (topicDescription.partitions().size() < _metricsTopic.numPartitions()) {
    _adminClient.createPartitions(Collections.singletonMap(cruiseControlMetricsTopic,
        NewPartitions.increaseTo(_metricsTopic.numPartitions())));
Method topicDescriptionMethod = null;
It may be worth separating out the DescribeTopicsResult method detection into a separate method. This would keep the focus of the maybeIncreaseTopicPartitionCount() method on the original logic.
We could separate the DescribeTopicsResult method detection logic out like this:
/**
 * Attempts to retrieve the method for mapping topic names to futures from the {@link org.apache.kafka.clients.admin.DescribeTopicsResult} class.
 * This method first tries to get the {@code topicNameValues()} method, which is available in Kafka 3.1 or later.
 * If the method is not found, it falls back to trying to retrieve the {@code values()} method, which is the only option on Kafka 3.0 or earlier.
 *
 * If neither of these methods is found, a {@link RuntimeException} is thrown.
 *
 * <p>This method is useful for ensuring compatibility with both older and newer versions of Kafka clients.</p>
 *
 * @return the {@link Method} object representing the {@code topicNameValues()} or {@code values()} method.
 * @throws RuntimeException if neither the {@code values()} nor the {@code topicNameValues()} method is found.
 */
/* test */ static Method topicNameValuesMethod() {
  Method topicDescriptionMethod = null;
  try {
    // First we try to get the topicNameValues() method, introduced in Kafka 3.1.0
    topicDescriptionMethod = Class.forName("org.apache.kafka.clients.admin.DescribeTopicsResult").getMethod("topicNameValues");
  } catch (ClassNotFoundException | NoSuchMethodException exception) {
    LOG.info("Failed to get method topicNameValues() from DescribeTopicsResult class since we are probably on Kafka 3.0.0 or older: ", exception);
  }
  if (topicDescriptionMethod == null) {
    try {
      // Second we try to get the values() method, removed in Kafka 4.0.0
      topicDescriptionMethod = Class.forName("org.apache.kafka.clients.admin.DescribeTopicsResult").getMethod("values");
    } catch (ClassNotFoundException | NoSuchMethodException exception) {
      LOG.info("Failed to get method values() from DescribeTopicsResult class since we are probably on Kafka 4.0.0 or newer: ", exception);
    }
  }
  if (topicDescriptionMethod != null) {
    return topicDescriptionMethod;
  } else {
    throw new RuntimeException("Unable to find either the values() or the topicNameValues() method in the DescribeTopicsResult class");
  }
}
Then call it like this:
- Method topicDescriptionMethod = null;
+ Method topicDescriptionMethod = topicNameValuesMethod();
Signed-off-by: ShubhamRwt <[email protected]>
Signed-off-by: ShubhamRwt <[email protected]>
Signed-off-by: ShubhamRwt <[email protected]>
Signed-off-by: ShubhamRwt <[email protected]>
Thanks for the updates @ShubhamRwt, this is looking better. I have left a little more feedback if you are interested; let me know what you think!
Map<String, KafkaFuture<TopicDescription>> topicDescriptionMap = (Map<String, KafkaFuture<TopicDescription>>) topicDescriptionMethod
    .invoke(describeTopicsResult);

TopicDescription topicDescription = topicDescriptionMap.get(cruiseControlMetricsTopic).get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
We could also abstract the above lines into a getTopicDescription() method to simplify the code here and in the tests even further. If you want to go that route, you could separate the code out like this:
/**
 * Retrieves the {@link TopicDescription} for the specified Kafka topic, handling compatibility
 * with Kafka versions 4.0 and above. This method uses reflection to invoke the appropriate method
 * for retrieving topic description information, depending on the Kafka version.
 *
 * @param _adminClient The Kafka {@link AdminClient} used to interact with the Kafka cluster.
 * @param cruiseControlMetricsTopic The name of the Kafka topic for which the description is to be retrieved.
 *
 * @return The {@link TopicDescription} for the specified Kafka topic.
 *
 * @throws KafkaTopicDescriptionException If an error occurs while retrieving the topic description,
 *         or if the topic name retrieval method cannot be found or invoked properly. This includes
 *         reflection and invocation issues, execution exceptions, timeouts, and interruptions.
 */
/* test */ static TopicDescription getTopicDescription(AdminClient _adminClient, String cruiseControlMetricsTopic) throws KafkaTopicDescriptionException {
  try {
    // For compatibility with Kafka 4.0 and beyond we must use the new API methods.
    Method topicDescriptionMethod = topicNameValuesMethod();
    DescribeTopicsResult describeTopicsResult = _adminClient.describeTopics(Collections.singletonList(cruiseControlMetricsTopic));
    Map<String, KafkaFuture<TopicDescription>> topicDescriptionMap = (Map<String, KafkaFuture<TopicDescription>>) topicDescriptionMethod
        .invoke(describeTopicsResult);
    return topicDescriptionMap.get(cruiseControlMetricsTopic).get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
  } catch (InvocationTargetException | IllegalAccessException | ExecutionException | InterruptedException | TimeoutException e) {
    throw new KafkaTopicDescriptionException(String.format("Unable to retrieve description of Cruise Control metrics topic %s.", cruiseControlMetricsTopic), e);
  }
}
Note that if you did this, it would probably be a good idea to create a custom exception class in the cruisecontrol.metricsreporter/exception folder so you don't have to add all the exception classes to the method signatures where this method is called.
public class KafkaTopicDescriptionException extends Exception {
    public KafkaTopicDescriptionException(String message, Throwable cause) {
        super(message, cause);
    }
}
This way, you can just call getTopicDescription() whenever you need the topic description for a topic and not worry about looking up and invoking the right method at each call site; that would all be handled in this one method in the CruiseControlMetricsReporter class.
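A call site would then reduce to roughly the following (an illustrative fragment; the surrounding maybeIncreaseTopicPartitionCount() logic is simplified and the names mirror the sketches above):

// Hypothetical call site inside CruiseControlMetricsReporter, reusing the helper sketched above.
try {
  TopicDescription topicDescription = getTopicDescription(_adminClient, cruiseControlMetricsTopic);
  if (topicDescription.partitions().size() < _metricsTopic.numPartitions()) {
    _adminClient.createPartitions(Collections.singletonMap(cruiseControlMetricsTopic,
        NewPartitions.increaseTo(_metricsTopic.numPartitions())));
  }
} catch (KafkaTopicDescriptionException e) {
  LOG.warn("Unable to describe Cruise Control metrics topic {}.", cruiseControlMetricsTopic, e);
}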
I was looking into applying this suggestion but there is one more thing: the call made in CruiseControlMetricsReporter is
return topicDescriptionMap.get(cruiseControlMetricsTopic).get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
while in the tests we don't use the get method with the CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS parameters, just topicDescriptionMap.get(TOPIC).get().
Does the inclusion of .get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS) interfere with the tests? From what I understand, it still returns the TopicDescription object we need.
Even if the inclusion of .get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS) did cause any problems in the tests, couldn't we remove it from the getTopicDescription() method itself and append it at the places where we need it?
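One way to keep both options open, sketched under the same assumptions as the snippets above (the topicDescriptionFuture() helper and its future-returning design are hypothetical, not part of the PR), is to return the KafkaFuture and let each call site decide how to wait:

// Hypothetical variant: return the future instead of the resolved TopicDescription.
/* test */ static KafkaFuture<TopicDescription> topicDescriptionFuture(AdminClient adminClient, String topic)
    throws KafkaTopicDescriptionException {
  try {
    Method topicDescriptionMethod = topicNameValuesMethod();
    DescribeTopicsResult describeTopicsResult = adminClient.describeTopics(Collections.singletonList(topic));
    @SuppressWarnings("unchecked")
    Map<String, KafkaFuture<TopicDescription>> topicDescriptionMap =
        (Map<String, KafkaFuture<TopicDescription>>) topicDescriptionMethod.invoke(describeTopicsResult);
    return topicDescriptionMap.get(topic);
  } catch (InvocationTargetException | IllegalAccessException e) {
    throw new KafkaTopicDescriptionException(String.format("Unable to describe topic %s.", topic), e);
  }
}

// The reporter can then wait with a timeout, while the tests can block without one:
// topicDescriptionFuture(_adminClient, cruiseControlMetricsTopic).get(CLIENT_REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
// topicDescriptionFuture(_adminClient, TOPIC).get();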
Signed-off-by: ShubhamRwt <[email protected]>
Looking even better @ShubhamRwt, I left one last comment about creating a getTopicDescription() method to further reduce lines, let me know what you think!
Signed-off-by: ShubhamRwt <[email protected]>
Sorry for the drive-by comment here, but does this get us compatibility with 3.9 as well? I imagine there will be lots of teams upgrading older versions of Kafka to 3.9 before making the switch to 4.0, so 3.9 compatibility feels quite important.
@shaneikennedy This PR is mostly to make sure that the Cruise Control metrics reporter can run on brokers using Kafka 4.0, since it uses multiple deprecated methods that are removed in Kafka 4.0.0. (The metrics reporter works correctly with 3.9.0 since the deprecated methods are not deleted in 3.9.0.)
Gotcha, thanks for the info 🙌
Looks great @ShubhamRwt!
@CCisGG Could you take another pass when you get a chance?
LGTM. Thanks @kyguy
Hi @CCisGG, can you help me understand the release cadence of Cruise Control? Are we going to have a release coming out in the next few days? We were planning to ship the new release of Cruise Control with the Strimzi 0.46.0 release, which is going to use Kafka 4.0.0, since it will contain the fix [1] for the issues we are currently having with Kafka 4.0.0. [1] #2254
@ShubhamRwt it's usually on demand. I'll make a new release soon.
Thanks a lot @CCisGG