How to configure multiple binders to different clusters when one cluster is SSL-secured and another is plaintext-open #260


Open
maistrovyi opened this issue Jan 17, 2025 · 2 comments


maistrovyi commented Jan 17, 2025

Hey, I'm trying to configure multiple binders to:

  • Kafka cluster №1 (let's say kafka-secured) that is secured with SSL
  • Kafka cluster №2 (let's say kafka-open) that is simply available via plaintext

My application.yml:

spring:
  application:
    name: local
  cloud:
    function:
      definition: securedConsumer;openConsumer
    stream:
      bindings:
        openConsumer-in-0:
          binder: kafka-open
          destination: "some-topic-in-kafka-open"
          consumer:
            concurrency: 1
            use-native-decoding: true
        securedConsumer-in-0:
          binder: kafka-secured
          destination: "some-topic-in-kafka-secured"
          consumer:
            concurrency: 1
            use-native-decoding: true
      binders:
        kafka-open:
          type: kstream
          default-candidate: true
          inherit-environment: false
          environment:
            spring.cloud.stream.kafka.streams:
              binder:
                brokers: "some-open-kafka-host:9092"
                deserialization-exception-handler: logAndContinue
                auto-create-topics: false
                auto-add-partitions: false
                configuration:
                  security.protocol: PLAINTEXT
              bindings:
                openConsumer-in-0:
                  consumer:
                    startOffset: latest
                    application-id: "open-consumer-application-id"
                    key-serde: 'org.apache.kafka.common.serialization.Serdes$VoidSerde'
                    value-serde: 'org.apache.kafka.common.serialization.Serdes$StringSerde'
        kafka-secured:
          type: kstream
          default-candidate: false
          inherit-environment: false
          environment:
            spring.cloud.stream.kafka.streams:
              binder:
                brokers: "some-secured-kafka-host:9092"
                deserialization-exception-handler: logAndContinue
                auto-create-topics: false
                auto-add-partitions: false
                configuration:
                  security.protocol: SSL
                  ssl:
                    truststore:
                      location: "some.truststore.jks"
                      password: "some-pass"
                    keystore:
                      location: "some.keystore.jks"
                      password: "some-pass"
              bindings:
                securedConsumer-in-0:
                  consumer:
                    startOffset: latest
                    application-id: "secured-consumer-application-id"
                    key-serde: 'org.apache.kafka.common.serialization.Serdes$VoidSerde'
                    value-serde: 'org.apache.kafka.common.serialization.Serdes$StringSerde'
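
For reference, the two functional beans referenced by spring.cloud.function.definition are not shown in this report; below is a minimal Java sketch of what they would typically look like with the Kafka Streams binder (the bean names match the definition above, the Serde-driven types are assumptions):

import java.util.function.Consumer;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamConsumers {

    // Bound to openConsumer-in-0, i.e. the kafka-open binder.
    @Bean
    public Consumer<KStream<Void, String>> openConsumer() {
        return stream -> stream.foreach((key, value) -> System.out.println("open: " + value));
    }

    // Bound to securedConsumer-in-0, i.e. the kafka-secured binder.
    @Bean
    public Consumer<KStream<Void, String>> securedConsumer() {
        return stream -> stream.foreach((key, value) -> System.out.println("secured: " + value));
    }
}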

So the main problem is that the SSL config is affecting the kafka-open streams; it looks like the multiple binder configurations are merged into a single one.

Checked on Spring Boot 2.7.18 and 3.4.1.

@sobychacko (Contributor) commented:

Looks like you are using Kafka Streams. When using Kafka Streams, multiple binders set up the way you describe are not supported. For example, you cannot consume from one cluster and publish to another cluster; Kafka Streams does not allow that.
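
To illustrate that limitation with a hypothetical processor (not from this issue): if the input and output bindings of a function like the one below pointed at different clusters, a single Kafka Streams topology could not serve them, because one KafkaStreams instance connects to one bootstrap.servers set.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CrossClusterExample {

    // Hypothetical processor: consumes on crossCluster-in-0 and produces on
    // crossCluster-out-0. Pointing those two bindings at different clusters
    // is the case that Kafka Streams cannot support.
    @Bean
    public Function<KStream<Void, String>, KStream<Void, String>> crossCluster() {
        return stream -> stream.mapValues(String::toUpperCase);
    }
}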

@maistrovyi (Author) commented:

@sobychacko Hey, thanks for your response.

Yes, I'm using Kafka Streams, and consumers only. This config actually works when I use two different clusters with no auth (PLAINTEXT).

And again, based on your comment: why do we need multi-binder support if we can't connect to different clusters? All the info/docs I found say it should work like this.
