Hi,
We are running a Kafka cluster on Kubernetes with mutual TLS enabled. Producer and consumer apps can connect and read/write to the Kafka topics with mutual TLS authentication using these certificates. The keystore and truststore certificates are in PEM format and are not encrypted.
We tried to deploy Confluent Schema Registry with all the SSL configs and TLS certificates, but Schema Registry is not able to read the private key. We tried different deployment configurations, referencing the certificates directly from Kubernetes Secrets as well as passing them inline as strings, but none of them work. Below are the Kafka and Schema Registry versions and the deployment details:
Kafka version: 3.9.0
Confluent Schema Registry version: 7.9.0 and 7.6.5
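For context, our understanding is that the cp-schema-registry image turns SCHEMA_REGISTRY_KAFKASTORE_* environment variables into kafkastore.* properties, which the embedded Kafka client then reads as the corresponding ssl.* settings (PEM support per KIP-651). Below is a minimal sketch of the inline-PEM variant we are aiming for, annotated with what each value is expected to contain; the secret names kafka-user and kafka-tls-cert are from our setup:

# kafkastore.ssl.keystore.type / kafkastore.ssl.truststore.type
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_TYPE
  value: PEM
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_TYPE
  value: PEM
# kafkastore.ssl.keystore.key: the client private key as one PKCS#8 PEM block
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_KEY
  valueFrom:
    secretKeyRef:
      name: kafka-user
      key: user.key
# kafkastore.ssl.keystore.certificate.chain: the client certificate (plus any intermediates)
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_CERTIFICATE_CHAIN
  valueFrom:
    secretKeyRef:
      name: kafka-user
      key: user.crt
# kafkastore.ssl.truststore.certificates: the CA certificate(s) that signed the broker certs
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_CERTIFICATES
  valueFrom:
    secretKeyRef:
      name: kafka-tls-cert
      key: tls.crt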
Deployment config 1: below is the deployment configuration with the certificates referenced directly from the Kubernetes Secret:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "15"
meta.helm.sh/release-name: cp-schema-registry
meta.helm.sh/release-namespace: default
creationTimestamp: "2025-03-24T16:56:53Z"
generation: 15
labels:
app: cp-schema-registry
app.kubernetes.io/managed-by: Helm
chart: cp-schema-registry-0.1.0
heritage: Helm
release: cp-schema-registry
name: cp-schema-registry
namespace: default
resourceVersion: "318053526"
uid: f105c3da-f6ae-4d06-bee5-3880847fd33f
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: cp-schema-registry
release: cp-schema-registry
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/port: "5556"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: cp-schema-registry
release: cp-schema-registry
spec:
containers:
- command:
- java
- -XX:+UnlockExperimentalVMOptions
- -XX:+UseCGroupMemoryLimitForHeap
- -XX:MaxRAMFraction=1
- -XshowSettings:vm
- -jar
- jmx_prometheus_httpserver.jar
- "5556"
- /etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
image: solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
imagePullPolicy: IfNotPresent
name: prometheus-jmx-exporter
ports:
- containerPort: 5556
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/jmx-schema-registry
name: jmx-config
- env:
- name: SCHEMA_REGISTRY_HOST_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SCHEMA_REGISTRY_LISTENERS
value: http://0.0.0.0:8081
- name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
value: bootstrap.***:9094
- name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
value: cp-schema-registry
- name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
value: "true"
- name: SCHEMA_REGISTRY_HEAP_OPTS
value: -Xms512M -Xmx512M
- name: JMX_PORT
value: "5555"
- name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
value: SSL
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
value: HTTPS
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_TYPE
value: PEM
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_TYPE
value: PEM
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_CERTIFICATE_CHAIN
valueFrom:
secretKeyRef:
key: user.crt
name: kafka-user
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_KEY
valueFrom:
secretKeyRef:
key: user.key
name: kafka-user
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_CERTIFICATES
valueFrom:
secretKeyRef:
key: tls.crt
name: kafka-tls-cert
image: confluentinc/cp-schema-registry:7.9.0
imagePullPolicy: IfNotPresent
name: cp-schema-registry-server
ports:
- containerPort: 8081
name: schema-registry
protocol: TCP
- containerPort: 5555
name: jmx
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 0
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: cp-schema-registry-jmx-configmap
name: jmx-config
Error reported from the above deployment:
Caused by: org.apache.kafka.common.KafkaException: Failed to create new NetworkClient
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:255)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:190)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:545)
... 4 more
Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: Invalid PEM keystore configs
Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: No matching PRIVATE KEY entries in PEM file
Using log4j config /etc/schema-registry/log4j.properties
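As far as we can tell from the Apache Kafka docs and DefaultSslEngineFactory, "No matching PRIVATE KEY entries in PEM file" is raised when the configured PEM key does not contain a PKCS#8 "-----BEGIN PRIVATE KEY-----" (or "-----BEGIN ENCRYPTED PRIVATE KEY-----") entry, for example when the key is PKCS#1 ("-----BEGIN RSA PRIVATE KEY-----"). It may also be worth checking the generated /etc/schema-registry/schema-registry.properties inside the container to confirm the full multi-line PEM survives the env-var conversion. A hedged sketch of what we believe the keystore key entry needs to look like (user-pkcs8.key is a secret key name we made up for a re-encoded copy of the key):

# ssl.keystore.key must be a single PKCS#8 PEM entry:
#   -----BEGIN PRIVATE KEY-----            accepted (unencrypted)
#   -----BEGIN ENCRYPTED PRIVATE KEY-----  accepted (with a key password)
#   -----BEGIN RSA PRIVATE KEY-----        PKCS#1, not matched by the parser
# A PKCS#1 key can be re-encoded with, for example:
#   openssl pkcs8 -topk8 -nocrypt -in user.key -out user-pkcs8.key
# and stored back into the kafka-user Secret.
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_KEY
  valueFrom:
    secretKeyRef:
      name: kafka-user
      key: user-pkcs8.key   # hypothetical entry holding the re-encoded PKCS#8 key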
==================================================
Deployment config 2: below is the deployment configuration with the keystore certificate passed inline as a string:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "15"
meta.helm.sh/release-name: cp-schema-registry
meta.helm.sh/release-namespace: default
creationTimestamp: "2025-03-24T16:56:53Z"
generation: 15
labels:
app: cp-schema-registry
app.kubernetes.io/managed-by: Helm
chart: cp-schema-registry-0.1.0
heritage: Helm
release: cp-schema-registry
name: cp-schema-registry
namespace: default
resourceVersion: "318053526"
uid: f105c3da-f6ae-4d06-bee5-3880847fd33f
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: cp-schema-registry
release: cp-schema-registry
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/port: "5556"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: cp-schema-registry
release: cp-schema-registry
spec:
containers:
- command:
- java
- -XX:+UnlockExperimentalVMOptions
- -XX:+UseCGroupMemoryLimitForHeap
- -XX:MaxRAMFraction=1
- -XshowSettings:vm
- -jar
- jmx_prometheus_httpserver.jar
- "5556"
- /etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
image: solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
imagePullPolicy: IfNotPresent
name: prometheus-jmx-exporter
ports:
- containerPort: 5556
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/jmx-schema-registry
name: jmx-config
- env:
- name: SCHEMA_REGISTRY_HOST_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SCHEMA_REGISTRY_LISTENERS
value: http://0.0.0.0:8081
- name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
value: bootstrap.**:9094
- name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
value: cp-schema-registry
- name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
value: "true"
- name: SCHEMA_REGISTRY_HEAP_OPTS
value: -Xms512M -Xmx512M
- name: JMX_PORT
value: "5555"
- name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
value: SSL
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
value: HTTPS
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_TYPE
value: PEM
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_TYPE
value: PEM
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION
value: -----BEGIN PRIVATE KEY-----\n-----END PRIVATE KEY-----\n-----BEGIN CERTIFICATE-----\n-----END CERTIFICATE
- name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_CERTIFICATES
valueFrom:
secretKeyRef:
key: tls.crt
name: kafka-tls-cert
image: confluentinc/cp-schema-registry:7.6.5
imagePullPolicy: IfNotPresent
name: cp-schema-registry-server
ports:
- containerPort: 8081
name: schema-registry
protocol: TCP
- containerPort: 5555
name: jmx
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 0
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: cp-schema-registry-jmx-configmap
name: jmx-config
Error reported from the above deployment:
org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.lastModifiedMs(DefaultSslEngineFactory.java:386)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.<init>(DefaultSslEngineFactory.java:351)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedPemStore.<init>(DefaultSslEngineFactory.java:408)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createKeystore(DefaultSslEngineFactory.java:296)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:162)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:147)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:193)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:82)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:120)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:224)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:190)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:545)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:147)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:136)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:561)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:147)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:136)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
Caused by: org.apache.kafka.common.KafkaException: Failed to create new NetworkClient
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:255)
at org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:190)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:545)
... 4 more
Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: Failed to load PEM SSL keystore -----BEGIN PRIVATE KEY
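Our reading of this second error is that the value we put into SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION is being treated as a file path: with PEM keystores, ssl.keystore.location must point to a file on disk, while inline PEM content belongs in ssl.keystore.key and ssl.keystore.certificate.chain. A hedged sketch of the file-based alternative, assuming the Secrets are mounted as volumes and a combined PEM file (private key followed by the certificate chain) is added to the kafka-user Secret under a key we made up, user-combined.pem:

containers:
  - name: cp-schema-registry-server
    env:
      - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_TYPE
        value: PEM
      # ssl.keystore.location for PEM: one file containing the PKCS#8 private key
      # followed by the client certificate chain
      - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION
        value: /mnt/kafka-user/user-combined.pem   # hypothetical combined file
      - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_TYPE
        value: PEM
      - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION
        value: /mnt/kafka-tls/tls.crt
    volumeMounts:
      - name: kafka-user
        mountPath: /mnt/kafka-user
        readOnly: true
      - name: kafka-tls
        mountPath: /mnt/kafka-tls
        readOnly: true
volumes:
  - name: kafka-user
    secret:
      secretName: kafka-user
  - name: kafka-tls
    secret:
      secretName: kafka-tls-cert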
Has anyone faced this issue with Confluent Schema Registry? I was able to find that the mutual TLS with PEM certificates feature has already been implemented in Schema Registry (#2062).