 Details on the https://github.com/lettuce-io/lettuce-core/wiki/Redis-URI-and-connection-details#uri-syntax[Redis URI syntax] can be found in the Lettuce project https://github.com/lettuce-io/lettuce-core/wiki[wiki].
-TLS connection URIs start with `rediss://`. To disable certificate verification for TLS connections use the following property:
+TLS connection URIs start with `rediss://`.
+To disable certificate verification for TLS connections use the following property:
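As an illustration, a TLS connection URI in the Lettuce syntax referenced above might look like the following (hostname, port, and database index are made up for the example):

----
rediss://redis.example.com:6379/0
----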
src/docs/asciidoc/_sink.adoc: 17 additions & 8 deletions
@@ -20,7 +20,8 @@ The {name} guarantees that records from the Kafka topic are delivered at least once.
 [[sink-tasks]]
 === Multiple tasks

-The {name} supports running one or more tasks. You can specify the number of tasks with the `tasks.max` configuration property.
+The {name} supports running one or more tasks.
+You can specify the number of tasks with the `tasks.max` configuration property.

 [[data-structures]]
 === Redis Data Structures
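The `tasks.max` property mentioned in the hunk above is standard Kafka Connect connector configuration. A minimal sketch of a sink connector config using it could look like this (the connector class and topic name are placeholders, not taken from the original document):

----
name=my-redis-sink
connector.class=<sink connector class>
topics=orders
tasks.max=3
----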
@@ -60,7 +61,8 @@ value.converter=<Avro or JSON> <2>
 ----

 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<avro,Avro>> or <<kafka-json,JSON>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<avro,Avro>> or <<kafka-json,JSON>>.
+If value is null the key is https://redis.io/commands/del[deleted].

 ==== String
 Use the following properties to write Kafka records as Redis strings:
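The `key.converter`/`value.converter` callouts in the hunk above map to standard Kafka Connect converter classes. Filled in for a string key and a JSON value, the settings might look like the following sketch (whether to enable schemas depends on your data):

----
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
----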
@@ -73,7 +75,8 @@ value.converter=<string or bytes> <2>
 ----

 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<value-string,String>> or <<value-bytes,bytes>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<value-string,String>> or <<value-bytes,bytes>>.
+If value is null the key is https://redis.io/commands/del[deleted].

 ==== List
 Use the following properties to add Kafka record keys to a Redis list:
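For the String data structure covered in the hunk above, where both key and value may be strings or bytes, one possible converter combination uses the built-in string and byte-array converters (again only an illustrative sketch):

----
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
----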
@@ -90,7 +93,8 @@ redis.push.direction=<LEFT or RIGHT> <3>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to push to the list
 <3> `LEFT`: LPUSH (default), `RIGHT`: RPUSH

-The Kafka record value can be any format. If a value is null then the member is removed from the list (instead of pushed to the list).
+The Kafka record value can be any format.
+If a value is null then the member is removed from the list (instead of pushed to the list).

 ==== Set
 Use the following properties to add Kafka record keys to a Redis set:
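To make callout <3> from the list hunk above concrete: pushing record keys to the tail of the list via RPUSH instead of the default LPUSH would be configured as follows (other required properties omitted):

----
redis.push.direction=RIGHT
----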
@@ -105,7 +109,8 @@ key.converter=<string or bytes> <2>
 <1> <<collection-key,Set key>>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to add to the set

-The Kafka record value can be any format. If a value is null then the member is removed from the set (instead of added to the set).
+The Kafka record value can be any format.
+If a value is null then the member is removed from the set (instead of added to the set).

 ==== Sorted Set
 Use the following properties to add Kafka record keys to a Redis sorted set:
@@ -120,7 +125,8 @@ key.converter=<string or bytes> <2>
 <1> <<collection-key,Sorted set key>>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to add to the set

-The Kafka record value should be `float64` and is used for the score. If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
+The Kafka record value should be `float64` and is used for the score.
+If the score is null then the member is removed from the sorted set (instead of added to the sorted set).

 [[redisjson]]
 ==== JSON
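To make the sorted set behavior described above concrete: for a record with string key `user:1` and `float64` value `42.5`, the connector effectively performs a ZADD with member `user:1` and score `42.5` on the configured sorted set key (the key name below is made up):

----
ZADD my-sorted-set 42.5 user:1
----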
@@ -134,7 +140,8 @@ value.converter=<string or bytes> <2>
 ----

 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<value-string,String>> or <<value-bytes,bytes>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<value-string,String>> or <<value-bytes,bytes>>.
+If value is null the key is https://redis.io/commands/del[deleted].
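As an illustration of the JSON sink described above, a record whose value is the JSON string below would be stored under the record key as a RedisJSON document, roughly equivalent to this command (key and document content are made up):

----
JSON.SET product:1 $ '{"id": 1, "name": "bicycle", "price": 99.99}'
----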
-Multiple data formats are supported for Kafka record values depending on the configured target <<data-structures,Redis data structure>>. Each data structure expects a specific format. If your data in Kafka is not in the format expected for a given data structure, consider using https://docs.confluent.io/platform/current/connect/transforms/overview.html[Single Message Transformations] to convert to a byte array, string, Struct, or map before it is written to Redis.
+Multiple data formats are supported for Kafka record values depending on the configured target <<data-structures,Redis data structure>>.
+Each data structure expects a specific format.
+If your data in Kafka is not in the format expected for a given data structure, consider using https://docs.confluent.io/platform/current/connect/transforms/overview.html[Single Message Transformations] to convert to a byte array, string, Struct, or map before it is written to Redis.
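As an example of the Single Message Transformations suggestion above, the built-in Cast transform can coerce record values to the `float64` expected by the sorted set sink (the transform alias is arbitrary):

----
transforms=castScore
transforms.castScore.type=org.apache.kafka.connect.transforms.Cast$Value
transforms.castScore.spec=float64
----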
src/docs/asciidoc/_source.adoc: 16 additions & 6 deletions
@@ -18,12 +18,15 @@ The {name} guarantees that records from the Kafka topic are delivered at least once.

 [[source-tasks]]
 === Multiple Tasks
-Use configuration property `tasks.max` to have the change stream handled by multiple tasks. The connector splits the work based on the number of configured key patterns. When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.
+Use configuration property `tasks.max` to have the change stream handled by multiple tasks.
+The connector splits the work based on the number of configured key patterns.
+When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.

 //
 //[[key-reader]]
 //=== Key Reader
-//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic. The data structure key will be mapped to the record key, and the value will be mapped to the record value.
+//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
+//The data structure key will be mapped to the record key, and the value will be mapped to the record value.
 //
 //[IMPORTANT]
 //.Supported Data Structures
@@ -41,12 +44,15 @@ Use configuration property `tasks.max` to have the change stream handled by multiple tasks.
 //topic=<topic> <2>
 //----
 //
-//<1> Key portion of the pattern that will be used to listen to keyspace events. For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc. Use comma-separated values for multiple patterns (`foo:*,bar:*`)
+//<1> Key portion of the pattern that will be used to listen to keyspace events.
+//For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
+//Use comma-separated values for multiple patterns (`foo:*,bar:*`)
 //<2> Name of the destination topic.

 [[stream-reader]]
 === Stream Reader
-The {name} reads messages from a stream and publishes to a Kafka topic. Reading is done through a consumer group so that <<source-tasks,multiple instances>> of the connector configured via the `tasks.max` can consume messages in a round-robin fashion.
+The {name} reads messages from a stream and publishes to a Kafka topic.
+Reading is done through a consumer group so that <<source-tasks,multiple instances>> of the connector configured via the `tasks.max` can consume messages in a round-robin fashion.


 ==== Stream Message Schema
@@ -83,5 +89,9 @@ topic=<name> <6>
 <2> https://redis.io/commands/xread#incomplete-ids[Message ID] to start reading from (default: `0-0`).
 <3> Maximum https://redis.io/commands/xread[XREAD] wait duration in milliseconds (default: `100`).
 <4> Name of the stream consumer group (default: `kafka-consumer-group`).
-<5> Name of the stream consumer (default: `consumer-${task}`). May contain `${task}` as a placeholder for the task id. For example, `foo${task}` and task `123` => consumer `foo123`.
-<6> Destination topic (default: `${stream}`). May contain `${stream}` as a placeholder for the originating stream name. For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
+<5> Name of the stream consumer (default: `consumer-${task}`).
+May contain `${task}` as a placeholder for the task id.
+For example, `foo${task}` and task `123` => consumer `foo123`.
+<6> Destination topic (default: `${stream}`).
+May contain `${stream}` as a placeholder for the originating stream name.
+For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
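Behind the scenes, consuming from a stream through a consumer group as described above corresponds to Redis reads along these lines (stream and consumer names are made up; the group name is the documented default):

----
XREADGROUP GROUP kafka-consumer-group consumer-0 BLOCK 100 COUNT 50 STREAMS orders >
----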