[[data-structures]]
=== Redis Data Structures

Record keys and values have different roles depending on the target data structure.

[[collection-key]]
==== Collections

For collections (stream, list, set, sorted set, timeseries) a single key is used which is independent of the record key.

Use the `redis.key` configuration property (default: `${topic}`) to specify a format string for the destination collection, which may contain `${topic}` as a placeholder for the originating topic name.

For example, `kafka_${topic}` for the topic `orders` maps to the Redis key `kafka_orders`.

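
For instance, the mapping above could be configured with a line like the following (a sketch; adjust the prefix to your own naming scheme):

[source,properties]
----
redis.key=kafka_${topic}
----
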
==== Stream

==== Sorted Set

Use the following properties to write Kafka record keys to a sorted set:

[source,properties]
----
redis.type=ZSET
redis.key=<key name> <1>
key.converter=<string or bytes> <2>
----

<1> <<collection-key,Sorted set key>>
<2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to add to the set

The Kafka record value should be `float64` and is used for the score. If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
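
The score-or-removal semantics can be sketched in Python (illustrative only; a plain dict stands in for the Redis sorted set, and `apply_record` is a hypothetical helper, not the connector's code):

[source,python]
----
# Illustrative sketch: a dict maps member -> score, standing in for a
# Redis sorted set. A None score removes the member, mirroring the
# "score or removal if null" behavior described above.
def apply_record(zset, member, score):
    if score is None:
        zset.pop(member, None)       # removal if null
    else:
        zset[member] = float(score)  # member added/updated with its score
    return zset

zset = {}
apply_record(zset, "member-1", 3.5)
print(zset)                          # {'member-1': 3.5}
apply_record(zset, "member-1", None)
print(zset)                          # {}
----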

==== JSON

Use the following properties to write Kafka records as RedisJSON documents:

[source,properties]
----
redis.type=JSON
key.converter=<string or bytes> <1>
value.converter=<string or bytes> <2>
----

<1> <<key-string,String>> or <<key-bytes,bytes>>
<2> <<value-string,String>> or <<value-bytes,bytes>>. If value is null the key is https://redis.io/commands/del[deleted].
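
As an illustration, if both keys and values are already serialized as strings, the placeholders above might be filled in as follows (using Kafka Connect's built-in StringConverter; whether it fits depends on your data):

[source,properties]
----
redis.type=JSON
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
----
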

==== TimeSeries

Use the following properties to write Kafka records as RedisTimeSeries samples:

[source,properties]
----
redis.type=TIMESERIES
redis.key=<key name> <1>
----

<1> <<collection-key,Timeseries key>>

The Kafka record key must be an integer (e.g. `int64`) as it is used for the sample time in milliseconds.

The Kafka record value must be a number (e.g. `float64`) as it is used as the sample value.
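
To illustrate the key and value roles with a hypothetical record (not connector code): the `int64` key is an epoch timestamp in milliseconds and the `float64` value is the sample:

[source,python]
----
from datetime import datetime, timezone

# Hypothetical record: key = sample time (int64, epoch milliseconds),
# value = sample value (float64).
record_key = 1640995200000
record_value = 42.5

sample_time = datetime.fromtimestamp(record_key / 1000, tz=timezone.utc)
print(sample_time.isoformat(), record_value)  # 2022-01-01T00:00:00+00:00 42.5
----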

[[data-formats]]
=== Data Formats

The {name} supports different data formats for record keys and values depending on the target Redis data structure.

==== Kafka Record Keys

The {name} expects Kafka record keys in a specific format depending on the configured target <<data-structures,Redis data structure>>:

[options="header",cols="h,1,1"]
|====
|Target|Record Key|Assigned To
|Stream|Any|None
|Hash|String|Key
|String|<<key-string,String>> or <<key-bytes,bytes>>|Key
|List|<<key-string,String>> or <<key-bytes,bytes>>|Member
|Set|<<key-string,String>> or <<key-bytes,bytes>>|Member
|Sorted Set|<<key-string,String>> or <<key-bytes,bytes>>|Member
|JSON|<<key-string,String>> or <<key-bytes,bytes>>|Key
|TimeSeries|Integer|Sample time in milliseconds
|====

[[key-string]]
===== StringConverter

If record keys are already serialized as strings, use the StringConverter:
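
For example, a minimal sketch of the relevant property, using Kafka Connect's built-in string converter:

[source,properties]
----
key.converter=org.apache.kafka.connect.storage.StringConverter
----
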

==== Kafka Record Values

Multiple data formats are supported for Kafka record values depending on the configured target <<data-structures,Redis data structure>>. Each data structure expects a specific format. If your data in Kafka is not in the format expected for a given data structure, consider using https://docs.confluent.io/platform/current/connect/transforms/overview.html[Single Message Transformations] to convert to a byte array, string, Struct, or map before it is written to Redis.

[options="header",cols="h,1,1"]
|====
|Target|Record Value|Assigned To
|Stream|<<avro,Avro>> or <<json,JSON>>|Message body
|Hash|<<avro,Avro>> or <<json,JSON>>|Fields
|String|<<value-string,String>> or <<value-bytes,bytes>>|Value
|List|Any|Removal if null
|Set|Any|Removal if null
|Sorted Set|Number|Score or removal if null
|JSON|<<value-string,String>> or <<value-bytes,bytes>>|Value
|TimeSeries|Number|Sample value
|====

[[value-string]]
===== StringConverter

If record values are already serialized as strings, use the StringConverter to store values in Redis as strings:
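
For example, a minimal sketch of the relevant property, using Kafka Connect's built-in string converter:

[source,properties]
----
value.converter=org.apache.kafka.connect.storage.StringConverter
----
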

[[value-bytes]]
===== ByteArrayConverter

Use the byte array converter to store the binary serialized form (for example, JSON, Avro, Strings, etc.) of the Kafka record values in Redis as byte arrays:
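
For example, a minimal sketch of the relevant property, using Kafka Connect's built-in byte array converter:

[source,properties]
----
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
----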