
Commit 1d51ce7

Julien Ruaux committed: Formatted asciidoc

1 parent 7d17df5 commit 1d51ce7

File tree

3 files changed, +35 -15 lines changed

src/docs/asciidoc/_connect.adoc

Lines changed: 2 additions & 1 deletion
@@ -14,7 +14,8 @@ redis.uri=redis://redis-12000.redis.com:12000
 
 Details on the https://github.com/lettuce-io/lettuce-core/wiki/Redis-URI-and-connection-details#uri-syntax[Redis URI syntax] can be found in the Lettuce project https://github.com/lettuce-io/lettuce-core/wiki[wiki].
 
-TLS connection URIs start with `rediss://`. To disable certificate verification for TLS connections use the following property:
+TLS connection URIs start with `rediss://`.
+To disable certificate verification for TLS connections use the following property:
 
 [source,properties]
 ----
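For reference, a TLS connection string under the `rediss://` scheme described above would look like the following, reusing the host and port from the plain `redis://` example earlier in this file (a sketch, not part of the commit):

[source,properties]
----
# Same endpoint as the plain-text example, but over TLS (note the double "s")
redis.uri=rediss://redis-12000.redis.com:12000
----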

src/docs/asciidoc/_sink.adoc

Lines changed: 17 additions & 8 deletions
@@ -20,7 +20,8 @@ The {name} guarantees that records from the Kafka topic are delivered at least o
 [[sink-tasks]]
 === Multiple tasks
 
-The {name} supports running one or more tasks. You can specify the number of tasks with the `tasks.max` configuration property.
+The {name} supports running one or more tasks.
+You can specify the number of tasks with the `tasks.max` configuration property.
 
 [[data-structures]]
 === Redis Data Structures
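`tasks.max`, the property this hunk reflows around, is standard Kafka Connect configuration; a minimal sink sketch using it might look like this (the connector class name is an assumption, not something this diff shows):

[source,properties]
----
name=redis-sink
# Assumed class name; check the connector documentation for the real value
connector.class=com.redis.kafka.connect.RedisSinkConnector
topics=orders
# Run up to three sink tasks in parallel
tasks.max=3
----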
@@ -60,7 +61,8 @@ value.converter=<Avro or JSON> <2>
 ----
 
 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<avro,Avro>> or <<kafka-json,JSON>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<avro,Avro>> or <<kafka-json,JSON>>.
+If value is null the key is https://redis.io/commands/del[deleted].
 
 ==== String
 Use the following properties to write Kafka records as Redis strings:
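Callout <2> above carries the key behavioral detail: a null record value turns into a DEL of the key. A sketch of the converter pairing it describes, with `redis.type` as a hypothetical stand-in for whichever property the connector actually uses to select the target structure:

[source,properties]
----
# Hypothetical selector for the target data structure
redis.type=HASH
key.converter=org.apache.kafka.connect.storage.StringConverter
# Structured (Avro or JSON) value; a null value deletes the key
value.converter=org.apache.kafka.connect.json.JsonConverter
----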
@@ -73,7 +75,8 @@ value.converter=<string or bytes> <2>
 ----
 
 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<value-string,String>> or <<value-bytes,bytes>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<value-string,String>> or <<value-bytes,bytes>>.
+If value is null the key is https://redis.io/commands/del[deleted].
 
 ==== List
 Use the following properties to add Kafka record keys to a Redis list:
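The string case reflowed here maps a record to a plain Redis https://redis.io/commands/set[SET], with null again meaning DEL; a sketch under the same hypothetical `redis.type` assumption:

[source,properties]
----
redis.type=STRING
key.converter=org.apache.kafka.connect.storage.StringConverter
# String value written via SET; a null value deletes the key
value.converter=org.apache.kafka.connect.storage.StringConverter
----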
@@ -90,7 +93,8 @@ redis.push.direction=<LEFT or RIGHT> <3>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to push to the list
 <3> `LEFT`: LPUSH (default), `RIGHT`: RPUSH
 
-The Kafka record value can be any format. If a value is null then the member is removed from the list (instead of pushed to the list).
+The Kafka record value can be any format.
+If a value is null then the member is removed from the list (instead of pushed to the list).
 
 ==== Set
 Use the following properties to add Kafka record keys to a Redis set:
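`redis.push.direction` appears verbatim in this hunk, so a list sketch can use it directly; only the `redis.type` selector remains an assumption:

[source,properties]
----
redis.type=LIST
key.converter=org.apache.kafka.connect.storage.StringConverter
# RPUSH instead of the default LPUSH, per callout <3>
redis.push.direction=RIGHT
----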
@@ -105,7 +109,8 @@ key.converter=<string or bytes> <2>
 <1> <<collection-key,Set key>>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to add to the set
 
-The Kafka record value can be any format. If a value is null then the member is removed from the set (instead of added to the set).
+The Kafka record value can be any format.
+If a value is null then the member is removed from the set (instead of added to the set).
 
 ==== Sorted Set
 Use the following properties to add Kafka record keys to a Redis sorted set:
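The set variant works the same way with an add (SADD) in place of a push, and a null value removing the member (SREM); a sketch, again assuming `redis.type`:

[source,properties]
----
redis.type=SET
# Record keys become set members; record values may be any format
key.converter=org.apache.kafka.connect.storage.StringConverter
----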
@@ -120,7 +125,8 @@ key.converter=<string or bytes> <2>
 <1> <<collection-key,Sorted set key>>
 <2> <<key-string,String>> or <<key-bytes,bytes>>: Kafka record keys to add to the set
 
-The Kafka record value should be `float64` and is used for the score. If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
+The Kafka record value should be `float64` and is used for the score.
+If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
 
 [[redisjson]]
 ==== JSON
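For the sorted-set hunk, the `float64` score pairs naturally with Kafka Connect's built-in double converter; a sketch under the same assumptions as above:

[source,properties]
----
redis.type=ZSET
key.converter=org.apache.kafka.connect.storage.StringConverter
# float64 record value supplies the score; a null score removes the member
value.converter=org.apache.kafka.connect.converters.DoubleConverter
----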
@@ -134,7 +140,8 @@ value.converter=<string or bytes> <2>
 ----
 
 <1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<value-string,String>> or <<value-bytes,bytes>>. If value is null the key is https://redis.io/commands/del[deleted].
+<2> <<value-string,String>> or <<value-bytes,bytes>>.
+If value is null the key is https://redis.io/commands/del[deleted].
 
 ==== TimeSeries
 
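The JSON hunk mirrors the string case: the string or byte value is stored as a JSON document, and a null value deletes the key. A sketch:

[source,properties]
----
redis.type=JSON
key.converter=org.apache.kafka.connect.storage.StringConverter
# e.g. value {"name":"Virginia","abbrev":"VA"} stored as a JSON document
value.converter=org.apache.kafka.connect.storage.StringConverter
----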
@@ -193,7 +200,9 @@ key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
 ----
 
 ==== Kafka Record Values
-Multiple data formats are supported for Kafka record values depending on the configured target <<data-structures,Redis data structure>>. Each data structure expects a specific format. If your data in Kafka is not in the format expected for a given data structure, consider using https://docs.confluent.io/platform/current/connect/transforms/overview.html[Single Message Transformations] to convert to a byte array, string, Struct, or map before it is written to Redis.
+Multiple data formats are supported for Kafka record values depending on the configured target <<data-structures,Redis data structure>>.
+Each data structure expects a specific format.
+If your data in Kafka is not in the format expected for a given data structure, consider using https://docs.confluent.io/platform/current/connect/transforms/overview.html[Single Message Transformations] to convert to a byte array, string, Struct, or map before it is written to Redis.
 
 [options="header",cols="h,1,1"]
 |====
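The Single Message Transformations pointer in this hunk can be made concrete with a stock Kafka Connect transform, for instance casting the whole record value to a string before it reaches Redis (a sketch using the built-in Cast transform; nothing in this commit configures it):

[source,properties]
----
transforms=valueToString
transforms.valueToString.type=org.apache.kafka.connect.transforms.Cast$Value
# Cast the entire record value to a string
transforms.valueToString.spec=string
----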

src/docs/asciidoc/_source.adoc

Lines changed: 16 additions & 6 deletions
@@ -18,12 +18,15 @@ The {name} guarantees that records from the Kafka topic are delivered at least o
 
 [[source-tasks]]
 === Multiple Tasks
-Use configuration property `tasks.max` to have the change stream handled by multiple tasks. The connector splits the work based on the number of configured key patterns. When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.
+Use configuration property `tasks.max` to have the change stream handled by multiple tasks.
+The connector splits the work based on the number of configured key patterns.
+When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.
 
 //
 //[[key-reader]]
 //=== Key Reader
-//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic. The data structure key will be mapped to the record key, and the value will be mapped to the record value.
+//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
+//The data structure key will be mapped to the record key, and the value will be mapped to the record value.
 //
 //[IMPORTANT]
 //.Supported Data Structures
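As a worked example of the task-splitting rule in this hunk: with two key patterns configured and `tasks.max=4`, only two tasks actually run, one per pattern. The pattern property name below is an assumption; the diff does not show it:

[source,properties]
----
# Four tasks requested, but two patterns cap the effective count at two
tasks.max=4
# Hypothetical name for the key-pattern property
redis.keys.patterns=foo:*,bar:*
----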
@@ -41,12 +44,15 @@ Use configuration property `tasks.max` to have the change stream handled by mult
 //topic=<topic> <2>
 //----
 //
-//<1> Key portion of the pattern that will be used to listen to keyspace events. For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc. Use comma-separated values for multiple patterns (`foo:*,bar:*`)
+//<1> Key portion of the pattern that will be used to listen to keyspace events.
+//For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
+//Use comma-separated values for multiple patterns (`foo:*,bar:*`)
 //<2> Name of the destination topic.
 
 [[stream-reader]]
 === Stream Reader
-The {name} reads messages from a stream and publishes to a Kafka topic. Reading is done through a consumer group so that <<source-tasks,multiple instances>> of the connector configured via the `tasks.max` can consume messages in a round-robin fashion.
+The {name} reads messages from a stream and publishes to a Kafka topic.
+Reading is done through a consumer group so that <<source-tasks,multiple instances>> of the connector configured via the `tasks.max` property can consume messages in a round-robin fashion.
 
 
 ==== Stream Message Schema
@@ -83,5 +89,9 @@ topic=<name> <6>
 <2> https://redis.io/commands/xread#incomplete-ids[Message ID] to start reading from (default: `0-0`).
 <3> Maximum https://redis.io/commands/xread[XREAD] wait duration in milliseconds (default: `100`).
 <4> Name of the stream consumer group (default: `kafka-consumer-group`).
-<5> Name of the stream consumer (default: `consumer-${task}`). May contain `${task}` as a placeholder for the task id. For example, `foo${task}` and task `123` => consumer `foo123`.
-<6> Destination topic (default: `${stream}`). May contain `${stream}` as a placeholder for the originating stream name. For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
+<5> Name of the stream consumer (default: `consumer-${task}`).
+May contain `${task}` as a placeholder for the task id.
+For example, `foo${task}` and task `123` => consumer `foo123`.
+<6> Destination topic (default: `${stream}`).
+May contain `${stream}` as a placeholder for the originating stream name.
+For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
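Putting callouts <5> and <6> together, a stream source sketch that uses both placeholders; apart from `topic`, the property names are assumptions, since the diff elides the left-hand sides:

[source,properties]
----
tasks.max=2
# Hypothetical property names for the stream reader
redis.stream.name=orders
redis.stream.consumer.group=kafka-consumer-group
# Task 0 -> consumer-0, task 1 -> consumer-1
redis.stream.consumer.name=consumer-${task}
# Stream "orders" -> topic "redis_orders"
topic=redis_${stream}
----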
