Commit 81f5e91

afrittoli authored and tekton-robot committed

Updates the events docs for dogfooding

Signed-off-by: Andrea Frittoli <[email protected]>
1 parent 73eaf80

1 file changed: docs/dogfooding.md (+3, −177 lines)
@@ -201,187 +201,13 @@ spec:
 
 Tekton Pipelines is configured in the `dogfooding` cluster to generate `CloudEvents`
 which are sent every time a `TaskRun` or `PipelineRun` is executed.
-`CloudEvents` are sent by Tekton Pipelines to an event broker. `Trigger` resources
-can be defined to pick up events from the broker and have them delivered to consumers.
-
-### CloudEvents Broker
-
-The broker installed is based on Knative Eventing running on top of a Kafka backend.
-Knative Eventing is installed following the [official guide](https://knative.dev/docs/install/eventing/install-eventing-with-yaml/)
-from the Knative project:
-
-```shell
-# Install the CRDs
-kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.0.0/eventing-crds.yaml
-
-# Install the core components
-kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.0.0/eventing-core.yaml
-
-# Verify the installation
-kubectl get pods -n knative-eventing
-```
-
-The Kafka backend is installed, as recommended in the Knative guide, using the [strimzi](https://strimzi.io/quickstarts/) operator:
-
-```shell
-# Create the namespace
-kubectl create namespace kafka
-
-# Install in the kafka namespace
-kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
-
-# Apply the `Kafka` Cluster CR file
-kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
-
-# Verify the installation
-kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
-```
-
-A [Knative Channel](https://github.com/knative-sandbox/eventing-kafka) is installed next:
-
-```shell
-# Install the Kafka "Consolidated" Channel
-kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka/latest/channel-consolidated.yaml
-
-# Edit the "config-kafka" config-map in the "knative-eventing" namespace
-# Replace "REPLACE_WITH_CLUSTER_URL" with my-cluster-kafka-bootstrap.kafka:9092/
-kubectl edit cm/config-kafka -n knative-eventing
-```
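The manual `kubectl edit` step in the removed instructions above can also be scripted. A minimal sketch of the substitution, demonstrated on an inline sample line rather than the live config-map (against the cluster, the same `sed` would be applied to the output of `kubectl get cm/config-kafka -o yaml` and piped back to `kubectl apply -f -`; the `brokers:` key here is only illustrative):

```shell
# Hypothetical non-interactive version of the edit above: substitute the
# placeholder with the bootstrap address named in the comment. Shown on an
# inline sample line instead of the live "config-kafka" config-map.
BOOTSTRAP="my-cluster-kafka-bootstrap.kafka:9092/"
sample='brokers: REPLACE_WITH_CLUSTER_URL'
echo "${sample}" | sed "s|REPLACE_WITH_CLUSTER_URL|${BOOTSTRAP}|"
# → brokers: my-cluster-kafka-bootstrap.kafka:9092/
```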
-
-Install the [Knative Kafka Broker](https://knative.dev/docs/install/eventing/install-eventing-with-yaml/#optional-install-a-broker-layer)
-following the official guide:
-
-```shell
-# Kafka Controller
-kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.0.0/eventing-kafka-controller.yaml
-
-# Kafka Broker data plane
-kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.0.0/eventing-kafka-broker.yaml
-```
-
-Create a broker resource:
-
-```yaml
-apiVersion: eventing.knative.dev/v1
-kind: Broker
-metadata:
-  name: default
-  namespace: default
-spec:
-  config:
-    apiVersion: v1
-    kind: ConfigMap
-    name: kafka-broker-config
-    namespace: knative-eventing
-  delivery:
-    retry: 0
-```
-
-The `retry: 0` part means that event delivery won't be retried on failure.
-This is required because Tekton Triggers replies to CloudEvents with a JSON body
-but no CloudEvents headers, which is interpreted by the message dispatcher as
-a failure - see the [feature proposal](https://github.com/tektoncd/triggers/issues/1439)
-on Triggers for more details.
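A broker configured as in the removed instructions above can be smoke-tested by POSTing a CloudEvent to its ingress. A sketch that only assembles and prints the request (the ingress URL is the one used as `default-cloud-events-sink` elsewhere in this document; the `manual-test` source, event id, and payload are made-up values, and the real request would be sent from inside the cluster):

```shell
# Assemble a binary-mode CloudEvent and print the curl command that would
# deliver it to the broker ingress. The ce-type matches one of the Trigger
# filters defined in this document.
BROKER_URL="http://kafka-broker-ingress.knative-eventing.svc.cluster.local/default/default"
CE_TYPE="dev.tekton.event.taskrun.started.v1"
PAYLOAD='{"taskRun":{"metadata":{"name":"manual-test"}}}'
echo curl -X POST "${BROKER_URL}" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: ${CE_TYPE}" \
  -H "ce-source: manual-test" \
  -H "ce-id: test-0001" \
  -H "content-type: application/json" \
  -d "${PAYLOAD}"
```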
-
-### Kafka UI
-
-The [Kafka UI](https://github.com/provectus/kafka-ui) allows viewing and searching for events stored by Kafka.
-Events are retained by Kafka for some time (but not indefinitely), which helps when debugging event-based integrations.
-Because the Kafka UI also allows managing channels and creating new events, it is not publicly accessible. To access
-it, port-forward the service port:
-
-```shell
-# Set up port forwarding
-export POD_NAME=$(kubectl get pods --namespace kafka -l "app.kubernetes.io/name=kafka-ui,app.kubernetes.io/instance=kafka-ui" -o jsonpath="{.items[0].metadata.name}")
-kubectl --namespace kafka port-forward $POD_NAME 8080:8080
-
-# Point the browser to http://localhost:8080
-```
-
-The Kafka UI is installed via a Helm chart, as recommended in the [Kubernetes installation guide](https://github.com/provectus/kafka-ui#running-in-kubernetes):
-
-```shell
-helm install kafka-ui kafka-ui/kafka-ui \
-  --namespace kafka \
-  --set envs.config.KAFKA_CLUSTERS_0_NAME=my-cluster \
-  --set envs.config.KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=my-cluster-kafka-bootstrap:9092 \
-  --set envs.config.KAFKA_CLUSTERS_0_ZOOKEEPER=my-cluster-zookeeper-nodes:2181
-```
+`CloudEvents` are sent by Tekton Pipelines to an `EventListener` called `tekton-events`.
 
 ### CloudEvents Producer
 
-Tekton Pipelines is the only `CloudEvents` producer in the cluster. It's [configured](../tekton/cd/pipeline/overlays/dogfooding/config-defaults.yaml) to send all events to the broker:
+Tekton Pipelines is the only `CloudEvents` producer in the cluster. It's [configured](../tekton/cd/pipeline/overlays/dogfooding/config-defaults.yaml) to send all events to the event listener:
 
 ```yaml
 data:
-  default-cloud-events-sink: http://kafka-broker-ingress.knative-eventing.svc.cluster.local/default/default
+  default-cloud-events-sink: http://el-tekton-events.default:8080
 ```
-
-### CloudEvents Consumers
-
-`CloudEvents` are consumed from the broker via a Knative Eventing CRD called `Trigger`.
-The `dogfooding` cluster is set up so that all `TaskRun` start, running, and finish events are forwarded from the
-broker to the `tekton-events` event listener in the `default` namespace.
-This initial filtering reduces the load on the event listener.
-
-The following `Triggers` are defined in the cluster:
-
-```yaml
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: taskrun-start-events-to-tekton-events-el
-  namespace: default
-spec:
-  broker: default
-  filter:
-    attributes:
-      type: dev.tekton.event.taskrun.started.v1
-  subscriber:
-    uri: http://el-tekton-events.default.svc.cluster.local:8080
----
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: taskrun-running-events-to-tekton-events-el
-  namespace: default
-spec:
-  broker: default
-  filter:
-    attributes:
-      type: dev.tekton.event.taskrun.running.v1
-  subscriber:
-    uri: http://el-tekton-events.default.svc.cluster.local:8080
----
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: taskrun-successful-events-to-tekton-events-el
-  namespace: default
-spec:
-  broker: default
-  filter:
-    attributes:
-      type: dev.tekton.event.taskrun.successful.v1
-  subscriber:
-    uri: http://el-tekton-events.default.svc.cluster.local:8080
----
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: taskrun-failed-events-to-tekton-events-el
-  namespace: default
-spec:
-  broker: default
-  filter:
-    attributes:
-      type: dev.tekton.event.taskrun.failed.v1
-  subscriber:
-    uri: http://el-tekton-events.default.svc.cluster.local:8080
-```
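The four removed `Trigger` manifests above differ only in the event type and the name, so they could have been generated with a short loop instead of maintained by hand. A sketch (note the first Trigger in the cluster was actually named `taskrun-start-…`, not `taskrun-started-…` as this loop would produce):

```shell
# Generate one Trigger per TaskRun event state; each manifest matches the
# hand-written ones above except for the generated names.
manifests=""
for state in started running successful failed; do
  manifests="${manifests}$(cat <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: taskrun-${state}-events-to-tekton-events-el
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      type: dev.tekton.event.taskrun.${state}.v1
  subscriber:
    uri: http://el-tekton-events.default.svc.cluster.local:8080
EOF
)
---
"
done
printf '%s' "${manifests}"
```

The printed manifests could then be piped to `kubectl apply -f -`.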
-
-### Troubleshooting Kafka
-
-Occasionally, the Kafka cluster may stop working: connecting via the Kafka UI
-shows the cluster as down, and the `el-tekton-events` deployment logs don't get
-any new entries.
-
-The Kafka cluster logs show an error related to TLS certificates.
-The solution in this case is to kill all `Pods` in the `kafka` namespace and
-wait for things to start working again.
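The recovery described in the removed section above can be captured in a small helper. A sketch that prints the commands rather than running them, so they can be reviewed before being executed against the cluster (the `kafka` namespace and `my-cluster` name are the ones used throughout this document):

```shell
# Print the recovery steps for the TLS-certificate failure mode described
# above: restart every Pod in the kafka namespace, then wait for the Kafka
# cluster to report Ready again.
recover_kafka() {
  echo "kubectl delete pods --all -n kafka"
  echo "kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka"
}
recover_kafka
```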
