Argo Events Getting Started

Experimenting with Argo Events and Argo Workflows by running several examples.

Info

This article was originally written in Korean and has been translated using ChatGPT.

Introduction

Kubernetes does support batch-style applications: they can be run as a Job or a CronJob.
However, the platform itself lacks features for managing execution history or defining dependencies between multiple jobs.
To find out whether Argo Workflows and Argo Events fill these gaps, I installed them and ran a few tests.

Install

Argo-Event

Considering future usage, the installation is done cluster-wide, rather than at the namespace level.

Create the namespace

kubectl create namespace argo-events

Deploy Argo Events: the ServiceAccount, ClusterRoles, Sensor controller, EventBus controller, and EventSource controller.

kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install.yaml
# Install with a validating admission controller
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install-validating-webhook.yaml

Deploy the EventBus

kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/eventbus/native.yaml

Argo-workflow

Controller And Server

kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.3.0/install.yaml

If you follow the getting-started guide, the installation is scoped to a single namespace.
That setup makes it awkward to combine with argo-events, so unless there is a compelling reason, a cluster-wide installation is recommended.

Install CLI

# Download the binary
curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.3.0/argo-darwin-amd64.gz

# Unzip
gunzip argo-darwin-amd64.gz

# Make binary executable
chmod +x argo-darwin-amd64

# Move binary to path
mv ./argo-darwin-amd64 /usr/local/bin/argo

# Test installation
argo version

Port-Forwarding (local)

kubectl -n argo port-forward deployment/argo-server 2746:2746

For a remote cluster, configure routing by following the instructions in this link.
For a local setup, use the LoadBalancer method.

Authorization

By adapting the approach from this link, you can retrieve an existing token and use it to log in.
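
As a concrete sketch (assuming a Kubernetes version before 1.24, where ServiceAccount token Secrets are still auto-created, and using the argo-server ServiceAccount as an example), the token can be read like this and pasted into the login screen:

# Look up the token Secret attached to the argo-server ServiceAccount
SECRET=$(kubectl -n argo get sa argo-server -o=jsonpath='{.secrets[0].name}')

# Decode the token and prefix it with "Bearer " as expected by the login form
ARGO_TOKEN="Bearer $(kubectl -n argo get secret $SECRET -o=jsonpath='{.data.token}' | base64 --decode)"
echo $ARGO_TOKEN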

Concept

Architecture

(Architecture diagram)

Event Source

An EventSource is a resource that consumes and processes events from external systems.

Sensor

A Sensor takes a set of event dependencies as its input and a Trigger as its output.
Acting as the event dependency manager, it processes events delivered over the EventBus and fires the Trigger once the dependencies are resolved.

Eventbus

The EventBus is the transport layer that connects EventSources and Sensors: EventSources publish events onto it, and Sensors subscribe to those events and fire their Triggers.
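
For reference, the native.yaml applied earlier defines roughly the following EventBus (a sketch of the upstream example; exact fields may vary by version): a NATS-based bus named default.

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      # Run a small NATS streaming cluster inside the argo-events namespace
      replicas: 3
      auth: token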

Trigger

A Trigger is the resource or workload that is executed once the Sensor's event dependencies are resolved.

Features tested

  • calendar
  • webhook-workflow
  • webhook-k8s-object
  • cross-namespace
  • webhook-auth

RBAC Configuration

The following assumes that both argo-workflow and argo-events are in use.
For argo-workflow, the workflow's ServiceAccount needs permission to inspect the status and logs of the pods it runs.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role
  namespace: argo-events
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - watch
      - patch
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-role
subjects:
  - kind: ServiceAccount
    name: default

To use an Argo Workflow as a Trigger within Argo Events, the following setup is required.
Since the object created by the Trigger is a Workflow, the Sensor's ServiceAccount needs permission to manage that API resource.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-workflow-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operate-workflow-role
  namespace: argo-events
rules:
  - apiGroups:
      - argoproj.io
    verbs:
      - "*"
    resources:
      - workflows
      - workflowtemplates
      - cronworkflows
      - clusterworkflowtemplates
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operate-workflow-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operate-workflow-role
subjects:
  - kind: ServiceAccount
    name: operate-workflow-sa

webhook-workflow

A webhook is used as the EventSource, and the Sensor uses an Argo Workflow as its Trigger.
For this to work, argo-workflow must already be installed cluster-wide.

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:
    example:
      port: "12000"
      endpoint: /example
      method: POST

The corresponding Sensor consumes this event and creates a Workflow:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: webhook
      eventName: example
  triggers:
    - template:
        name: webhook-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: webhook-
              spec:
                entrypoint: whalesay
                arguments:
                  parameters:
                  - name: message
                    value: hello world
                templates:
                - name: whalesay
                  inputs:
                    parameters:
                    - name: message
                  container:
                    image: docker/whalesay:latest
                    command: [cowsay]
                    args: ["{{inputs.parameters.message}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.arguments.parameters.0.value
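
To test it, send a request to the webhook. The commands below are a sketch and assume Argo Events exposes the EventSource through a Service named webhook-eventsource-svc (the usual <name>-eventsource-svc convention) in the argo-events namespace:

# Expose the webhook EventSource locally (run in a separate terminal)
kubectl -n argo-events port-forward svc/webhook-eventsource-svc 12000:12000

# Send an event; the request body is mapped into the Workflow's message parameter
curl -d '{"message":"hello from webhook"}' \
  -H "Content-Type: application/json" \
  -X POST http://localhost:12000/example

# A Workflow named webhook-xxxxx should appear
kubectl -n argo-events get workflows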

calendar

This EventSource can fire events either on a fixed interval (timer) or on a cron schedule.

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: calendar
spec:
  calendar:
    example-with-interval:
      interval: 10s
#     schedule: "30 * * * *"

The Sensor that creates a Workflow on each calendar event:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: calendar
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: calendar
      eventName: example-with-interval
  triggers:
    - template:
        name: calendar-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: calendar-workflow-
              spec:
                entrypoint: whalesay
                arguments:
                  parameters:
                  - name: message
                    value: hello world
                templates:
                - name: whalesay
                  inputs:
                    parameters:
                    - name: message
                  container:
                    image: docker/whalesay:latest
                    command: [cowsay]
                    args: ["{{inputs.parameters.message}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: eventTime
              dest: spec.arguments.parameters.0.value
      retryStrategy:
        steps: 3
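
Once both resources are applied, a new Workflow should be created roughly every 10 seconds with the interval above; a quick way to confirm:

# Watch Workflows being created by the calendar Sensor
kubectl -n argo-events get workflows --watch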

webhook-k8s-object

This setup resembles webhook-workflow; the difference is that the Sensor triggers a plain Kubernetes object instead of a Workflow.
It can create standard resources such as Pod, Deployment, Job, and CronJob, as well as custom resources.
The available operations are create, update, patch, and delete.

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: k8s-webhook
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:
    example:
      port: "12000"
      endpoint: /example
      method: POST

The Sensor that creates a Pod when the webhook fires:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: k8s-webhook
spec:
  template:
    serviceAccountName: operate-k8s-sa
  dependencies:
    - name: test-dep
      eventSourceName: k8s-webhook
      eventName: example
  triggers:
    - template:
        name: webhook-pod-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: v1
              kind: Pod
              metadata:
                generateName: hello-world-
              spec:
                containers:
                  - name: hello-container
                    args:
                      - "hello-world"
                    command:
                      - cowsay
                    image: "docker/whalesay:latest"
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.containers.0.args.0

To let the Sensor create a Kubernetes object, additional RBAC permissions are needed.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-k8s-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operate-k8s-role
  namespace: argo-events
rules:
  - apiGroups:
      - ""
    verbs:
      - "*"
    resources:
      - pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operate-k8s-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operate-k8s-role
subjects:
  - kind: ServiceAccount
    name: operate-k8s-sa
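
With the RBAC in place, the trigger can be tested the same way as before; this sketch assumes the generated Service follows the k8s-webhook-eventsource-svc naming convention:

# Expose the k8s-webhook EventSource locally (run in a separate terminal)
kubectl -n argo-events port-forward svc/k8s-webhook-eventsource-svc 12000:12000

# The request body is substituted into the Pod's first argument via the parameter mapping
curl -d '{"message":"hi"}' -H "Content-Type: application/json" -X POST http://localhost:12000/example

# A hello-world-xxxxx Pod should be created
kubectl -n argo-events get pods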

cross-namespace

The RBAC configurations used so far are restricted to a single namespace. This is inconvenient, because every user would need to know how to set them up in their own namespace.
For cross-namespace use, the definitions that were previously Roles need to be converted to ClusterRoles.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-k8s-cluster-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operate-k8s-clusterrole
rules:
  - apiGroups:
      - ""
    verbs:
      - "*"
    resources:
      - pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: operate-k8s-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: operate-k8s-clusterrole
subjects:
  - kind: ServiceAccount
    name: operate-k8s-cluster-sa
    namespace: argo-events

The ClusterRole example above only grants permissions on pods; if other resources are needed, add them to the rules before using it.
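
As a minimal sketch of how this can be used (assuming the target namespace, here default, already exists), a Sensor running in argo-events can create the Pod in another namespace by setting metadata.namespace on the trigger resource and using the cluster-scoped ServiceAccount:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: cross-namespace-webhook
  namespace: argo-events
spec:
  template:
    # ServiceAccount bound to the ClusterRole above
    serviceAccountName: operate-k8s-cluster-sa
  dependencies:
    - name: test-dep
      eventSourceName: k8s-webhook
      eventName: example
  triggers:
    - template:
        name: cross-namespace-pod-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: v1
              kind: Pod
              metadata:
                generateName: hello-world-
                # Create the Pod outside the argo-events namespace
                namespace: default
              spec:
                containers:
                  - name: hello-container
                    command: [cowsay]
                    args: ["hello-world"]
                    image: docker/whalesay:latest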

webhook-auth

For security reasons, you may want to require an authentication token on webhook calls.
The approach is to generate a token, register it as a Kubernetes Secret, and then reference that Secret by name in the EventSource so the webhook can use it for authentication.

echo -n 'af3qqs321f2ddwf1e2e67dfda3fs' > ./token.txt

kubectl create secret generic my-webhook-token --from-file=my-token=./token.txt

Then reference the Secret from the EventSource:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: secret-webhook
spec:
  webhook:
    example:
      port: "12000"
      endpoint: /example
      method: POST
      authSecret:
        name: my-webhook-token
        key: my-token

The Sensor is the same as in webhook-k8s-object, except that it subscribes to the secret-webhook EventSource:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: k8s-webhook
spec:
  template:
    serviceAccountName: operate-k8s-sa
  dependencies:
    - name: test-dep
      eventSourceName: secret-webhook
      eventName: example
  triggers:
    - template:
        name: webhook-pod-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: v1
              kind: Pod
              metadata:
                generateName: hello-world-
              spec:
                containers:
                  - name: hello-container
                    args:
                      - "hello-world"
                    command:
                      - cowsay
                    image: "docker/whalesay:latest"
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.containers.0.args.0
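
To call the secured endpoint, the token has to be sent as a bearer token. A sketch: this EventSource defines no Service, so the port-forward targets the EventSource pod directly (the eventsource-name label is an assumption about how Argo Events labels its pods):

# Port-forward to the secret-webhook EventSource pod (run in a separate terminal)
kubectl -n argo-events port-forward \
  "$(kubectl -n argo-events get pod -l eventsource-name=secret-webhook -o name)" 12000:12000

# Without the Authorization header the request is rejected; with it, a Pod is created
curl -d '{"message":"secured hello"}' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat ./token.txt)" \
  -X POST http://localhost:12000/example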

Summary

The tests above focused on verifying functional capabilities rather than operational aspects.
Since every resource a batch job needs can be defined precisely in YAML, GitOps-style operation was straightforward.
And since Argo Events supports a wide range of event sources, we confirmed that both webhooks and cron schedules can be put to full use.
In the next article, I'll cover the operational requirements and how we went about evaluating them.
