Argo is a robust workflow engine for Kubernetes that enables each step in a workflow to be implemented as a container. It provides simple, flexible mechanisms for specifying constraints between the steps in a workflow, and artifact management for linking the output of any step as an input to subsequent steps. Features include:
- Step-level inputs & outputs (artifacts/parameters)
- Timeouts (step & workflow level)
- Retries (step & workflow level)
- Garbage collection of completed workflows
- Scheduling (affinity/tolerations/node selectors)
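As a rough sketch of how these features surface in a workflow spec (the field names are Argo's, but the template name, image, and values below are illustrative, and field availability can vary by Argo version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: features-demo-        # illustrative name
spec:
  entrypoint: main
  activeDeadlineSeconds: 600          # workflow-level timeout
  ttlSecondsAfterFinished: 86400      # garbage-collect the workflow a day after it finishes
  nodeSelector:                       # scheduling constraint: only run on SSD-labeled nodes
    disktype: ssd
  templates:
  - name: main
    activeDeadlineSeconds: 120        # step-level timeout
    retryStrategy:
      limit: 3                        # retry this step up to 3 times
    container:
      image: alpine:3.7
      command: [sh, -c, "echo hello"]
```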
With Argo and Kubernetes, not only can a user create a “pipeline” for building an application, but the pipeline itself can be specified as code and built or upgraded using containers. In other words, you can use CI to manage your CI infrastructure :)
Can this be implemented with an Argo workflow?
We create a Job object with 5 completions and 5 parallelism, which launches 5 pods in parallel to search for a solution. A Redis service is used for pub/sub: the pod that finishes first publishes a message on a channel on the Redis server, and since all workers are subscribed to that channel, the other workers stop the search and exit when they receive the “finished” message.
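A minimal sketch of such a Job follows; the worker image and the Redis service name are hypothetical, and the real manifest would point at your own worker image and Redis deployment:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: search-
spec:
  completions: 5        # the Job completes after 5 successful pods
  parallelism: 5        # run all 5 workers at once
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example/search-worker   # hypothetical image: searches, publishes/subscribes via Redis
        env:
        - name: REDIS_HOST
          value: redis                 # assumed name of the Redis pub/sub service
```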
The workflow then consists of the following steps:
- Create a parallelized Kubernetes Job which launches 5 parallel workers. Once any pod has exited with success, no other pod will be doing any work. Return the job name and job uid as output parameters.
- Using the uid of the job, query any of its associated pods and print the result to stdout.
- Delete the job using the job name.
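The steps above can be sketched as an Argo workflow built from resource templates. An `outputs.parameters` entry with a `jsonPath` `valueFrom` is how Argo extracts the job name and uid from the created resource; the template names, worker image, and kubectl image below are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: k8s-jobs-            # illustrative name
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: create-job
        template: create-job
    - - name: print-result
        template: print-result
        arguments:
          parameters:
          - name: job-uid
            value: "{{steps.create-job.outputs.parameters.job-uid}}"
    - - name: delete-job
        template: delete-job
        arguments:
          parameters:
          - name: job-name
            value: "{{steps.create-job.outputs.parameters.job-name}}"

  # Step 1: create the parallel Job; the step succeeds as soon as any pod succeeds
  - name: create-job
    resource:
      action: create
      successCondition: status.succeeded > 0
      manifest: |
        apiVersion: batch/v1
        kind: Job
        metadata:
          generateName: search-
        spec:
          completions: 5
          parallelism: 5
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: worker
                image: example/search-worker   # hypothetical worker image
    outputs:
      parameters:
      - name: job-name
        valueFrom:
          jsonPath: '{.metadata.name}'
      - name: job-uid
        valueFrom:
          jsonPath: '{.metadata.uid}'

  # Step 2: use the uid to select the Job's pods and print their logs
  - name: print-result
    inputs:
      parameters:
      - name: job-uid
    container:
      image: lachlanevenson/k8s-kubectl        # any image that ships kubectl
      command: [sh, -c]
      args: ["kubectl logs -l controller-uid={{inputs.parameters.job-uid}}"]

  # Step 3: clean up the Job by name
  - name: delete-job
    inputs:
      parameters:
      - name: job-name
    resource:
      action: delete
      manifest: |
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: "{{inputs.parameters.job-name}}"
```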