Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.
Ingress in Kubernetes
There are three general approaches to exposing your application:
- Using a Kubernetes service of type NodePort, which exposes the application on a port across each of your nodes
- Using a Kubernetes service of type LoadBalancer, which creates an external load balancer that points to a Kubernetes service in your cluster
- Using a Kubernetes Ingress resource
Note: the nodePort is defined on the Service itself; it is not tied to any particular node.
This external load balancer is associated with a specific IP address and routes external traffic to a Kubernetes service in your cluster.
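As a rough sketch, the first two approaches are just Service manifests that differ only in their type; the names, ports, and label selector below are illustrative:

```yaml
# Hypothetical Service exposing the app on a fixed port of every node
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app           # assumed pod label
  ports:
  - port: 80              # cluster-internal port
    targetPort: 8080      # container port
    nodePort: 30080       # opened on every node (30000-32767 by default)
---
# Hypothetical Service of type LoadBalancer; the cloud provider provisions
# an external load balancer with its own IP address for this Service
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```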
Typically, though, your Kubernetes services will impose additional requirements on your ingress. Examples of this include:
- content-based routing, e.g., routing based on HTTP method, request headers, or other properties of the specific request
- resilience, e.g., rate limiting, timeouts
- support for multiple protocols, e.g., WebSockets or gRPC
- authentication
Note: with Ingress (or, extending the idea, something closer to a service mesh?), each of these requirements can presumably be met, implemented through the ingress controller.
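The Ingress resource itself only expresses host- and path-based routing; capabilities such as header-based routing, rate limiting, or authentication come from the ingress controller that implements it. A minimal sketch of a plain Ingress, using hypothetical host, path, and Service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api              # requests to example.com/api ...
        pathType: Prefix
        backend:
          service:
            name: api-service   # ... are routed to this (hypothetical) Service
            port:
              number: 80
```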
Different ingress controllers will have different functionality, just like API Gateways. Here are a few choices to consider:
- There are three different NGINX ingress controllers, with different feature sets and functionality.
- Traefik can also be deployed as an ingress controller, and exposes a subset of its functionality through Kubernetes annotations.
- Kong is a popular open source API gateway built on NGINX. However, because it supports many infrastructure platforms, it isn’t optimized for Kubernetes. For example, Kong requires a database, whereas Kubernetes already provides an excellent persistent data store in etcd. Kong is also configured via REST, while Kubernetes embraces declarative configuration management.
- Ambassador is built on the Envoy Proxy, and exposes a rich set of configuration options for your services, as well as support for external authentication services.
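Controller-specific capabilities are usually switched on through annotations on the Ingress resource (or through the controller's own CRDs). As one hedged example, the community ingress-nginx controller supports annotations along these lines for timeouts and per-client rate limiting; the annotation names are specific to that controller, and the host and Service names are again hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"  # upstream read timeout in seconds
    nginx.ingress.kubernetes.io/limit-rps: "10"           # requests per second per client IP
spec:
  ingressClassName: nginx       # select the ingress-nginx controller
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```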