Publish to my blog (weekly)

    • The event stream is the canonical source of truth.
    • It is a perfect audit log.
    • A projection is an event handler that receives every persisted event from the event store. It executes queries against the database to add, update, and delete data. Event handlers run concurrently and are eventually consistent.
    • An example of this is using different database access techniques for reads and for updates (the read/write split at the heart of CQRS); a minimal projection sketch follows this group of notes.
    • A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
    • A Deployment controller provides declarative updates for Pods and ReplicaSets.
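
The projection note above is easier to see in code. What follows is a minimal sketch, not taken from any particular framework: it assumes a hypothetical stream of plain-dict events and uses SQLite as the read model, with illustrative event names (OrderPlaced, OrderCancelled) and table layout. The point is that the handler only ever derives the read model from persisted events, so replaying the stream rebuilds it from scratch.

import sqlite3

# A read-model projection: an event handler that receives every persisted
# event and keeps a query-friendly table up to date.
class OrderSummaryProjection:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS order_summary ("
            "order_id TEXT PRIMARY KEY, status TEXT, total REAL)"
        )

    def handle(self, event):
        # Dispatch on event type; each branch only adds, updates, or deletes
        # rows in the read model, never in the event store itself.
        if event["type"] == "OrderPlaced":
            self.db.execute(
                "INSERT OR REPLACE INTO order_summary VALUES (?, ?, ?)",
                (event["order_id"], "placed", event["total"]),
            )
        elif event["type"] == "OrderCancelled":
            self.db.execute(
                "UPDATE order_summary SET status = 'cancelled' WHERE order_id = ?",
                (event["order_id"],),
            )
        self.db.commit()

# Replaying the whole stream (here just two events) rebuilds the read model,
# which is what makes the event stream the canonical source of truth.
projection = OrderSummaryProjection()
for event in [
    {"type": "OrderPlaced", "order_id": "o-1", "total": 42.0},
    {"type": "OrderCancelled", "order_id": "o-1"},
]:
    projection.handle(event)

print(projection.db.execute("SELECT * FROM order_summary").fetchall())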

    • You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments (a sketch using the Kubernetes Python client follows below).
    • A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (i.e., the Job) is complete. Deleting a Job will clean up the Pods it created (also sketched below).
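
The Deployment and Job notes above translate directly into API calls. Here is a minimal sketch, assuming the official Kubernetes Python client (the kubernetes package); the names, labels, images, and the default namespace are placeholders chosen for illustration. You describe the desired state (three nginx replicas) and the Deployment controller does the rest.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

# Desired state: 3 replicas of a Pod labelled app=nginx-demo.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "nginx-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nginx-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

A Job is built the same way. The sketch below mirrors the pi-calculation example from the Kubernetes docs and runs one Pod to successful completion; the name and image are again placeholders.

from kubernetes import client, config

config.load_kube_config()

# One successful completion finishes the Job; the Pod must not restart on its own.
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="pi-demo"),
    spec=client.V1JobSpec(
        completions=1,
        backoff_limit=2,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="pi",
                        image="perl:5.34",
                        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"],
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)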

    • Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
    • On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.
    • If some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
    • A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector). A minimal Service sketch appears at the end of this list.
    • As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.
    • A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
    • providing self-healing capabilities
    • managing rollouts
    • handling replication
    • But Kubeflow’s strict focus on ML pipelines gives it an edge over Airflow for data scientists, Scott says.
    • Kubeflow Pipelines
    • Nauta
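
To make the Service notes above concrete, here is a sketch (again assuming the official Python client, and reusing the hypothetical nginx-demo Deployment from the earlier sketch) that creates a Service whose selector targets whatever Pods currently carry the app=nginx-demo label. Frontends talk to the stable Service name; they never track individual backend Pods.

from kubernetes import client, config

config.load_kube_config()

# The selector defines the logical set of Pods; the Service gives them one
# stable virtual IP and DNS name, decoupling frontends from the backend set.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1ServiceSpec(
        selector={"app": "nginx-demo"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)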

Posted from Diigo. The rest of my favorite links are here.
