1.1 Introduction: An Analysis of "Kafka for Stream Processing" from the Official Documentation (Blogger Recommended)

Date: 2023-03-08 23:20:51
Without further ado, straight to the good stuff!

  Everything here comes from the official documentation:

http://kafka.apache.org/documentation/

Kafka for Stream Processing

  It isn't enough to just read, write, and store streams of data; the purpose is to enable real-time processing of streams.

Merely reading, writing, and storing streams of data is not enough; Kafka's goal is real-time stream processing.

  In Kafka a stream processor is anything that takes continual streams of data from input topics, performs some processing on this input, and produces continual streams of data to output topics.

In Kafka, a stream processor continuously takes data from input topics, processes and transforms it, and then writes it to output topics. For example, a retail app might take in input streams of sales and shipments and, after computing counts or adjusting prices, emit output streams.

  For example, a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
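
To make this read-process-write loop concrete, here is a minimal sketch using only the plain producer and consumer clients, before the Streams API enters the picture. The broker address and the topic names "sales" and "reorders" are illustrative assumptions, not something the official text specifies.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleStreamProcessor {
        public static void main(String[] args) {
            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092"); // assumption
            consumerProps.put("group.id", "simple-processor");
            consumerProps.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");
            producerProps.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                consumer.subscribe(Collections.singletonList("sales")); // hypothetical input topic
                while (true) {
                    // Continually pull records from the input topic ...
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        // ... apply some per-record processing (here: uppercase the value) ...
                        String transformed = record.value().toUpperCase();
                        // ... and write the result to the output topic.
                        producer.send(new ProducerRecord<>("reorders", record.key(), transformed));
                    }
                }
            }
        }
    }

This is fine for stateless, per-record transformations; the Streams API discussed next is what handles the harder, stateful cases.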

  It is possible to do simple processing directly using the producer and consumer APIs. However, for more complex transformations Kafka provides a fully integrated Streams API. This allows building applications that do non-trivial processing that compute aggregations off of streams or join streams together.

Simple processing can be done directly with the producer and consumer APIs. For more complex transformations, Kafka provides the more powerful, fully integrated Streams API, which can be used to build non-trivial applications that compute aggregations over streams or join streams together.
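
As a sketch of what such an aggregation looks like, here is the classic word-count topology written with the Streams API; the topic names "text-input" and "word-counts" and the application id are illustrative assumptions.

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    public class WordCountApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app"); // illustrative
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> text = builder.stream("text-input");
            KTable<String, Long> counts = text
                    // split each line into words
                    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
                    // re-key by word, then count occurrences (a stateful aggregation)
                    .groupBy((key, word) -> word)
                    .count();
            // stream the continuously updated counts to the output topic
            counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }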

  This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.

This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, and so on.
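
Out-of-order data and stateful computation meet, for instance, in windowed aggregations, where a window's grace period tells Streams how long to keep the window open for late-arriving records. A sketch of such a topology (topic names are illustrative; TimeWindows.ofSizeAndGrace requires Kafka 3.0 or newer, and the configuration would be the same as in the word-count example above):

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class LateDataTopology {
        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("events") // hypothetical input topic
                    .groupByKey()
                    // 5-minute windows that stay open for records up to 1 minute late
                    .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1)))
                    // stateful: per-key, per-window counts live in a local store
                    .count()
                    // flatten the windowed key into a plain string for output
                    .toStream((windowedKey, count) ->
                            windowedKey.key() + "@" + windowedKey.window().startTime())
                    .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));
            return builder;
        }
    }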

  The Streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.

The Streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka itself for stateful storage, and uses the same group mechanism for fault tolerance among stream processor instances.
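
A brief sketch of how this surfaces in configuration: a Streams application's application.id is reused as its consumer group id, which is what gives a set of instances their partition balancing and failover. The values below are illustrative assumptions.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsBaseConfig {
        static Properties baseConfig() {
            Properties props = new Properties();
            // application.id doubles as the consumer group id: instances started with
            // the same id split the input partitions among themselves, and the group
            // protocol reassigns a failed instance's partitions to the survivors.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "retail-processor"); // illustrative
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
            // Local state for aggregations lives under state.dir; Streams also backs
            // it with changelog topics in Kafka so it can be rebuilt after a failure.
            props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams");
            return props;
        }
    }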