The one above could serve as a reference for installing the whole cluster. Right now, though, I just want to focus on configuring and developing with Spring XD.
This introduction is interesting. I read it before, but it didn't stick, and by now I've forgotten most of it. So I still need some hands-on practice, something more down to earth.
This one is much more down to earth.
- Download with wget
- Install the rpm
- Then run the two services: spring-xd-admin and spring-xd-container
Note that a JDK has to be installed as well. Roughly, the steps look like the sketch below.
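A rough sketch of those steps, assuming the rpm-based install from the linked guide; the download URL, rpm file name, and JDK package below are placeholders, so substitute whatever the guide actually gives you:

```
# a JDK is required first (placeholder package name)
sudo yum install -y java-1.7.0-openjdk

# download and install the Spring XD rpm (placeholder URL / file name)
wget http://<repo-host>/<path>/spring-xd-<version>.rpm
sudo rpm -ivh spring-xd-<version>.rpm

# the rpm registers two services: the admin server and one container
sudo service spring-xd-admin start
sudo service spring-xd-container start
```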
Once it is installed, you can follow this Quick Start to see how to use it.
You can basically experiment as far as the --deploy / --destroy steps of the ticktock example; after that it turns into a lot of cluster material.
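For reference, the ticktock experiment boils down to a couple of commands in the XD shell (this is the example the Quick Start itself uses, so the syntax should match):

```
# inside the XD shell: wire the time source to the log sink and deploy it
xd:>stream create --name ticktock --definition "time | log" --deploy

# the container log now prints a timestamp roughly every second; clean up with
xd:>stream destroy --name ticktock
```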
Of course you could jump straight into the cluster setup, but since this is a job scheduler, I think it makes sense to first learn how to program a spring-xd tasklet.
Spring XD is a unified, distributed, and extensible service for data ingestion, real time analytics, batch processing, and data export. The foundations of XD's architecture are based on the over 100 man-years of work that have gone into the Spring Batch, Integration and Data projects. Building upon these projects, Spring XD provides servers and a configuration DSL that you can immediately use to start processing data. You do not need to build an application yourself from a collection of jars to start using Spring XD.
- Runtime Architecture
The key components in Spring XD are the XD Admin and XD Container Servers. Using a high-level DSL, you post the description of the required processing task to the Admin server over HTTP. The Admin server then maps the processing tasks into processing modules. A module is a unit of execution and is implemented as a Spring ApplicationContext. A distributed runtime is provided that will assign modules to execute across multiple XD Container servers. A single XD Container server can run multiple modules. When using the single node runtime, all modules are run in a single XD Container and the XD Admin is run in the same process.
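"Post the description over HTTP" really is plain HTTP: the XD shell is just a client for the Admin server's REST API. A rough curl equivalent of creating ticktock, assuming the default admin port 9393 and the /streams/definitions endpoint (check the exact path against your version's REST docs):

```
# hypothetical direct REST call; the XD shell does this under the covers
curl -X POST "http://localhost:9393/streams/definitions" \
     -d "name=ticktock" \
     -d "definition=time | log" \
     -d "deploy=true"
```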
- DIRT Runtime
A distributed runtime, called Distributed Integration Runtime, aka DIRT, will distribute the processing tasks across multiple XD Container instances. The XD Admin server breaks up a processing task into individual module definitions and assigns each module to a container instance using ZooKeeper. Each container listens for module definitions to which it has been assigned and deploys the module, creating a Spring ApplicationContext to run it.
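To see what DIRT actually did with a deployment, the XD shell has runtime inspection commands; as far as I recall from the docs they are these two (worth verifying against your version):

```
# inside the XD shell: inspect the distributed runtime
xd:>runtime containers    # container instances currently registered (via ZooKeeper)
xd:>runtime modules       # deployed modules and the container each one landed on
```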
Modules share data by passing messages using a configured messaging middleware (Rabbit, Redis, or Local for single node). To reduce the number of hops across messaging middleware between them, multiple modules may be composed into larger deployment units that act as a single module.
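Composition is done from the shell with module compose. The command and the pipe syntax follow the docs' examples, but the filter/transform options here are just illustrative:

```
# compose two processing steps into one unit, so they share one
# ApplicationContext and no middleware hop sits between them
xd:>module compose myprocessor --definition "filter --expression=payload.length()>4 | transform --expression=payload.toUpperCase()"

# the composite is then usable in a stream like any other module
xd:>stream create --name composed --definition "http | myprocessor | log" --deploy
```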
Important concepts:
- Container Server Architecture
- Streams
- Streams define how event driven data is collected, processed, and stored or forwarded. For example, a stream might collect syslog data, filter, and store it in HDFS.
- Jobs
- Jobs define how coarse-grained and time-consuming batch processing steps are orchestrated. For example, a job could be defined to coordinate performing HDFS operations and the subsequent execution of multiple MapReduce processing tasks.
- Taps
- Taps are used to process data in a non-invasive way as data is being processed by a Stream or a Job. Much like wiretaps used on telephones, a Tap on a Stream lets you consume data at any point along the Stream's processing pipeline. The behavior of the original stream is unaffected by the presence of the Tap.
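To make the Jobs and Taps items concrete: reusing the ticktock stream from the Quick Start, a tap is just another stream whose source uses the tap: DSL prefix; the job commands below show the general shape, with a hypothetical batch module name standing in for a real one:

```
# tap ticktock without disturbing it; its messages are copied into this pipeline
xd:>stream create --name ticktock_tap --definition "tap:stream:ticktock > log" --deploy

# jobs are created and then launched explicitly ("myBatchModule" is hypothetical)
xd:>job create --name myjob --definition "myBatchModule" --deploy
xd:>job launch myjob
```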
The programming model for processing streams in Spring XD is based on the well known Enterprise Integration Patterns as implemented by components in the Spring Integration project. The programming model was designed so that it is easy to test components.
- Stream Deployment
The Container Server listens for module deployment events initiated from the Admin Server via ZooKeeper. When the container node handles a module deployment event, it connects the module's input and output channels to the data bus used to transport messages during stream processing. In a single node configuration, the data bus uses in-memory direct channels. In a distributed configuration, the data bus communications are backed by the configured transport middleware. Redis and Rabbit are both provided with the Spring XD distribution, but other transports are envisioned for future releases.
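In other words, whether the data bus is in-memory or goes through Rabbit/Redis is decided by how you start and configure the servers. A minimal sketch, assuming the --transport option described in the XD docs and a broker already running locally (the flag name should be checked against your installed version):

```
# single node: in-memory data bus by default
xd-singlenode

# single node, but forcing a real transport (needs RabbitMQ running locally);
# --transport is the assumed option name, verify it for your version
xd-singlenode --transport rabbit
```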