Tag Archives: Arch-CEP

[project]StreamBase Server

original:http://www.streambase.com/products/streambasecep/streambase-server/

StreamBase Server

The StreamBase Server, the run-time engine of the StreamBase Event Processing Platform, is an ultra-low-latency application server optimized for processing real-time streaming event data at throughput rates ranging from tens of thousands to more than 500,000 messages per second on multi-core CPU hardware.

The Server’s multi-threaded architecture runs in 64-bit operating systems, takes full advantage of multi-core CPUs, scales readily across blades and clusters, and offers enterprise-class resilience (high availability) and security options.

The StreamBase Server achieves 5-10x faster performance than other systems by using its patent-pending, second-generation Dynamic Stream Compiler™ technology. Using this technology, the StreamBase Server compiles multiple StreamSQL queries at run-time into single, highly efficient execution units, and then executes the compiled application at ultra-low latency. It also offers fine-grained parallelism controls to optimize message throughput and application scalability.

Whether you’re dealing with high-volume data from low-latency financial market feeds, intelligence networks, or distributed message-based systems, StreamBase’s enterprise-class server can process, analyze, and act on fast-moving data instantly upon arrival.

To learn more and test-drive StreamBase, download the StreamBase Developer Edition.

StreamBase’s server is distinguished by the following core capabilities.

Industry-leading Performance

StreamBase has been benchmarked processing data at rates of more than 500,000 messages/second on a single processor, with the capacity to scale to millions of messages/second on multi-processor systems. This performance lets you run StreamBase applications with a fraction of the hardware and administrative resources of competitive systems, and ensures headroom for future growth as data volumes increase.

Deterministic, Predictable Processing

The StreamBase Server uses an innovative compilation technology that translates high-level StreamBase specifications into low-level highly-optimized code. This compilation-based approach ensures correct, deterministic execution even during parallel operation on multiple processors, while achieving extraordinary performance.

Embedded Persistence and Storage

Since many complex event processing tasks require access to historical data or maintaining intermediate “scratch pad” information, StreamBase offers a rich set of storage options so that application developers retain fine-grained control over performance and capacity trade-offs. These options for dealing with stored data include in-memory tables optimized for extremely fast access; an embedded disk-resident database offering high-capacity storage without the latency penalty of a process switch; and JDBC access to external databases and datastores.
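
As a rough illustration of the JDBC option only (the connection URL, table, and column names below are hypothetical, and StreamBase’s own adapters wrap this kind of access), an operator that enriches in-flight events from an external database might look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReferenceDataLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical external reference database, reachable over plain JDBC.
            String url = "jdbc:postgresql://refdata-host:5432/markets";
            try (Connection conn = DriverManager.getConnection(url, "reader", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                     "SELECT sector FROM instruments WHERE symbol = ?")) {
                stmt.setString(1, "IBM");
                try (ResultSet rs = stmt.executeQuery()) {
                    if (rs.next()) {
                        // In a real application the result would be joined onto the in-flight event.
                        System.out.println("sector = " + rs.getString("sector"));
                    }
                }
            }
        }
    }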

Scalability Up and Out

StreamBase is designed to scale up from a single-CPU implementation to a large distributed multi-server deployment. Support for 64-bit machines enables you to make maximum use of your system resources – including increased memory sizes. This is key for the near-zero latency demands of streaming applications, which often require that data be stored in memory rather than on disk. The StreamBase Server’s multi-threaded architecture also provides optimal performance on symmetric multi-processing machines and multi-core processors as additional CPUs are added.

For even higher scalability, and handling of unpredictable peak loads, StreamBase processing may be distributed to a cluster of machines across a fast local-area network or even more widely distributed (for higher availability). StreamBase provides users with tools for capacity planning and configuring multiple servers.

High Availability

To preserve the integrity of mission-critical information and to avoid disruptions in real-time processing, StreamBase uses a high-availability (HA) solution based on a standard process-pairs approach, in which two servers, a primary and a secondary, operate as a processing pair. The primary houses the live execution. The secondary houses a backup process that accumulates enough information, through a novel, lightweight checkpointing and synchronization approach, to pick up execution without gaps in the production of results should the primary fail. In this way, failover is transparent to the client applications whose input comes from StreamBase or whose output goes to StreamBase.
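
The sketch below is a purely conceptual illustration of the process-pairs idea, not StreamBase’s actual checkpointing protocol: the primary periodically ships its working state and last-processed sequence number to the secondary, which resumes from that point if the primary disappears.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Conceptual process-pair sketch: the primary checkpoints its state to the
    // secondary; on primary failure the secondary resumes from the last checkpoint.
    // Real systems would also replay any in-flight input after the checkpoint.
    public class ProcessPairSketch {

        static class Checkpoint {
            final long lastProcessedSequence;
            final Map<String, Double> operatorState;
            Checkpoint(long seq, Map<String, Double> state) {
                this.lastProcessedSequence = seq;
                this.operatorState = state;
            }
        }

        static class Secondary {
            private volatile Checkpoint latest;
            void receiveCheckpoint(Checkpoint cp) { latest = cp; }
            void takeOver() {
                // Resume from the event after the last checkpoint, so downstream
                // clients see no gap in the production of results.
                System.out.println("Resuming from sequence " + (latest.lastProcessedSequence + 1));
            }
        }

        public static void main(String[] args) {
            Secondary secondary = new Secondary();
            Map<String, Double> state = new ConcurrentHashMap<>();
            long sequence = 0;
            for (double price : new double[] {101.2, 101.5, 100.9}) {
                state.merge("runningSum", price, Double::sum);  // primary's live work
                sequence++;
                secondary.receiveCheckpoint(new Checkpoint(sequence, Map.copyOf(state)));
            }
            // Simulated primary failure: the secondary picks up where the primary left off.
            secondary.takeOver();
        }
    }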

Multi-Platform Support

StreamBase runs on off-the-shelf hardware and common operating systems, including Linux, Windows, and Solaris.


[project]StreamBase CEP

original:http://www.streambase.com/products/streambasecep/

StreamBase CEP

The StreamBase Complex Event Processing platform is a high-performance system for rapidly building applications that analyze and act on real-time streaming data. With StreamBase, organizations build real-time systems in record time and deploy them at a fraction of the cost and risk of alternatives.

StreamBase’s complex event processing (CEP) software is distinguished by bringing together three significant capabilities in one integrated platform: rapid development via the industry’s first and only graphical event-flow language, extreme performance with a low-latency, high-throughput event server, and the broadest connectivity to real-time and historical data. Industry leaders and StreamBase partners in capital markets, the intelligence/security sector, and other industries are benefiting from rapid, on-the-fly processing of complex data streams, and from the fastest time to prototype, test, and deploy real-time applications.

StreamBase Platform

The traditional approach to processing complex event data has required costly, time-consuming custom coding of the infrastructure and application logic, using specialized expertise to build a complete functional system. StreamBase eliminates these problems by offering commercial systems software designed to process, analyze, and respond to real-time data streams instantaneously, offering superior speed, scalability, and value compared to conventional infrastructures or custom-coded environments.

Rapid Time-to-Value with Graphical Development Environment

StreamBase Studio™ is an Eclipse-based integrated development environment (IDE) which provides tools for all stages of the development process, including design, test and deployment. Applications can be prototyped and built in just a few hours or days.

The operations in event processing inherently follow a workflow pattern, and StreamBase Studio provides an intuitive drag-and-connect graphical authoring environment with workflow orientation that eliminates the need for custom-coding application logic – which can be very time-consuming and expensive. StreamBase also makes it easy to modify applications when business needs change or data volumes increase.

High Performance Complex Event Processing

StreamBase applications achieve performance levels measured at hundreds of thousands of messages per second by virtue of the StreamBase Server, an ultra-low-latency application server optimized for real-time event processing. It uses an inbound processing architecture that queries data as it streams through the system. Business rules and rich application logic are applied in real time to deliver results in-flight as they are produced, enabling significant speed and performance gains compared to alternatives that must store and index the data before queries are processed.
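
To make the inbound-processing idea concrete, here is a minimal, vendor-neutral sketch (not StreamBase’s API): each arriving tick immediately updates a time-windowed average as it streams through, rather than being stored and queried afterwards.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal inbound-processing sketch: results are produced as each event
    // arrives, instead of storing events first and querying them later.
    public class SlidingAverage {

        record Tick(long timestampMillis, double price) {}

        private final Deque<Tick> window = new ArrayDeque<>();
        private final long windowMillis;
        private double sum = 0.0;

        SlidingAverage(long windowMillis) { this.windowMillis = windowMillis; }

        /** Called for every arriving tick; returns the current windowed average. */
        double onTick(Tick tick) {
            window.addLast(tick);
            sum += tick.price();
            // Evict ticks that have fallen out of the time window.
            while (!window.isEmpty()
                    && tick.timestampMillis() - window.peekFirst().timestampMillis() > windowMillis) {
                sum -= window.removeFirst().price();
            }
            return sum / window.size();
        }

        public static void main(String[] args) {
            SlidingAverage avg = new SlidingAverage(60_000); // 60-second window
            System.out.println(avg.onTick(new Tick(0, 100.0)));
            System.out.println(avg.onTick(new Tick(30_000, 102.0)));
            System.out.println(avg.onTick(new Tick(90_000, 101.0))); // first tick evicted
        }
    }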

Enterprise Connectivity

StreamBase offers a broad set of connectivity options that allow integration with a variety of data sources and enterprise systems. These include StreamBase Adapters to leading financial market data feeds (Thomson Reuters, Bloomberg, Lime, Activ, exchanges), messaging systems (e.g. Tibco RV, EMS/JMS messaging), JDBC-compliant databases, and real-time dashboard development environments such as Adobe Flex, as well as connectivity to high-capacity databases and data warehouses (Kx, HP Vertica, Thomson Reuters Velocity Analytics, and DB2). StreamBase also offers published Java, C++, and .NET APIs and a wizard-based rapid adapter development toolkit.

The Bottom Line

Before StreamBase, processing real-time data feeds with high throughput was a difficult undertaking. With StreamBase’s enterprise-class engine running StreamSQL, more and more organizations are now gaining:

  • Faster processing and reaction to real-time complex event streams
  • Shorter development cycles, with significantly easier maintenance
  • Dramatically lower development and programming costs
  • Flexibility to quickly adapt to changing business and analytic needs
  • Reduced hardware and operational expenses
  • Faster time-to-profit from real-time initiatives

For additional information about StreamBase, please see our answers to Frequently Asked Questions, or visit the support knowledge base.


[repost]IBM, BEP and CEP

IBM has just held its first annual analyst’s conference on what it calls Business Event Processing (BEP). Now, followers of the complex event processing (CEP) market will know that IBM has renamed its acquisition of AptSoft (a CEP product) as WebSphere Business Events so you could be forgiven for thinking that IBM was simply redefining CEP as BEP. However, that is by no means the case: the company defines business event processing much more widely than just CEP.

To give you an idea of where IBM’s thinking is, there were three sets of presentations: on how you use WebSphere Business Events in conjunction with business process management; on how Tivoli uses event monitoring across both IT and other asset environments; and on the introduction of IBM’s own (new) CEP engine, InfoSphere Streams. This actually made the conference sort of confusing because these areas tend to be covered by different analysts, each of whom was only interested in a part of the proceedings. However, leaving that aside, the point is that IBM sees BEP much more broadly than CEP and, indeed, it claims to be the market leader in BEP with more than 3,700 customers. I have no way of disputing, or even checking, this.

As regular readers will appreciate, my main interest here was in InfoSphere Streams, which the company is actually positioning as a “stream computing engine used for CEP deployments”. Quite why the company is making this distinction is not clear to me (though I will speculate about it in a separate article). In any case, my information is limited at present but I can give you a flavour. The first thing is that the product has been designed for very high-throughput, low-latency, highly complex environments. In particular, it is hardware agnostic, which means that not only will it run on your average server but it will also run on IBM’s supercomputers. This means that you can really get extreme performance out of it: for example, IBM claims that it is running at 2 million transactions per second but that one of its early adopters expects to have it running at 5 million transactions per second with sub-millisecond latency. That is seriously impressive, Streambase and Progress: eat your hearts out!

InfoSphere Streams comes with a development language (and compiler) called SPADE (stream processing application declarative engine) that has been specifically developed for processing streams. However, it could equally well be described as an ADE (application development environment—and it was, at the conference) as it includes tools for things like debugging as well as the language itself.

This raises an interesting issue with respect to other IBM approaches to CEP. For example, WebSphere Business Events uses a more 4GL-like approach with no coding, while the company is also involved in the formalisation of StreamSQL as a standard, along with Streambase, Oracle, Coral8 and Truviso. Quite how all of these might pan out remains to be seen but clearly IBM wants to keep its options open in the event of any agreed standard appearing.

Anyway, back to SPADE. While IBM has plans to add graphical application composition features that could be used by business analysts in the future, at present SPADE is purely for developers. This is unusual as most CEP vendors offer both. Initial comments from early adopters suggest that it is an easy environment to work in, so this may not be too much of an issue, but I haven’t seen it yet so I can’t comment. One possibility that IBM has been exploring is to front-end InfoSphere Streams with WebSphere Business Events (which is more focused on the business level), using the former to determine the events that are actionable or to do deep analysis, and the latter to determine the relevant processing by applying business rules.

This raises another point, which is the integration across IBM’s portfolio in support of events. This is across a broad front and, in keeping with its general view of the importance of events, IBM is event-enabling various existing products. So, for example, it is going to release a (free) support pack for CICS that will generate events from CICS transactions that can then be processed using one or other of its event processing technologies.

IBM has announced various new products and extensions to products and there will be more to come. IBM clearly sees event-driven architectures becoming pervasive and it wants to be able to play in all areas of this market, with systems running on laptops up to those on supercomputers. While its entrance into this sector will be heralded by the likes of Streambase and Progress as validating the market, and should give them some temporary momentum, it is quite clear that IBM is aiming to dominate this market: it has put a stake in the ground and it is a big (blue) stake.

original:http://www.it-director.com/technology/applications/content.php?cid=10750

[repost]Complex Event Processing (CEP)

CEP is a lightweight and agile complex event processing engine developed at IBM’s Haifa Research Lab. It aims to ease the cost of changing business logic, to automate and monitor business processes, and to enable an on-demand business environment. Gartner has recognized complex event processing as an emerging market in which “enterprises will achieve new levels of flexibility and a deeper understanding of their business processes by applying the techniques of complex event processing to their daily work”.

CEP Flow
The flow of Complex Event Processing


CEP extends the message-at-a-time paradigm by detecting complex situations that involve a context-sensitive (semantic, temporal, spatio-temporal) composition of messages and events. This functionality is extremely practical when applied to the processing of input from multiple event sources, whether viewed from the perspective of a business, an application, or the infrastructure, within different contexts. For example, such scenarios may involve SLA (Service Level Agreement) alerts and compliance checking, applicable to the financial markets, banking, and insurance industries.

Event Processing Agent
Complex event processing can be performed in many different solution architectures, using a number of event processing agents (EPAs).

The following are some sample rules or events that can be monitored and detected using CEP:

  • Suspicious money transfers – send notification if the total value of all withdrawals (ATM, check, etc.) in the last X days exceeds $Y (where X and Y are parameters that each customer can specify)
  • Fraud detection – send an alert notification if two credit card purchases were made within one hour at a distance greater than 200 km (a code sketch of this rule follows the list).
  • Market trends – check for situations where the foreign exchange rate is up by more than X percent and Dow Jones is up by less than Y percent as compared to yesterday’s closing rate.
  • Customer care – send an alert if the same request was reassigned to three agents and no answer was given to the requester.
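
As an illustration of how such a rule might look in code, the sketch below implements the fraud-detection pattern above (two purchases on the same card within one hour, more than 200 km apart). The event shape, thresholds, and the haversine helper are ours; a CEP engine would normally express this declaratively rather than in hand-written Java.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the "two card purchases within one hour, more than 200 km apart" rule.
    public class CardFraudRule {

        record Purchase(String cardId, long timestampMillis, double lat, double lon) {}

        private static final long ONE_HOUR_MS = 60 * 60 * 1000L;
        private static final double MAX_PLAUSIBLE_KM = 200.0;

        private final List<Purchase> history = new ArrayList<>();

        /** Returns true if this purchase plus a recent one on the same card looks suspicious. */
        boolean onPurchase(Purchase p) {
            boolean suspicious = history.stream()
                .filter(prev -> prev.cardId().equals(p.cardId()))
                .filter(prev -> p.timestampMillis() - prev.timestampMillis() <= ONE_HOUR_MS)
                .anyMatch(prev -> haversineKm(prev.lat(), prev.lon(), p.lat(), p.lon()) > MAX_PLAUSIBLE_KM);
            history.add(p);
            return suspicious;
        }

        // Great-circle distance between two points, in kilometres.
        static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.pow(Math.sin(dLat / 2), 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.pow(Math.sin(dLon / 2), 2);
            return 6371.0 * 2 * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            CardFraudRule rule = new CardFraudRule();
            rule.onPurchase(new Purchase("card-1", 0, 48.8566, 2.3522));                          // Paris
            boolean alert = rule.onPurchase(new Purchase("card-1", 30 * 60 * 1000L, 51.5074, -0.1278)); // London, 30 min later
            System.out.println("alert = " + alert); // roughly 340 km apart within an hour -> true
        }
    }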

CEP lets business users and application developers configure the middleware in terms of business needs, thus triggering more sophisticated conclusions and enhancing automation and efficiency. CEP can be configured to monitor situations relating to specific rules/events. For example, you can pre-set and automatically correct parameters for breaches of Service Level Agreements.

CEP technology includes an Eclipse-based authoring tool for the business user, and a Java-based execution engine for complex event processing.

  • At build time, rules and conditions (called patterns) are composed using a wizard-based authoring tool. The user can also use simulation to test these definitions and then export them to the runtime engine.
  • During runtime, events of different formats and from multiple sources are fed in. The CEP engine analyzes them based on the current rules and conditions. Upon detection of the predefined patterns, alerts are sent out as messages and actions are triggered. These messages are also fed back into the engine and treated as incoming events, enabling nested patterns.

Figure 3
The build-time part represents the rule definition phase. During the runtime phase, events enter the system, and they are processed, correlated, and analyzed to detect complex events.

The following diagram presents the concept of the CEP runtime architecture. All inputs to the system, including both the rule definitions and the events, enter the system through the Input Adapters. All the outputs of the system, including situation alerts, definition messages and system messages, go to the Output Adapters, and optionally to the Action Managers, if an action is required, for example, sending an email or updating a database. This flexible architecture allows the CEP engine to be integrated into any system, to receive inputs and then send alerts to continue the natural flow of information in the system.
CEP Runtime Middleware Component Architecture

First, definitions are loaded through the Input Adapters to the Definition Manager. Definitions are parsed and, if they are consistent and complete, loaded into the Rule Engine through the Routing Manager. Success/fail messages are sent to the Output Listeners. At this point, events can begin to flow into the system as they occur. Events enter from multiple sources through multiple Input Adapters and are routed to the Rule Engine to participate in rule evaluation. When a rule is satisfied, a situation is detected.

Detecting a situation may have three outcomes: the situation is sent to the Output Listeners for further processing of the information; it may trigger stand-alone actions, such as sending an email or storing information in a database; and it may return to the system as an incoming event, thus allowing nested rules to be executed.
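
A minimal sketch of how this runtime flow might be wired up in code is shown below. All of the names (Event, Rule, OutputListener, RuleEngine) are illustrative stand-ins, not the product’s actual components: events arrive through an input adapter, the rule engine evaluates them, and detected situations are published to the listeners and re-injected as events so that nested rules can fire.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // Toy wiring of the runtime flow described above. Events come in through an
    // input adapter, the rule engine evaluates them, and detected situations go
    // to the output listeners and are fed back in as events (enabling nested rules).
    public class RuntimeFlowSketch {

        record Event(String type, double value) {}
        record Rule(String matchesType, Predicate<Event> condition, String situationType) {}

        interface OutputListener { void onSituation(Event situation); }

        static class RuleEngine {
            private final List<Rule> rules = new ArrayList<>();
            private final List<OutputListener> listeners = new ArrayList<>();

            void addRule(Rule r) { rules.add(r); }
            void addListener(OutputListener l) { listeners.add(l); }

            // Routes an event to every rule whose type matches; a detected
            // situation is published to the listeners and re-injected as an event.
            void onEvent(Event e) {
                for (Rule r : rules) {
                    if (r.matchesType().equals(e.type()) && r.condition().test(e)) {
                        Event situation = new Event(r.situationType(), e.value());
                        listeners.forEach(l -> l.onSituation(situation));
                        onEvent(situation);
                    }
                }
            }
        }

        public static void main(String[] args) {
            RuleEngine engine = new RuleEngine();
            engine.addRule(new Rule("Measurement", e -> e.value() > 100.0, "ThresholdBreached"));
            // A nested rule that reacts to situations produced by the first rule.
            engine.addRule(new Rule("ThresholdBreached", e -> true, "OperatorNotified"));
            engine.addListener(s -> System.out.println("situation: " + s));
            // An input adapter would normally push events from an external source.
            engine.onEvent(new Event("Measurement", 140.0));
        }
    }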

Haifa’s CEP rule engine is a unique attempt to build a general-purpose event processing engine. CEP is based on previous research focused on specific fields such as active databases and network management. We extended previous concepts and developed a flexible event processing rule engine while resolving several research challenges, such as:

  1. How to support different kinds of events and messages
  2. How to synchronize events that arrive from different sources at different times due to network delays
  3. How to identify the context (temporal, spatial, semantic) in which a situation detection is relevant
  4. How to change event processing rules without stopping the system (hot updates)
  5. How to support vast numbers of events and conditions in an optimal way
  6. How to detect cycles in the rule-firing sequence

For the past seven years, the CEP team in the Haifa Research Lab has been working on complex event processing technology and event-driven solutions, from defining and refining the CEP definition language and implementing and optimizing the run-time execution engine for high-volume event processing in distributed environments, to building business-level CEP tools and integrating them into IBM products, solutions, and customer projects. This work on event processing has been published in dozens of papers and patents and is helping to form an event processing community.

original:http://domino.watson.ibm.com/comm/research.nsf/pages/r.datamgmt.innovation.cep.html





[repost]Reconnoiter – Large-Scale Trending and Fault-Detection

original:

Reconnoiter – Large-Scale Trending and Fault-Detection

One of the top recommendations from the collective wisdom contained in Real Life Architectures is to add monitoring to your system. Now! Loud is the lament for not adding monitoring early and often. The reason is easy to understand. Without monitoring you don’t know what your system is doing which means you can’t fix it and you can’t improve it. Feedback loops require data.

Some popular monitoring options are Munin, Nagios, Cacti and Hyperic. A relatively new entrant is a product called Reconnoiter from Theo Schlossnagle, President and CEO of OmniTI, leading consultants on solving problems of scalability, performance, architecture, infrastructure, and data management. Theo’s name might sound familiar. He gives lots of talks and is the author of the very influential Scalable Internet Architectures book.

So right away you know Reconnoiter has a good pedigree. As Theo says, their products are born of pain, from the fire of solving real-life problems and that’s always a harbinger of good things to come.

The problem Reconnoiter is trying to solve is monitoring thousands of nodes across many datacenters where the nodes can vary widely in power, architecture, and software configuration. With that kind of problem what they really want is the ability to:

  • Configure everything from one place.
  • Run cheap checks at the specified time interval that aren’t late and don’t cause a heavy load on the machine.
  • Change the configuration from any datacenter without coordination.
  • Add checks in the field.
  • Separate data collection from visualization and fault-detection.
  • Analyze trends for long-term capacity planning and postmortem analysis.
  • Detect when faults have happened and when they are about to happen.
  • Support trending: intelligent data correlation, regression analysis/curve fitting, and looking into the past to see how you got where you are now so you can do better next time.
  • Create a monitoring system that doesn’t require a separate powerful network and its own set of hosts on which to run.

If you’ve ever used or written a distributed stats collection system, the architecture of Reconnoiter will look somewhat familiar.

Some of the more interesting bits of the architecture are:

  • PostgreSQL stores all the data. The data isn’t stuck in funky little files.
  • Fault-detection is based on Esper, a streaming complex event processing system. It’s not clear how well this approach will work but the hooks are there (a hedged example of an Esper-style query follows this list).
  • A Comet-style web server is used to feed real-time updates. Much better than your traditional polling cycle.
  • Although the web console is PHP-based, PHP is used mainly to execute JSON calls. Rendering happens in the browser in an AJAX client.
  • Canvas is used for real-time graphics. No images are created on the fly.
  • Data is transferred securely over SSL.
  • The system is robust against failures.
  • Data is not thrown away as it is with some systems so you can check against history.
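
For a flavour of what Esper-based fault detection could look like, here is a hedged sketch against Esper’s pre-8.x Java API. The Metric event class, the 5-minute window, and the 90% threshold are invented for illustration; Reconnoiter’s actual checks and schema will differ.

    import com.espertech.esper.client.Configuration;
    import com.espertech.esper.client.EPServiceProvider;
    import com.espertech.esper.client.EPServiceProviderManager;
    import com.espertech.esper.client.EPStatement;

    // Hedged sketch of Esper-style fault detection (pre-8.x Esper API).
    // The Metric event class, window, and threshold are illustrative only.
    public class EsperSketch {

        public static class Metric {
            private final String host;
            private final double cpuPercent;
            public Metric(String host, double cpuPercent) { this.host = host; this.cpuPercent = cpuPercent; }
            public String getHost() { return host; }
            public double getCpuPercent() { return cpuPercent; }
        }

        public static void main(String[] args) {
            Configuration config = new Configuration();
            config.addEventType("Metric", Metric.class);
            EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

            // Alert when a host's average CPU over a 5-minute window exceeds 90%.
            EPStatement stmt = engine.getEPAdministrator().createEPL(
                "select host, avg(cpuPercent) as avgCpu "
              + "from Metric.win:time(5 min) group by host having avg(cpuPercent) > 90");
            stmt.addListener((newEvents, oldEvents) -> {
                if (newEvents != null) {
                    System.out.println("fault suspected on " + newEvents[0].get("host"));
                }
            });

            engine.getEPRuntime().sendEvent(new Metric("web01", 95.0));
        }
    }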

Reconnoiter isn’t completely pain-free. Lua for an extension language is an interesting choice. The installation and configuration process is very complex. There are a lot of separate steps and bits to configure. Another potential problem is that monitoring produces a lot of real-time data. I have to wonder if PostgreSQL can handle that flow for very large systems. The data is partitioned by month, but a large number of machines and a large number of events can be crushing. And I wasn’t sure if graph data could be correlated with released features or other system changes. In the video Theo mentions seeing in the graphs that using deflate improved performance, but I’m not sure, just looking at the graph, how you would be able to correlate system data with system changes.

It’s droolingly clear that where Reconnoiter shines is in creating complex graphs, charts, and other visualizations. The graphs look useful and quick to render. The real-time visualizations are spectacular and extremely difficult to do in other systems.

Related Articles

  • OmniTI Reconnoiter: Web Management and Analysis by Eric J. Bruno
  • Reconnoiter Update by Theo Schlossnagle
  • Reconnoiter Project Home Page
  • Video: Reconnoiter: a whirlwind tour
  • Big Picture of the Overall System
  • Reconnoiter: Monitoring and Trend Analysis from OSCON
  • OmniTI Unveils Open Source Monitoring Tool, Reconnoiter by Jayashree Adkoli
  • The sad state of open source monitoring tools by Grig Gheorghiu
  • How to Succeed at Capacity Planning Without Really Trying: An Interview with Flickr’s John Allspaw on His New Book.
  • New open source IT management tool: Lighter-weight than Nagios, more granular than Cacti by Matt Stansberry