Enter the era of real-time big data

Real-time analysis has been announced as the major big data trend of 2017.

Big data now impacts every industry, and a great number of CEOs want real-time insights so they can make decisions more quickly.

This trend has not come about by chance: the rapid growth of the IoT, the rising number of users and the ease of exploiting sensor data have all contributed to the buzz. Many offerings have been built on the hype around real-time experience to meet this need, and the same goes for on-the-fly processing solutions, which keep improving over time; Spark and Storm are usually the frameworks that large organizations prefer for this use. Overall, vendors of real-time offerings all sell more or less the same thing, in a more or less packaged form, and the core message is identical: collect, store, process and visualize data, all in real time!

Several trendy frameworks are well suited to real-time processing; we will come back to some of them below.

You are already part of the movement

In fact, you are confronted with real-time analysis on an everyday basis: an e-commerce website needs to display recommendations and ads as fast as possible. Your actions are constantly translated into data that is already processed in a big data architecture for real-time analysis. When we run computations on a stream of data, we usually need the result quickly, and in some cases it must be available within a fraction of a second: just notice how targeted ads are displayed almost immediately! Likewise, the volume of collected data can be large, and it must be possible to exploit it without being overwhelmed; think of all the users logging in to social networks. The data generated by your actions is massive, and it also generates value.

In the same way, autonomous cars must process a multitude of signals detected by their sensors, hardware failures in a compute cluster must trigger alarms, and high-frequency trading requires results to be delivered extremely quickly. The same applies to production line monitoring, supply chain optimization, fraud detection, patient care, smart devices, smart cities, smart grids, … In short, you get the idea: real-time big data is well suited to many fields.

Speed and stream management are the whole challenge of real-time processing, and they require the design of dedicated distributed architectures.

Build your own real-time architecture

At first, this diagram may seem complicated, but it simply shows that stream processing must occupy a central place in the design of the architecture. Among other things, real-time architectures are typically used to preprocess and correlate data, train models, make predictions, detect trends or sequential patterns, track activity, and trigger alerts.
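To make the last two uses more concrete, here is a minimal sketch of trend detection and alerting with Spark Structured Streaming in Scala. The socket source, the 10-second window and the threshold of 100 events are arbitrary assumptions chosen for illustration, not choices made in this series.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, current_timestamp, window}

object AlertingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("realtime-alerting-sketch")
      .master("local[*]") // local mode, for experimentation only
      .getOrCreate()

    // Each line read from the socket is treated as one raw event (placeholder source).
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()
      .withColumn("ts", current_timestamp())

    // "Detect trends": count events per 10-second window.
    // "Trigger alerts": keep only the windows whose volume crosses a threshold.
    val alerts = events
      .groupBy(window(col("ts"), "10 seconds"))
      .count()
      .filter(col("count") > 100) // hypothetical alert threshold

    // Writing to the console stands in for a real alerting sink.
    val query = alerts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

In a real architecture the socket source would be replaced by a durable message broker and the console sink by an actual notification or storage system; the shape of the job stays the same.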

In practice, these architectures are not easy to put into production: the diversity of storage, processing and visualization tools is significant, and it is hard to find your way through this cloud of options. All the more so since connectors between the different technical building blocks are sometimes missing, leaving you to implement the absent functionality yourself, not to mention upgrades, nightmarish configuration (Hadoop), skeletal documentation, and so on.

After this introduction, I invite you to follow a series of tutorials that builds, little by little, a simple architecture for real-time processing. We will rely on a simple use case: storing streams with Couchbase and processing them with Spark.
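To give an idea of where the series is heading, here is a rough sketch of that use case: events read from a stream by Spark and upserted into Couchbase as JSON documents. Everything here is an assumption made for illustration, not the series' final code: the socket source, the bucket name "events", the document layout, and the choice to write with the Couchbase Java SDK 2.x directly instead of the dedicated Couchbase Spark connector.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import com.couchbase.client.java.CouchbaseCluster
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject

object CouchbaseSinkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("couchbase-sink-sketch")
      .master("local[*]")
      .getOrCreate()

    // Raw events arrive as lines of text on a local socket (placeholder source).
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // For every micro-batch, upsert each event as a JSON document in Couchbase.
    // For brevity a connection is opened per partition; real code would reuse it.
    val writeBatch: (DataFrame, Long) => Unit = (batch, batchId) =>
      batch.rdd.foreachPartition { rows =>
        val cluster = CouchbaseCluster.create("localhost")
        val bucket = cluster.openBucket("events") // hypothetical bucket name
        rows.zipWithIndex.foreach { case (row, i) =>
          val doc = JsonDocument.create(
            s"event::$batchId::$i",
            JsonObject.create().put("raw", row.getString(0))
          )
          bucket.upsert(doc)
        }
        cluster.disconnect()
      }

    val query = events.writeStream
      .foreachBatch(writeBatch)
      .start()

    query.awaitTermination()
  }
}
```

In a real deployment you would read from a durable source such as a message queue rather than a socket, and the tutorials will rely on the dedicated integration between Spark and Couchbase rather than driving the SDK by hand.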

All the articles in the series “Build your real-time Big Data architecture” can be found here.
