Big data integration streaming architecture
LOCATELLI, CAROLINA
2016/2017
Abstract
Companies use data as a strategic resource, and the relentless growth in data sources and user demand is pushing relational databases and monolithic architectures beyond their limits. This can severely affect business agility and scalability, forcing organizations to migrate to NoSQL databases and to exploit the benefits of cloud computing and distributed systems. The case study I have analysed concerns a rich, voluminous and high-frequency dataset containing real-time data from a betting company, to be processed in a streaming fashion. I used Python scripts to transform and ingest the initial dataset, converting a daily batch XML file into many streamable XML messages generated as each event occurs, in order to improve the overall speed and responsiveness of the system. The goal was to eliminate the system bottlenecks: the blocking queue and the redundant information that needlessly overloaded the client. To build the final architecture, I used the following technological components:
• Apache Kafka as a distributed streaming message queue
• Apache Spark (with its Spark Core, Spark SQL and Spark Streaming components) as the distributed in-memory processing layer
• MongoDB as the NoSQL storage layer for the processed dataset
I implemented and tested the architecture both on a single machine and on a cluster of three nodes using the Amazon AWS cluster computing service, in order to exploit the benefits of distributed computing and clustering in big data streaming.
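To illustrate the batch-to-stream conversion step described above, the following is a minimal sketch, not the thesis code: it assumes a daily batch file named daily_feed.xml, an <event> element per betting event, and a Kafka topic called betting-events (all hypothetical names), and uses the kafka-python client.

```python
# Sketch: split the daily batch XML file into per-event messages and
# publish each one to Kafka, so downstream consumers receive event-level
# streaming messages instead of one large batch file.
import xml.etree.ElementTree as ET
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

tree = ET.parse("daily_feed.xml")           # daily batch XML file (hypothetical name)
for event in tree.getroot().iter("event"):  # hypothetical element name
    # Re-serialize each event as a standalone XML message.
    message = ET.tostring(event, encoding="utf-8")
    producer.send("betting-events", value=message)

producer.flush()
producer.close()
```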
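The processing and storage layers could be wired together roughly as follows. This is a sketch under stated assumptions, not the thesis implementation: it uses the Spark 2.x DStream API with the Kafka direct stream (pyspark.streaming.kafka, available in that era) and pymongo for the MongoDB write; topic, database and collection names are placeholders.

```python
# Sketch: consume the event messages from Kafka with Spark Streaming
# and persist each micro-batch to MongoDB.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="betting-stream")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["betting-events"], {"metadata.broker.list": "localhost:9092"})

def save_partition(records):
    # One MongoDB connection per partition, not per record.
    from pymongo import MongoClient
    client = MongoClient("mongodb://localhost:27017")
    collection = client["betting"]["events"]  # hypothetical db/collection names
    for _, xml_message in records:
        collection.insert_one({"raw_xml": xml_message})
    client.close()

stream.foreachRDD(lambda rdd: rdd.foreachPartition(save_partition))

ssc.start()
ssc.awaitTermination()
```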
https://hdl.handle.net/20.500.14239/20346