Pushing intelligence to the edge with a stream processing architecture

Title: Pushing intelligence to the edge with a stream processing architecture
Publication Type: Conference Proceedings
Year of Conference: 2017
Authors: Dautov, R., S. Distefano, D. Bruneo, F. Longo, G. Merlino, and A. Puliafito
Conference Name: Proceedings - 2017 IEEE International Conference on Internet of Things, IEEE Green Computing and Communications, IEEE Cyber, Physical and Social Computing, IEEE Smart Data, iThings-GreenCom-CPSCom-SmartData 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Conference Location: Exeter, UK - 21-23 June 2017
ISBN Number: 9781538630655
Keywords: Apache NiFi, cloud computing, Cluster computing, Computational networks, Data handling, Edge computing, Green Computing, Hardware resources, Horizontal offloading, Internet of Things (IoT), Memory architecture, Network architecture, Network latencies, Processing activity, Stream processing

The cloud computing paradigm underpins the Internet of Things (IoT) by offering a seemingly infinite pool of resources for processing and storing the extreme amounts of data generated by complex IoT systems. The cloud has established a convenient and widely adopted approach in which raw data are vertically offloaded from resource-constrained edge devices to cloud servers, with the edge devices treated as simple data generators incapable of more sophisticated processing. However, in a growing number of scenarios the volume of data to be transferred over the network to the cloud incurs network latency high enough to render the results of the computation obsolete. As various categories of edge devices become increasingly powerful in terms of hardware resources - specifically, CPU and memory - the established practice of offloading computation to the cloud is no longer always the most convenient approach. Accordingly, this paper presents a Stream Processing architecture for spreading workload among a local cluster of edge devices to process data in parallel, thus achieving faster execution and response times. The experimental results suggest that such a distributed in-memory approach to data processing at the very edge of a computational network has the potential to address a wide range of IoT-related scenarios. © 2017 IEEE.
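The scatter/process/gather pattern behind this horizontal offloading idea can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation (which, per the keywords, is built on Apache NiFi): a coordinator splits an incoming stream into partitions, hands each partition to a local "edge worker" (simulated here with threads in a single process), and combines the partial results, instead of shipping the raw readings to a remote cloud.

```python
# Illustrative sketch of horizontal offloading across a local edge cluster.
# Workers are simulated with a thread pool; in the architecture described by
# the paper, each partition would instead be routed to a separate edge device.
from concurrent.futures import ThreadPoolExecutor


def process_partition(readings):
    """Per-worker task: a simple in-memory aggregation (mean) of one slice."""
    return sum(readings) / len(readings)


def edge_cluster_process(stream, n_workers=4):
    """Scatter the stream across workers, then gather the partial results."""
    # Round-robin partitioning of the buffered stream.
    partitions = [stream[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(process_partition, partitions))
    # Recombine the partial means, weighted by partition size.
    total = sum(p * len(part) for p, part in zip(partials, partitions))
    return total / len(stream)


if __name__ == "__main__":
    readings = [float(i % 50) for i in range(10_000)]  # simulated sensor feed
    print(edge_cluster_process(readings))
```

The aggregation here is deliberately trivial; the point is the topology: processing stays in memory on nodes close to the data source, so only a small combined result, rather than the raw stream, ever needs to leave the edge.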