Big data - Velocity - Data Processing.
What is Velocity?
Velocity is the rate at which data is obtained and the speed at which that information can be analysed and used as a consequence. (Amir Gandomi, 2015). Moreover, the widespread use of IoT gadgets and of technology in daily life generates a considerable amount of data rapidly. Systems need to be capable of gathering, processing, evaluating and storing this data at high speed. (AWS, 2019)
Frequently, the information generated grows at an extraordinary scale and, most of the time, needs real-time analysis. (Amir Gandomi, 2015) Besides, businesses need to collect data immediately to make better decisions at all levels; to reach this goal, systems need to run as efficiently as required. (AWS, 2019)
"If a company collects data that can be processed and analysed in real time, together with demographics and buying behaviour, it will be able to show ads effectively to its target audience, creating real customer value."
What is Data Processing?
Data processing is the collection and use of data to create relevant analytics that can be converted into management reports. Nowadays, the need for new approaches, technologies and techniques to extract value from data is becoming increasingly important. (AWS, 2019)
Data Processing Methods
There are two types of data processing methods, and the choice between them depends on the quantity, size, complexity and immediacy with which the information is needed; these methods are Batch Data Processing and Stream Data Processing. (Abdul Jabbar, 2020)
Batch Data Processing
This method is frequently misunderstood as a slow process. However, it is capable of processing a large quantity of data rapidly and effectively, all at the same time. (AWS, 2019) A batch process is usually used when the information needs to be collected over a specific period. (Abdul Jabbar, 2020) Further, this method does not require manual intervention, as the system works automatically to gather and store data for future queries. (AWS, 2019)
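The collect-then-process pattern described above can be sketched in a few lines of Python. This is an illustrative example only, not taken from the cited sources; the function names and the sales data are hypothetical.

```python
# Minimal sketch of batch data processing (hypothetical names and data):
# records are collected first, then processed together in one automated run.

def collect_batch(source):
    """Gather all records from a source before any processing happens."""
    return list(source)

def process_batch(records):
    """Process the whole batch at once, e.g. aggregate sales per region."""
    totals = {}
    for region, amount in records:
        totals[region] = totals.get(region, 0) + amount
    return totals

# A simulated day's worth of sales records, processed in a single pass.
daily_sales = [("north", 120), ("south", 80), ("north", 50)]
report = process_batch(collect_batch(daily_sales))
print(report)  # {'north': 170, 'south': 80}
```

Note that nothing is analysed until the whole batch has been gathered, which is exactly why batch processing suits reports compiled over a fixed period rather than instant decisions.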
Scheduled Batch Processing
This method is predictable because it processes a vast amount of information accumulated over a specific period, such as once a day or once a week. (AWS, 2019)
Periodic Batch Processing
This method is focused on batches of data stored at irregular intervals: workloads are carried out only after a particular volume of data has been acquired. This irregular timing can make these workloads complicated and difficult to plan around. (AWS, 2019)
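The volume-triggered behaviour of periodic batch processing can be sketched as follows. This is a hypothetical illustration, not from the cited sources; the class name, threshold and handler are invented for the example.

```python
# Hypothetical sketch of periodic (volume-triggered) batch processing:
# records accumulate in a buffer, and a workload runs only once a target
# volume has been reached, so run times are irregular.

class PeriodicBatcher:
    def __init__(self, threshold, handler):
        self.threshold = threshold   # batch size that triggers processing
        self.handler = handler       # callback invoked on each full batch
        self.buffer = []

    def add(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.threshold:
            self.handler(self.buffer)
            self.buffer = []         # start accumulating the next batch

processed = []
batcher = PeriodicBatcher(threshold=3, handler=processed.append)
for event in ["a", "b", "c", "d", "e"]:
    batcher.add(event)

print(processed)       # [['a', 'b', 'c']]  -- one batch has fired
print(batcher.buffer)  # ['d', 'e']         -- still waiting for volume
```

Because the trigger is data volume rather than the clock, the time of the next run depends entirely on how fast records arrive, which is what makes this variant harder to plan around.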
Stream data processing
Stream data processing is one of the fastest-growing processing fields. The number of gadgets that collect information in real time is increasing rapidly, and this growth drives the need for solutions that can match the performance of the data generation. (AWS, 2019) Organisations need a swift, efficient solution that can manage various data types and formats in real time; in this scenario, data is complex, continually evolving and shaped by current environmental factors. (Abdul Jabbar, 2020)
Organisations can get precise insights from data immediately, analysing it in real time from social media, eCommerce sales and web applications, which gives them an incredible opportunity to respond to any critical situation rapidly. (AWS, 2019)
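The contrast with batch processing can be sketched with a generator that handles each record the moment it arrives, keeping an up-to-date result per event instead of waiting for a full batch. This is an illustrative example under assumed names; the simulated sensor readings are hypothetical.

```python
# Illustrative sketch of stream data processing: each record is handled
# as it arrives, yielding a running result immediately rather than after
# a whole batch has been collected.

def stream_average(events):
    """Yield a running average after every incoming reading."""
    total, count = 0.0, 0
    for value in events:
        total += value
        count += 1
        yield total / count   # an up-to-date result per event

# Simulated sensor readings arriving one at a time.
for avg in stream_average([10, 20, 30]):
    print(avg)   # 10.0, then 15.0, then 20.0
```

The key property is that an answer exists after every single event, which is what lets a business react to, say, a social-media spike or a sales anomaly while it is still happening.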
Keywords: analytics, batch, big data, data, gadgets, methods, real-time, stream, value, velocity.
References
Abdul Jabbar, P. A. S. D., 2020. Real-time big data processing for instantaneous marketing decisions: A problematization approach. Industrial Marketing Management, Volume 90, pp. 558-569.
Amir Gandomi, M. H., 2015. Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), pp. 137-144.
AWS, 2019. Data Analytics Fundamentals | AWS Training and Certification. [Online]
Available at: https://www.aws.training/Details/eLearning?id=35364
[Accessed 3 March 2021].