What is big data?

Nowadays, almost every action we take, both digitally and physically, leaves behind a digital footprint. Whenever we go online, use our smartphones, send an email, or even simply watch our TV, we leave behind a trail of digital bread crumbs. Additionally, mobile devices, wearable technology, and even everyday objects like refrigerators, thermostats, and front door locks are increasingly connected to the web and actively share data about our behavior in the physical “real world.” Along with these user-generated data points, machine-generated data is also growing. Machine-generated data arises whenever devices interact with other devices, either directly or through a digital service: think of your Alexa home assistant turning your lights or TV on. There are also ambient sensors in the everyday physical environment, such as traffic signals that collect data about traffic patterns and congestion. When combined, the information that is collected, transmitted, and stored across all of these sources is aggregated into what is referred to as “Big Data.” Frequently referred to as the “new oil” (Kennedy & Moss, 2015; Lane, 2014) or a rich “gold mine” (Asay, 2013; Steinberg, 2013), Big Data, like its highly sought-after natural resource analogs, is a coveted asset, and as such, those who control it hold political, social, and financial power (Jones, 2015).

It is important to point out that there is a significant difference between big data with a lowercase “b” and Big Data with a capital “B.” The former, “big data,” merely refers to data sets that are collected across a large set of users, meaning it is a “big” data set. The Census, for instance, is an example of “big data.” Big Data, however, is often defined by a series of characteristics that indeed include volume, but also velocity, variety, value, and veracity, or the “5 Vs” (Anuradha, 2015). Volume, as the name connotes, deals with the amount of data, both structured and unstructured, that is captured. However, to be considered Big Data, the volume collected and aggregated must surpass what can be aggregated and analyzed with traditional analysis tools. Velocity is the speed at which data is acquired, often measured by how small the gap is between collection and real-time production of the data. Some argue that the speed at which data comes in, and the tempo at which it is processed, is more important than the volume because it is the essential element for making quick and strategic decisions to aid in goal attainment. In addition to the amount and speed of Big Data, the variety of data is also a key attribute. Researchers often narrow their thinking about the types of data that can be collected to understand user intention, process, and progress, for example limiting data forms to click streams, time stamps, and user input. But when considering Big Data, it is not only about what users are doing in situ but also ancillary information about how they access online materials (e.g., mobile versus laptop), where they access them (e.g., GPS), what they post in forums (e.g., review sites and discussion forums) and on social media regarding their interactions and comprehension, and other data streams that provide indicators of user intent, strategy selection, and success rate.

While volume, velocity, and variety are indeed key components that define Big Data, any data examined under the umbrella of “Big Data” is meaningless unless there is value in what is being aggregated and analyzed. Value is a proposition that focuses on understanding what the data communicates about the user and how this knowledge can help direct resources to improve the user’s experience, quickly and decisively. Finally, Big Data is only useful if the data has a level of veracity that yields consistency across users and interactions. Thus, key questions that Big Data must address are how clean the data is, how much noise needs to be filtered, and what level of dependability the data produces. In this chapter, we focus on the affordances of Big Data and what they mean for the future of research examining learners’ strategic processing and for the design of instruction to improve these strategies in the service of obtaining positive learning outcomes.

While the five “Vs” describe the collection and analysis of data in all its various forms, the operational side of Big Data focuses on how to translate the analysis and interpretation of data into actionable interventions that alter behaviors and habits toward an organization’s desired outcomes. From this perspective, Big Data can, over time, be channeled into machine-based learning algorithms that automate the adaptation of digital environments to direct and shape user behaviors and actions toward a specific set of desired outcomes. The combination of mining Big Data, analyzing it, and then operationalizing it is what industry and business sectors have begun to master. Learning about an individual consumer from a wide and varied set of digital actions, and profiling actions, habits, and behaviors across consumers to identify common patterns, allows industry to use this knowledge to manipulate that consumer’s digital experience in ways that benefit the user experience or further encourage purchasing. In the next section, we discuss how industry has been successful across these three linked areas of consumer strategy training.

 