A Qualitative Approach
The interviews with US-based IT managers asked about their hiring processes and were designed to understand how they approached the challenge of meeting their big data initiatives in light of a shortage of qualified big data specialists. We know that data scientists are being employed: Shan, Wang, Chen, and Song (2015) conducted interviews with data scientists working for businesses such as LinkedIn™, Yahoo!™, Uber™, and others. Is there a standard, most successful way to fill these positions? Has the lack of available qualified big data specialists forced IT managers to find innovative ways to capture the value of big data for their respective businesses? What exactly has been done to fill these roles? The research was limited to the United States. What processes are US-based business leaders taking to meet the big data demands of their businesses? What quality criteria are business leaders in the United States considering in big data specialist candidates?
IT Managers are the people in organizations most commonly tasked with filling technology-centric roles. Because of this, they were best suited to describe the efforts involved in finding resources to fill those roles. When they could not find one person to fill a role, they would be responsible for trying other methods to meet the big data demands, such as outsourcing to consultants, sending existing employees to be trained, or identifying multiple people when no single candidate had all of the skills. While “IT Manager” is the title used throughout the research, it may be either a specific or a general term for the person responsible for filling IT-related positions. In some businesses, this may be the Vice President of IT, Director of IT, Information Systems Lead, etc. The choice came down to asking myself, “Self, who would best know the big data resource hiring process?”
The number of IT managers hiring big data resources runs into the thousands. However, because IT managers are a homogenous group, Boddy (2016) claimed that a sample size of 20-30 is standard and is often enough to reach saturation in a case study. Sample subjects were identified through LinkedIn™. A message was posted to the Big Data and Analytics group asking for volunteers or for nominations of a manager known to act as a big data resource hiring manager. Potential subjects were also contacted directly based on profile information identifying them as qualified subjects. Once the subjects and their contact information were identified, a letter of consent was sent to them.
I used open-ended surveys and semistructured interviews to gather my data. While the survey questions gathered information about specific qualities, technologies, and locations of search efforts, the interview questions allowed me to gain insight into the decision-making processes, challenges, and behaviors of the subjects in relation to the phenomenon. These interviews provided me with rich and in-depth information.
To understand what types of projects each subject was working on, the technologies they had employed, and the requirements they were looking for in their big data specialists, the following survey questions were developed:
Q1: What are your top big data analytic initiatives?
Q2: Why did you start your big data initiative?
Q3: How large is your big data specialist team?
Q4: What big data resources are still needed and why?
Q5: Where have you looked to find big data specialists?
Q6: What challenges have you had finding big data specialists?
Q7: What qualities are most difficult to find in your big data specialist candidates?
Q8: Describe your ideal big data specialist.
Q9: How many candidates have you interviewed for your big data resource roles?
Q10: What are two or three factors that would make the big data resource hiring process easier or more successful?
Next, to investigate the “big picture” of what the big data specialist shortage means to the subjects, what processes these IT Managers are taking to meet the big data demands of their businesses, and what quality criteria are being considered in big data specialist candidates, the following list of semistructured interview questions was created to build a narrative and learn more about this big data resource phenomenon:
Q1: Where are you on your big data analytic path?
Was the subject in the beginning stages of collecting data? Building out the infrastructure? Designing the data flow architecture? Fully mature and analyzing data? This helped me understand how far along they were on their data analytics journey.
Q2: Describe how your big data path has evolved.
Are the subjects sticking with their original plan or have new questions or priorities required modifications to their data analytic plans?
Q3: Describe what the big data specialists working for you do on a daily basis.
No two roles are ever exactly the same. This helped me to understand what their resources are asked to do.
Q4: Tell me whether you are looking for more of the same types of big data skills or for new and different skills, and why.
Are resources needed for more of the same tasks or to do additional tasks?
Q5: Describe the challenges you have experienced in finding big data resources.
Q6: What qualities in a data scientist are most valuable to your organization and why?
Q7: Describe the search process to find big data resources.
This was an open-ended question to gather both functional and sentiment data.
Q8: How do you determine if someone is qualified or not?
I was looking for their decision-making process for determining qualified applicants.
Q9: How many people have you interviewed for your big data positions?
Q10: What quality tends to be lacking the most in the applicants?
While Dicicco-Bloom and Crabtree (2006) note that semistructured interviews are commonly the sole data source for a qualitative study, I included three different data sources: surveys and interviews with IT Managers, as well as interviews with big data specialists.
To validate the data collected from the IT managers and understand the responsibilities and expectations of the big data specialists, the following five interview questions were asked of specialists:
Q1: Describe what you do in your role.
Q2: How did you get the position you currently have?
Q3: How did you gain the knowledge needed to be successful in your role?
Q4: What skill sets do you feel you need to strengthen to be more effective?
Q5: Where do you go to improve your skill set?
The two data types for IT managers differed in that the surveys provided static information about the current state of the subjects’ environments, while the semistructured interviews provided a view of the bigger picture of the world in which the subjects work. Identifying anticipatory areas in the interview discussion and points of frustration or excitement provided a richness and insight beyond the information gathered from surveys alone.
So that others can evaluate, compare, and synthesize the research, it is important that they understand how I approached the data analysis and what assumptions were made (Braun & Clarke, 2006). The two analysis methods used were thematic and structural. Thematic analysis, according to Vaismoradi, Turunen, and Bondas (2013), looks deeper than the direct answers to the questions about where IT Managers are finding their big data resources. By identifying themes, the researcher learns the “why” of the processes taken, which helps readers comprehend the journey taken to arrive at the solution the IT Managers decided on. Structural analysis is intended to identify and provide insight into areas of frustration and excitement, looking for voice fluctuations and nonverbal cues (Herz, Peters, & Truschkat, 2014).
The method of analysis followed the five steps proposed by Yin (2008): compiling, disassembling, reassembling, interpreting, and concluding. Applied to each of the data samples, this method provides a disciplined approach, allowing for easier reproduction of the research. Along with the researcher’s own manual technique, NVivo, a qualitative research support software package, was used to identify patterns.
Dependability and Trustworthiness
Trustworthiness in qualitative research, according to Graneheim and Lundman (2004), consists of credible, dependable, and transferable findings. Credibility addresses the focus of the research study and pertains to confidence in how appropriate the data and analysis processes were in addressing the study’s purpose (Graneheim & Lundman, 2004; Thomas & Magilvy, 2011). Dependability refers to the stability and consistency of the data and whether changes in the data collection or analysis arose from particular factors or phenomena (Graneheim & Lundman, 2004). Transferability is explained by Thomas and Magilvy (2011) as the ability of research findings to be carried over to another group or environment. It is argued that it is up to the reader to determine whether they believe the results are transferable (Graneheim & Lundman, 2004).
Triangulation, in the form of comparing the different data and/or analysis methods, was one method used to maximize the trustworthiness of the data analysis. Member checking was applied for this purpose as well. Member checking, according to Thomas and Magilvy (2011), requires confirming with the subjects that the interpretations made by the researcher are accurate representations of the participants’ experiences. Each step in the data collection, coding, and analysis was explained in detail to increase the dependability of the study results. Each technique was communicated to justify why the researcher believed it was required and to increase the study’s credibility. This type of reflexivity allows readers to understand how and why the researcher interpreted the data and to see the “big picture” of the study (Thomas & Magilvy, 2011, p. 154).