
EXPERIMENTAL PROCEDURE

The experiment was conducted with computer science graduate learners, with a sample size of 40. The learners register for the course endorsement session. During the class, cognitive values are taken from the BCI device as digital signals, one reading per second, while the learner answers an online questionnaire on the particular subject. Brain signals are accumulated in hertz. The average value of the attention span and the score attained at the end of the session are recorded. The learner views the brain waves in the visualizer shown in Figure 2.4 while the cognitive action is performed, but for the researcher the data have to be converted into digital format. The visualized brain waves are represented at different frequencies as alpha, beta, gamma, and theta waves in the eSense attention and meditation meters, ranging from 0 to 100 Hz.

Attention and meditation are resolved and documented on an interface measuring scale, a relative eSense scale ranging from 1 to 100. Values from 1 to 20 are treated as weakly lowered signals, and values between 20 and 40 are diminished levels. A neutral value lies in the range of 40-60. Values above 60 are considered higher than the standard. The frequencies are interpreted by an algorithm depending on the alpha, beta, gamma, and theta waves; concentration can be determined from the spatial memory of the cognitive skills. The data in Table 2.4 are the attention span for a single learner over a duration of 26 s. Twenty-six frequency signals are recorded in terms of attention and


FIGURE 2.4 Brain wave visualizer.

meditation. As the scale ranges from 0 to 100, each value is multiplied by 100 to obtain the attention span. The data are organized for operating systems such as Windows and Mac OS and for statistical analysis by the WEKA tool.
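The eSense banding described above can be expressed as a small classification helper. This is only a sketch: the source gives the ranges 1-20, 20-40, 40-60, and above 60, so the exact handling of the boundary values is an assumption.

```python
def esense_band(value):
    """Map a relative eSense reading (1-100) to the bands used in the text.

    Boundary handling is assumed; the source states only the ranges
    1-20 (weakly lowered), 20-40 (diminished), 40-60 (neutral), >60.
    """
    if not 1 <= value <= 100:
        raise ValueError("eSense values range from 1 to 100")
    if value < 20:
        return "weakly lowered"
    if value < 40:
        return "diminished"
    if value <= 60:
        return "neutral"
    return "above standard"
```

For example, a reading of 53 (as seen in Table 2.4) falls in the neutral band, while 75 is above the standard.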

TABLE 2.4 Attention Span of a Single Learner

Name: LEARNER 1

Time stamp: 2019/04/09 3:21:56 PM

Time      Meditation Frequency (Hz)  Attention Frequency (Hz)  Conversion   Attention Span
1:09:19   0.64                       0.44                      0.44 x 100   44
1:09:20   0.11                       0.44                      0.44 x 100   44
1:09:21   0.22                       0.53                      0.53 x 100   53
1:09:22   0.81                       0.48                      0.48 x 100   48
1:09:23   0.68                       0.41                      0.41 x 100   41
1:09:24   0.33                       0.30                      0.30 x 100   30
1:09:25   0.33                       0.26                      0.26 x 100   26
1:09:26   0.43                       0.38                      0.38 x 100   38
1:09:27   0.32                       0.44                      0.44 x 100   44
1:09:28   0.54                       0.54                      0.54 x 100   54
1:09:29   0.35                       0.67                      0.67 x 100   67
1:09:30   0.82                       0.61                      0.61 x 100   61
1:09:31   0.85                       0.53                      0.53 x 100   53
1:09:32   0.83                       0.56                      0.56 x 100   56
1:09:33   0.59                       0.56                      0.56 x 100   56
1:09:34   0.75                       0.67                      0.67 x 100   67
1:09:35   0.76                       0.74                      0.74 x 100   74
1:09:36   0.68                       0.78                      0.78 x 100   78
1:09:37   0.74                       0.69                      0.69 x 100   69
1:09:38   0.55                       0.61                      0.61 x 100   61
1:09:39   0.59                       0.56                      0.56 x 100   56
1:09:40   0.56                       0.54                      0.54 x 100   54
1:09:41   0.66                       0.75                      0.75 x 100   75
1:09:42   0.59                       0.84                      0.84 x 100   84
1:09:43   0.59                       0.83                      0.83 x 100   83
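The conversion column of Table 2.4 is simply the raw 0-1 reading scaled to the 0-100 eSense range. A sketch of the per-second conversion and a session-level average (the variable names are illustrative):

```python
# Raw per-second attention readings (0-1) for one learner, as in Table 2.4.
attention_raw = [0.44, 0.44, 0.53, 0.48, 0.41, 0.30, 0.26, 0.38, 0.44, 0.54,
                 0.67, 0.61, 0.53, 0.56, 0.56, 0.67, 0.74, 0.78, 0.69, 0.61,
                 0.56, 0.54, 0.75, 0.84, 0.83]

# Each reading is scaled to the 0-100 eSense range (the Conversion column).
attention_span = [round(v * 100) for v in attention_raw]

# Session-level summary: mean attention span over the recording (56.64 here).
mean_span = sum(attention_span) / len(attention_span)
```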

2.9.1 RESULTS AND DISCUSSION

The results for the cognizance factors for AI and DM, following a Likert scale ranging from strong competent to not competent, are given in Tables 2.5-2.14. These cognizance factors are performance indicators for self-assessment of the learner, used to identify the prerequisites owned by a learner before a course endorsement, as summarized in Table 2.5. The performance is analyzed on a Likert scale ranging from 1 to 5: 1 signifies strong competent and 5 signifies not competent. The proprietary algorithm for the BCI is used to save the data for analytical purposes.

2.9.2 COGNIZANCE FACTOR FOR AI

The cognizance factors ensuring the willingness of the learner to endorse a new course such as AI are shown in Table 2.5. These performance indicators allow self-assessment of the learner to identify the prerequisites owned before a course endorsement. The performance is analyzed on a Likert scale ranging from 1 to 5: 1 signifies strong competent and 5 signifies not competent.
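The 1-5 Likert coding used throughout the tables can be captured with a minimal mapping; the labels are taken from the table headers, and the helper name is illustrative:

```python
# Likert codes as described in the text: 1 = strong competent ... 5 = not competent.
LIKERT_LABELS = {
    1: "strong competent",
    2: "moderate competent",
    3: "average competent",
    4: "less competent",
    5: "not competent",
}

def competency_label(code):
    """Translate a 1-5 Likert response into its competency label."""
    if code not in LIKERT_LABELS:
        raise ValueError("Likert codes range from 1 to 5")
    return LIKERT_LABELS[code]
```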

The cognizance factor concerning each student is obtained by classifying into the scale of strong competent to not competent, as shown in Table 2.6. The learners are given a suffix of L1-L40 with a prefix of the course code AI, which denotes artificial intelligence.

2.9.3 COURSE ENDORSEMENT COMPETENCY FOR AI

The EEG values obtained from the BCI device are termed "Cognitive (COG)." The scores are obtained by the evaluation of 50 MCQs; each phase has 10 questions from the AI concepts. The first 10 questions assess the learning ability through an audio-video credential, followed by 10 MCQs to answer. The second phase of indicators is procured from the existing knowledge domain of the learner under the hardware and software compatibility, as shown in Table 2.7. The performance indicator values are shown in Table 2.8. The last 10 questions on the applicability have to be answered.

TABLE 2.5 Cognizance Factors for AI

Cognizance Factors                                                        Strong     Moderate   Average    Less       Not
                                                                          Competent  Competent  Competent  Competent  Competent
I am interested in enrolling and learning more of the AI course           12         5          3          3          0
I have a strong hold on mathematics                                       9          1          6          5          1
I have good knowledge of programming languages like C, C++, Java          10         3          2          8          1
I have the ability to write algorithms for finding patterns and learning  8          3          5          6          2
I have strong data analytics skills                                       6          4          1          9          4
I have good knowledge of discrete mathematics                             7          4          7          4          1
I have a strong will to learn ML languages                                12         1          5          2          3
I have excellent logic-building skills                                    8          3          3          8          2
I know ARM controller boards                                              9          2          4          5          3
I have working experience with sensors and actuators                      7          2          6          6          3

TABLE 2.6 Result for the Learner Competency Final Outcome for AI

Learner Competency    AIL1  AIL2  AIL3  AIL4  AIL5  AIL6  AIL7  AIL8
Strong competent      8     4     3     1     4     2     0     0
Moderate competent    2     5     1     0     2     2     1     1
Average competent     1     0     4     1     1     2     3     2
Low competent         0     3     2     4     2     2     4     5
Not competent         0     1     0     1     2     0     0     0

TABLE 2.7 Result for the Learner Competency Based on Parameters for AI

Learner Tag                 AIL1        AIL2        AIL3        AIL4        AIL5
Parameters                  CAI    S    CAI    S    CAI    S    CAI    S    CAI    S
P1. Learning ability        53.86  6    67.71  6    45.04  6    62.88  6    53.4   8
P2. Hardware compatibility  62.74  3    60.62  3    44.44  7    63.12  6    59.98  6
P3. Software compatibility  62.79  3    52.41  3    48.86  3    43.23  3    56.22  2
P4. Applicability           56.47  9    254.9  9    54.14  7    61.87  6    75.35  9

2.9.4 COGNIZANCE FACTOR FOR DM

The cognizance factors for DM include good communication, reading, and writing skills. These cognizance factors show flexibility for learning. The performance is analyzed on a Likert scale ranging from 1 to 5: 1 signifies strong competent and 5 signifies not competent.

The analyzed data shown in Table 2.8 shed light on the competency of each learner while answering the 10 cognizance questions. Table 2.8 describes the learner's competency level based on performance indicators. The learners are given a suffix of L1-L40 with a prefix of the course code DM, which denotes digital marketing.

2.9.5 COURSE ENDORSEMENT COMPETENCY FOR DM

The EEG values are obtained from the BCI device. The scores are obtained by the evaluation of 50 MCQs; each phase has 10 questions from the DM concepts, as shown in Table 2.8. The first 10 questions assess the learning ability through an audio-video credential, followed by 10 MCQs. The second phase of indicators is procured from the existing knowledge domain of the learner, following the hardware and software compatibility. The last 10 questions on the applicability have to be answered.

TABLE 2.8 Cognizance Factors for DM

Cognizance Factors                Strong     Moderate   Average    Less       Not
                                  Competent  Competent  Competent  Competent  Competent
I have good communication skills  8          4          3          2          8
I have good writing skills        3          4          1          10         10
I am a good observer              2          3          3          7          7
I can identify the trends         2          3          5          7          7
I can develop new approaches      1          3          1          9          9
I can quickly adopt a new idea    3          1          3          8          8
I know web applications           4          3          0          3          3
I work on social media            4          5          0          9          9
I have good imagination power     1          3          7          4          4
I can create innovative designs   1          0          0          0          0

TABLE 2.9 Result for the Learner Competency Outcome for DM

Learner Tag           DML1  DML2  DML3  DML4  DML5  DML6  DML7  DML8
Strong competent      4     0     0     0     0     7     1     1
Moderate competent    6     2     0     0     1     3     0     0
Average competent     0     1     0     2     1     0     0     1
Low competent         0     4     4     2     4     0     0     5
Not competent         0     1     2     3     2     0     6     1

TABLE 2.10 Result for the Learner Competency Based on Parameters for DM

Learner Tag            DML1       DML2       DML3       DML4       DML5
Parameters             CDM   S    CDM   S    CDM   S    CDM   S    CDM   S
P1. Learning ability   43    8    59    7    64    7    67    7    60    7
P2. Exemplary study    55    7    49    5    66    7    73    6    60    7
P3. Case studies       49    7    52    4    61    6    45    3    65    5
P4. Applicability      46    3    49    5    13    5    56    7    76    4

2.9.6 RESULT ANALYSIS THROUGH ML TECHNIQUES

The data received from the proposed model are input to ML algorithms based on supervised learning: the decision tree algorithms J48, RF, and random tree, the NB classifier, and SVM, implemented on the cognitive EEG data and performance indicators of learners based on four parameters for the course endorsement. Before the data are fed to the ML algorithms, the digital data retrieved from the EEG signals have to be converted into numeric data based on the numerosity reduction method of data discretization. The converted numeric and nominal cognitive EEG data and performance indicators of the learner are passed through WEKA for further processing based on supervised and unsupervised learning algorithms. The decision tree techniques J48 and random tree identified the cognizance factor variable as the first decision variable for deciding the competency of the learner for the course, as shown in Figure 2.5.
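The discretization step can be approximated with equal-width binning; this is only a stand-in for the numerosity-reduction method named above, with the bin count, value range, and bin labels all assumed for illustration:

```python
def discretize(values, n_bins=5, lo=0.0, hi=100.0):
    """Equal-width binning: map each continuous EEG-derived value in
    [lo, hi] to a nominal bin label ("bin1" ... "binN"). A sketch of
    the discretization step, not the chapter's exact procedure."""
    width = (hi - lo) / n_bins
    labels = []
    for v in values:
        # Clamp the top edge so hi itself falls in the last bin.
        idx = min(int((v - lo) / width), n_bins - 1)
        labels.append(f"bin{idx + 1}")
    return labels
```

With the defaults, an attention span of 55 would become the nominal value "bin3", ready for WEKA's classifiers that expect nominal attributes.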


FIGURE 2.5 Decision tree J48.

Correctly classified instances, predicted values, and sensitivity calculations generated from implementing the supervised learning techniques are mentioned in Table 2.11 and Figure 2.6. These statistics show that the 100% recall value is interpreted as the completeness of the result. The consistent class precision of 92.3% for the decision tree techniques is a relevant value of prediction, and the 77.3% class precision of the NB classifier and 75% of SVM are appropriate values of the forecast for the data acquired from the experiment of AI course endorsement.

TABLE 2.11 Class Precision and Class Recall for AI

ML Techniques for AI Competency (Supervised Learning)  Class Precision (%)  Class Recall (%)
Decision Tree J48                                      92.3                 100
Decision Tree Random Forest                            92.3                 100
Decision Tree Random Tree                              92.3                 100
Naive Bayes Classifier                                 79                   77
Support Vector Machine                                 75                   100
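Class precision and class recall follow the standard confusion-matrix definitions. The counts below are illustrative only, chosen to reproduce the 92.3%/100% decision-tree figures reported for the AI experiment:

```python
def precision_recall(tp, fp, fn):
    """Class precision and class recall from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts (not from the chapter's data): 24 true positives
# with 2 false positives gives 92.3% precision; zero false negatives
# gives 100% recall.
p, r = precision_recall(tp=24, fp=2, fn=0)
```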


FIGURE 2.6 Supervised learning algorithm AI.

Unsupervised ML algorithms k-means and density-based clustering are applied, and the results are mentioned in Table 2.12 and Figure 2.7. The two clusters represented in Table 2.12 show the probability from the training data sets: course competency is predicted to be 60% and course incompetency 40% for the data acquired from the experiment of AI course endorsement.

TABLE 2.12 ML Techniques AI for Unsupervised Learning

ML Techniques AI Competency (Unsupervised Learning)  Cluster 1 TRUE Probability (%)  Cluster 2 FALSE Probability (%)
k-Means                                              60                              40
Make Density-Based Cluster                           60                              40
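The two-cluster split can be sketched with a minimal one-dimensional k-means (WEKA's SimpleKMeans plays this role in the text). The scores below are illustrative, chosen only to produce a 60/40 split like the one reported:

```python
def kmeans_two_clusters(values, iters=20):
    """Minimal two-cluster k-means on 1-D data: a sketch, not WEKA's
    implementation. Returns the cluster nearest the minimum first.
    Assumes both clusters stay non-empty (true for separated data)."""
    c1, c2 = min(values), max(values)  # simple centroid initialization
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1)  # update centroids to cluster means
        c2 = sum(g2) / len(g2)
    return g1, g2

# Illustrative competency scores for ten learners: four low, six high.
low, high = kmeans_two_clusters([31, 28, 35, 40, 72, 68, 75, 61, 66, 70])
share_competent = len(high) / 10  # 0.6, mirroring the 60% TRUE cluster
```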


FIGURE 2.7 Unsupervised learning algorithm AI.

Correctly classified instances, predicted values, and sensitivity calculations generated from implementing the supervised learning techniques are mentioned in Table 2.13 and Figure 2.8. These statistics show that recall values of 50%-86% interpret the completeness of the result. The consistent class precision of 65% for the decision tree techniques is a relevant value of prediction, and the 87% class precision of the NB classifier and 53% of SVM are appropriate values of the forecast for the data acquired from the experiment of DM course endorsement.

TABLE 2.13 Class Precision and Class Recall for DM

ML Techniques for DM Competency (Supervised Learning)  Class Precision (%)  Class Recall (%)
Decision Tree J48                                      52                   50
Decision Tree Random Forest                            65                   64
Decision Tree Random Tree                              65                   64
Naive Bayes Classifier                                 87                   86
Support Vector Machine                                 53                   55

Unsupervised ML algorithms k-means and density-based clustering are applied, and the results are mentioned in Table 2.14 and Figure 2.9. Table 2.14 represents the two clusters' probability from the training data sets: course competency is predicted to be 40% and course incompetency 60% for the data acquired from the experiment of DM course endorsement.


FIGURE 2.8 Supervised learning algorithm DM.

TABLE 2.14 ML Techniques DM for Unsupervised Learning

ML Techniques DM Competency (Unsupervised Learning)  Cluster 1 TRUE Probability (%)  Cluster 2 FALSE Probability (%)
k-Means                                              40                              60
Make Density-Based Cluster                           40                              60


FIGURE 2.9 Unsupervised learning algorithm DM.

STRENGTH AND LAPSE OF THE FRAMEWORK

The proposed model in the CoML framework has an advantage over other models, as the cognition values are derived from the brain, which speaks the tongue of the mind, unlike the traditional questionnaire or survey. The cognizance factors are drawn on to self-assess the learners by reading the EEG signals. The analysis of the CoML model is validated through pre- and postevaluation of a course fulfillment. ML algorithms are implemented on the model to check the accuracy and the correctly classified instances.

The model mainly requires a BCI headset, without which the cognitive signals cannot be procured, and the headset has a battery life of 8 h. If the frequencies are not accumulated in a precise manner, there is a loss of data. Data are also lost if the headset gets disconnected during the experiment. The model can be implemented on a bigger sample size and more courses for further clustering and classification. An automated interface is generated, which reduces the time duration of data procurement.

CONCLUSION

The proposed model has carried out the experimental procedure with cognizance factors, BCI, and ML techniques to classify the competency values for course endorsement. The learners are classified into five competency categories. This chapter also focuses on considering the cognitive ability of the learner, with the conceptual capacity of understanding the concepts and the hardware and software compatibility putting forth the applicability. The analytical results from WEKA have confirmed a model accuracy of 92.3% under the ML algorithms and a class precision of 87% under the NB classifier. The CoML framework is based on neuroeducation, the science that deals with the learner's unique cognitive strengths and appropriate learning style, fostering successful intensification of academic skills and the confidence to proceed with the selected course.

FUTURE SCOPE

Future work can extend the automation of data procurement with the help of a user-friendly language such as Python. At present, the model is implemented for a computer science course; it can be applied to discrete courses such as management, commerce, etc. Uplifting the course content as per the learning ability of the learners can help them gain more knowledge. The automated system, implemented on a static IP, can allow virtual users with the headset to measure their cognitive ability for endorsing a course.

KEYWORDS

  • cognitive skills
  • neuroscience
  • neuroeducation
  • EEG signals
  • neuro-feedback
  • machine learning
  • active teaching learning
  • cognitive machine learning (CoML)
