Emotion recognition based on EEG signals during watching video clips

Academic year: 2021

Summary

Brain signal analysis for human emotion recognition plays an important role in psychology, management and human-machine interface design. The electroencephalogram (EEG) is a reflection of brain activity; by studying and analysing these signals we can perceive changes in emotional state. To do so, it is necessary to select the appropriate EEG channels, which are placed mostly on the frontal part of the head. In this paper we used video stimuli to induce a happy or sad mood in 20 participants. To classify the emotions experienced by the volunteers we applied five different classification methods, taking into account all features extracted from the signals. We observed that the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) achieved the highest emotion recognition accuracy.

Keywords: cognitive neuroscience, EEG, emotion recognition, classification

Introduction

In recent years, assessment of human emotions from the electroencephalogram (EEG) has become one of the most actively researched areas in the development of intelligent man-machine interfaces. Emotions play a major role in many aspects of our daily lives, including decision making, perception, learning, rational thinking and actions. Assessing emotions is the key to understanding human beings. In most emotion recognition experiments, images, sounds and video clips are used to stimulate the minds of participants. Acquiring brain activity signals in a form that enables a sufficient level of recognition and classification is very difficult due to various sources of noise, such as eye blinks, muscle movements and surrounding electrical devices (Yuan-Pin Lin et al., 2010).

Many papers have reported emotion recognition based on contradictory emotions, as these are considered the easiest moods to diagnose, even when subjects try to conceal them (Hosseini, 2012; Landowska, Szwoch, Szwoch, Wróbel, & Kołakowska, 2012). Contradictory emotions include, among others, fear, tension, discomfort, confusion, sadness and nervousness on the negative side, and elation, serenity, calm, comfort, relaxation and activity on the positive side. They form the so-called 2D valence-arousal model, as illustrated in Figure 1.


Figure 1. 2D valence-arousal model

Source: (Hosseini, 2012)

Within the scope of related research on mood recognition using video clips to stimulate participants' emotions, the works of several researchers can be mentioned.

Murugappan and Murugappan (2013), for example, used video clips and music stimuli to elicit five different moods (happy, surprised, frightened, disgusted and neutral), collecting EEG signals from 20 subjects with high-resolution equipment (62 channels). Features extracted during pre-processing were mapped onto the corresponding emotions using two classifiers: K-Nearest Neighbours and a Probabilistic Neural Network. They focused only on the beta band and obtained a maximum mean classification accuracy of 91.33%.

Takahashi (2004) investigated the recognition of five emotional states (joy, anger, sadness, fear, and relaxation) using Support Vector Machines, observing how effective multi-modal bio-potential signals are for emotion recognition. In his experiments he attained a recognition rate of 41.7% over all researched emotions.

Other researchers, Bajaj and Pachori (2013), worked on selecting the specific features that have the greatest effect on the signals during audio-video stimuli. These features were used as input to a Multiclass Least Squares Support Vector Machine (MC-LS-SVM) for the classification of human emotions. This method provided an accuracy of 80.83% for the classification of human emotions from EEG signals.

Most research focuses on one or a few features extracted from the signals to obtain the best recognition, such as high alpha, low alpha, high beta, low beta or gamma alone. In this paper we use a broader set of features extracted from the brain signals to make the mood diagnosis more comprehensive.


In the first section of the paper, we describe a scenario that uses video stimuli to induce two moods: happy and sad. In the second section we explain EEG data acquisition. The third section illustrates the automatic artefact correction and rejection methods used in the research. In section four we show how feature extraction is performed, and the final section presents the application of classification algorithms acclaimed in the literature.

1. Experiment Design

The aim of the experiment is to recognise mood (happy or sad) from brain signals recorded with EEG electrodes attached to the participant's scalp. The experiment consists of displaying one of two videos, funny or doleful; with such videos we try to induce the specified mood in the participants. Both video clips can be found on YouTube1. The duration of each clip was about 4 minutes 50 seconds. Figure 2 presents a collection of pictures from the funny and sad movies respectively.

Figure 2. Five pictures as stimuli elicited from video clips. Source: own elaboration.

Before displaying the video, we showed a black screen for two minutes to calm each participant down and focus his/her attention on the middle of the screen (Kong, Zhao, Hu, Vecchiato, & Babiloni, 2013; Hosseini & Khalilzadeh, 2010). To obtain a subjective opinion about the participant's mood, we asked him/her a self-assessment question, answered on a scale of 0–9: a value of 0 represents a sad mood, 5 a neutral one and 9 a happy mood. This provides a supplementary comparison between the signals obtained from the brain and the participant's own assessment, and is done twice during the experiment. Figure 3 illustrates the scenario workflow of the proposed experiment.
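The self-assessment scale can be mapped to coarse labels as follows. The paper anchors only the values 0, 5 and 9; treating ratings below 5 as sad and above 5 as happy is an assumption of this sketch.

```python
def mood_label(rating: int) -> str:
    """Map the 0-9 self-assessment rating to a coarse mood label
    (0 = sad, 5 = neutral, 9 = happy; intermediate cut-offs assumed)."""
    if not 0 <= rating <= 9:
        raise ValueError("rating must be in 0..9")
    if rating < 5:
        return "sad"
    if rating == 5:
        return "neutral"
    return "happy"
```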

1 https://www.youtube.com/watch?v=OxXmMbviaRo&feature=youtu.be


Figure 3. Experiment workflow Source: own elaboration.

2. EEG Data acquisition

EEG signals were recorded from a group of 20 healthy participants (13 men, 7 women) with an average age of 33.95 years. Each participant sat in front of a computer and first entered some basic information: gender, age, language, nationality and profession. The experiment started by displaying an explanation on the screen for 30 seconds to demonstrate what the participant was expected to do during the procedure.

Signal acquisition was done using a Contec KT88 device. In our experiment we used the 10–20 international system shown in figure (4). This system is based on the relationship between the location of the electrodes and the underlying area of the cerebral cortex ("10" and "20" refer to 10% or 20% of the inter-electrode distance (Bhoria, Singal, & Verma, 2012)).

[Figure 3 workflow: participant data entry → scenario explanation → black screen → video stimulus (happy or sad) → mood self-assessment (0–9), with the EEG signals saved to an EDF file.]


Figure 4. Labels for points according to 10–20 electrode placement system Source: [12].

In our experiment we recorded signals from the electrodes marked Fp1, Fp2, F7, F3, Fz, F4 and F8. We chose electrodes placed on the front of the head because, from a neuroscientific point of view, the main brain functions related to emotional responding are located in the frontal lobe (Papousek, Reiser, Weber, Freudenthaler & Schulter, 2012). The sample rate was set to 200 samples per second, the maximum rate for the Contec KT88 device. The obtained data were exported to EDF (European Data Format)2 to enable more convenient processing in EEGLAB3 and the Matlab environment.
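Selecting the frontal subset from an exported recording can be sketched as follows. Only the seven frontal labels come from the text; the full montage list here is illustrative, not the device's exact channel order.

```python
# A 10-20 montage as it might appear in an exported EDF recording
# (illustrative ordering, not the Contec KT88's actual layout).
ALL_CHANNELS = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8",
                "T3", "C3", "Cz", "C4", "T4", "T5", "P3", "Pz",
                "P4", "T6", "O1", "O2"]

# The seven frontal electrodes used in the experiment.
FRONTAL = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8"]

def frontal_indices(channel_names):
    """Return the positions of the frontal channels within a recording's
    channel list, preserving the order in which they were recorded."""
    wanted = set(FRONTAL)
    return [i for i, name in enumerate(channel_names) if name in wanted]
```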

3. EEG Data Pre-Processing

In the pre-processing phase we performed two essential steps: data filtering, to eliminate noise consisting of frequencies outside the standard range, and artefact removal, to diminish nested signals produced by eye blinks and muscle movements.

2 http://www.edfplus.info/specs/edf.html

3 EEGLAB is an interactive Matlab toolbox for processing continuous and event-related EEG, MEG and other electrophysiological data using independent component analysis (ICA), time/frequency analysis, and other methods including artefact rejection ("Getting Started – SCCN", n.d.).


3.1. Data Filtering

An EEG recording comprises several different types of signals, each with a different frequency range. Digital filtering is used to retain only the frequency components of interest and to remove other data, whether noise or merely physiological signals outside the range of interest. It is important to note that the way data is filtered depends largely on the sampling rate at which the original data was acquired (Widmann, Schröger, & Maess, 2014).

The digital filters that can be implemented are Infinite Impulse Response (IIR) or Finite Impulse Response (FIR) filters. In most publications, applying an IIR filter requires a cut-off threshold given in Hz. We applied low- and high-pass filters to remove data with frequencies below 0.4 Hz and above 50 Hz (Nitschke, Miller, Cook, 1998). The filters were applied to the signal of each channel separately to make the filtering process faster.
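The 0.4–50 Hz band-pass step can be sketched as follows. The text mentions IIR filtering; to stay dependency-free this sketch substitutes a windowed-sinc FIR band-pass, and the filter length and Hamming window are assumptions, not parameters from the paper.

```python
import math

def bandpass_fir(signal, fs, low=0.4, high=50.0, numtaps=201):
    """Windowed-sinc FIR band-pass keeping `low`..`high` Hz of a signal
    sampled at `fs` Hz (a FIR stand-in for the IIR filtering in the text)."""
    m = numtaps - 1
    fl, fh = low / fs, high / fs       # normalised cut-offs (cycles/sample)
    h = []
    for n in range(numtaps):
        k = n - m / 2
        # ideal band-pass impulse response = difference of two sincs
        if k == 0:
            val = 2 * (fh - fl)
        else:
            val = (math.sin(2 * math.pi * fh * k)
                   - math.sin(2 * math.pi * fl * k)) / (math.pi * k)
        # Hamming window to tame ripple
        val *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        h.append(val)
    # direct-form convolution, same-length output
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, hj in enumerate(h):
            if 0 <= i - j < len(signal):
                acc += hj * signal[i - j]
        out.append(acc)
    return out
```

In practice a library routine (e.g. an FFT-based convolution) would replace the O(n·taps) loop; the sketch only shows the pass-band/stop-band behaviour the text relies on.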

3.2. Artefact removal

When brain activity is recorded through electrodes placed on the participant's scalp, eye blinks and muscle movements contaminate the EEG signal. This adds noise on top of the surrounding electric fields, whose magnitude is greater than the desired electrical activity of the brain (Joyce, Gorodnitsky, & Kutas, 2004). The electrical signals emanating from eye movements and blinks produce a signal known as the electrooculogram (EOG). A fraction of the EOG contaminates the electrical activity of the brain, and these contaminating potentials are commonly referred to as ocular artefacts (OA). In data acquisition, OA are often dominant over other signals; hence devising a method for their successful removal from EEG recordings is still a major challenge (Krishnaveni, Jayaraman, Aravind, Hariharasudhan, & Ramadoss, 2006; Naraharisetti, 2010). A similar situation occurs with muscle artefacts; although not as problematic as ocular ones, they also have to be removed from the EEG signal.

Figure 5a shows a segment of EEG signal corrupted by ocular artefacts and Figure 5b shows a segment corrupted by muscle artefacts.


Figure 5. a) EEG recording corrupted by ocular artefacts, b) EEG recording corrupted by muscle artefacts. Source: own elaboration.

A variety of methods have been proposed for correcting ocular and muscle artefacts. One common strategy is artefact rejection. The rejection of epochs contaminated with OA is very laborious and time consuming and often results in considerable loss in the amount of data available for analysis. Other widely used methods for removing artefacts are based on regression techniques in the time or frequency domain (Yuan-Pin Lin et al., 2010). However, regression-based artefact removal eliminates the neural potentials common to the reference electrodes and the other frontal electrodes, which may cause some problems.
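The automatic-rejection idea can be sketched as a simple amplitude check. This is a minimal stand-in for full artefact handling; the 100 µV threshold is a common rule of thumb, not a value taken from the paper.

```python
def reject_epochs(epochs, threshold_uv=100.0):
    """Flag epochs whose peak absolute amplitude (in microvolts) exceeds
    a threshold, splitting the epoch indices into kept and rejected.
    A crude automatic stand-in for manual epoch rejection."""
    kept, rejected = [], []
    for i, epoch in enumerate(epochs):
        if max(abs(s) for s in epoch) > threshold_uv:
            rejected.append(i)
        else:
            kept.append(i)
    return kept, rejected
```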

In our experiment we used automatic artefact removal based on Blind Source Separation (BSS) techniques. The main method applied was wavelet-enhanced Independent Component Analysis (wICA; Castellanos & Makarov, 2006), which has proven useful for suppressing artefacts in EEG recordings in both the time and frequency domains.

4. Feature extraction and classification

Feature extraction helps us elicit useful information from the EEG signal. The features are characteristics of the signal in the frequency domain, and on their basis we distinguish between different emotions. This section describes how the features are extracted and then classified.

4.1. Feature Extraction

Using the discrete wavelet transform (DWT), the signal of each channel is decomposed into five levels with the Daubechies-8 (db8) or Symlets-8 (sym8) wavelet function (Al-kadi & Marufuzzaman, 2013; Murugappan, Nagarajan, & Yaacob, 2010). Selecting a suitable wavelet function and decomposition level is important for the analysis of the signal. The scope of interest ranges between 0–50 Hz. We used a decomposition level of five, because all other ranges are considered noise or are used for other purposes, such as epilepsy monitoring (Joyce et al., 2004).

On the other hand, the decomposition of the signal depends on the sampling frequency used for recording. In our experiment we used 200 samples per second, a value determined by the characteristics of the EEG recording device. An exemplary signal reconstruction from one-dimensional (1-D) wavelet coefficients is illustrated in Figure 6.

To obtain a successful result we applied the db8 function. The elicited wavelet coefficients provide a compact representation of the energy distribution of the EEG signals in the time and frequency domains (Hosseini, 2012). Table 1 displays the frequency bands corresponding to the various levels of decomposition for the db8 wavelet function at a sampling frequency of 200 Hz.
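The level-to-band logic of the decomposition can be illustrated in a few lines. The paper uses the db8 wavelet; the sketch below substitutes the simpler Haar wavelet to stay dependency-free, so the coefficient values differ from db8 while the halving of the frequency range per level is the same idea.

```python
import math

def haar_dwt_multilevel(signal, levels=5):
    """Multi-level discrete wavelet decomposition using the orthonormal
    Haar wavelet (db8 in the paper; Haar here for brevity).  Returns
    (final approximation A_levels, [D1..D_levels detail coefficients]);
    each level halves the signal and its frequency range."""
    a = list(signal)
    details = []
    s = math.sqrt(2.0)
    for _ in range(levels):
        if len(a) < 2:
            break
        approx = [(a[2 * i] + a[2 * i + 1]) / s for i in range(len(a) // 2)]
        detail = [(a[2 * i] - a[2 * i + 1]) / s for i in range(len(a) // 2)]
        details.append(detail)
        a = approx
    return a, details
```

With a 200 Hz sampling rate, five levels yield the D1..D5 and A5 ranges of Table 1 (e.g. D1 covers the upper half, 50–100 Hz, of which 50–64 Hz is listed as noise).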


Figure 6. 1-D wavelet coefficients

Source: ("Reconstruct single branch from 1-D wavelet coefficients – MATLAB wrcoef," n.d.).

Table 1. Frequency bands according to decomposition level

Seq.  Decomposition level  Frequency bandwidth (Hz)  Band
1     A5                   0.4–3.5                   Delta
2     D5                   3.5–6.5                   Theta
3     D4                   6.5–11.5                  Alpha
4     D3                   11.5–23.5                 Beta
5     D2                   23.5–50                   Gamma
6     D1                   50–64                     Noise

Source: own elaboration.

Data above 50 Hz is noise and is not considered part of the signal features. Using the Fast Fourier Transform (FFT), we obtained a high-order spectrum for each band, which was used as input for the classification phase.
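One possible reading of this step is to accumulate spectral power per band of Table 1. The sketch below uses a direct DFT (O(n²), fine for illustration; an FFT would be used in practice); the paper's exact "high-order spectrum" features are not specified, so this is only an assumption about the band-energy part of the pipeline.

```python
import math

# Frequency bands (Hz) from Table 1.
BANDS = {"delta": (0.4, 3.5), "theta": (3.5, 6.5), "alpha": (6.5, 11.5),
         "beta": (11.5, 23.5), "gamma": (23.5, 50.0)}

def band_energies(signal, fs=200.0):
    """Accumulate spectral power into the Table 1 bands using a direct
    DFT over the positive frequencies (illustrative sketch only)."""
    n = len(signal)
    energies = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = (re * re + im * im) / n
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                energies[name] += power
    return energies
```

For a pure 10 Hz tone the energy lands in the alpha band, matching the band limits in Table 1.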

4.2. Emotion recognition and classification

The classification of brain signals is a difficult task, and many techniques exist for implementing it. Trying to get better emotion classification results, we used several supervised learning approaches and obtained the best results with a Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA).

Among the different supervised classifiers, SVM performed significantly better than the others. The SVM algorithm was proposed by Vapnik (1979). To understand the concept of SVM, consider binary classification in the simple case of two-dimensional, linearly separable training samples, as in Figure 7.


Equation (1) expresses a simple representation of the EEG data to be classified:

k = {(x1, y1), (x2, y2), ..., (xn, yn)}, (1)

where x is the input vector (the high-order spectrum from the feature extraction phase) and y is the class label, 1 or 2 (1 stands for a happy and 2 for a sad mood).

A discriminating function can be defined as in equation (2):

f(x) = SVMstruct(w, x) = { > 0.5: x belongs to class 1; < 0.5: x belongs to class 2 } (2)

In this formula, w determines the orientation of the discriminant plane. There is an infinite number of planes that could correctly classify the training data, each defining the margin of a separating hyperplane. An optimal classifier finds the hyperplane that generalises best, lying equidistant from each set of points. Optimal separation is achieved when there is no separation error and the distance between the closest vector and the hyperplane is maximal (Stoean & Stoean, 2014).

Figure 7. SVM hyperplane separating two classes. Source: [18].
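The margin that the SVM maximises can be made concrete with a small numeric sketch: the signed distance from a point to the hyperplane w·x + b = 0, and twice its smallest absolute value over the training points. This is illustrative geometry only, not the model trained in the experiment.

```python
import math

def hyperplane_distance(w, b, x):
    """Signed distance from point x to the hyperplane w.x + b = 0."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (dot + b) / norm

def margin(w, b, points):
    """Geometric margin of the hyperplane on a point set: twice the
    smallest absolute distance; the quantity an SVM maximises."""
    return 2 * min(abs(hyperplane_distance(w, b, x)) for x in points)
```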

Obtaining an optimal classification result with SVM is still difficult, so we also applied other algorithms, such as LDA, a technique used to detect a linear combination of features that describes and separates two or more classes of objects. The resulting combination is used as a linear classifier. LDA assumes that the items of each class are normally distributed. For instance, in a two-class dataset, suppose the a priori probabilities for class 1 and class 2 are respectively p1 and p2, and the individual class means and the overall mean are written as m1, m2 and m (Balakrishnama & Ganapathiraju, 1998):

m = p1·m1 + p2·m2 (3)

LDA is based upon the concept of searching for the linear combination of attributes that best separates two classes (0 and 1) of a binary attribute. It is mathematically robust and often produces models whose accuracy is as good as that of more complex methods (Sayad, 2010).
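For the two-class case, the Fisher discriminant direction w = Sw⁻¹(m1 − m2) can be computed from scratch. The sketch below handles 2-D features with a hand-inverted 2×2 within-class scatter matrix; it illustrates the idea, not the toolbox implementation used in the experiment.

```python
def lda_direction(class1, class2):
    """Fisher/LDA direction w = Sw^-1 (m1 - m2) for 2-D, two-class data.
    Projections onto w give the best linear separation of the classes."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    m1, m2 = mean(class1), mean(class2)
    # pooled within-class scatter Sw = sum over classes of (x - m)(x - m)^T
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class1, m1), (class2, m2)):
        for p in pts:
            d0, d1 = p[0] - m[0], p[1] - m[1]
            s[0][0] += d0 * d0; s[0][1] += d0 * d1
            s[1][0] += d1 * d0; s[1][1] += d1 * d1
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [m1[0] - m2[0], m1[1] - m2[1]]
    # 2x2 inverse applied to the mean difference
    return [( s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]
```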


We also tested three other algorithms (K-Nearest Neighbours, Naïve Bayes and Probabilistic Neural Network) in an attempt to obtain satisfactory results. However, these algorithms did not determine accurate classes for most of the datasets. We chose the SVM and LDA algorithms because they gave the best results.

5. Experimental Results and Discussion

In our experiment we obtained two main results: the first is the extraction of features from the brain signals and the second is the classification and recognition of emotions made on the basis of these features.

The video stimulus displayed to each participant was split into three parts, because the raw EEG recording was too long for convenient comparison between datasets and it was difficult to assign the whole recording to a single mood: the participant may have been influenced only by some episodes of the entire clip. To partition the raw EEG data into three equal parts, we omitted the first and the last 10 seconds, so that each part consists of 90 seconds.

The total number of participants was 20, distributed into two groups: the first 10 were stimulated by the funny video (class 1) and the other 10 by the sad video (class 2). We used 35 features of the EEG signal for each part (5 frequency bands multiplied by 7 channels). The data of each participant were saved in an EDF file and each file was split into three parts. We used 70% of the obtained data for the training set and 30% for the test set.
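The sample counts implied by this setup can be checked with a few lines of arithmetic (a sanity check of the figures quoted in the text and in Table 2, not code from the paper):

```python
participants, parts = 20, 3        # each recording cut into three 90 s parts
bands, channels = 5, 7             # delta..gamma across Fp1..F8

samples = participants * parts     # labelled samples in total
features = bands * channels        # feature-vector length per sample
train = round(samples * 0.7)       # 70% training split
test = samples - train             # 30% test split
print(samples, features, train, test)  # -> 60 35 42 18
```

These values match the training-set size of 42 and test-set size of 18 reported in Table 2.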

We applied the most popular supervised signal classification algorithms to achieve an optimal result. The research showed that SVM and LDA perform best. The other algorithms we tried, naïve Bayes, k-Nearest Neighbours and Probabilistic Neural Networks (PNN), did not always reach more than a 50% recognition rate, as illustrated in Table 2.

Table 2. Emotion classification result for each algorithm

Algorithm  Training set  Test set  Correctly recognised  Emotion recognition rate
SVM        42            18        13                    72.22%
LDA        42            18        13                    72.22%
KNN        42            18        12                    66.66%
NB         42            18         6                    33.33%
PNN        42            18         4                    22.22%

Source: own elaboration.

The results show that with the SVM and LDA classification algorithms we were able to recognise the mood for 13 of the 18 samples in the test dataset. All these results were extracted from the analysis of the raw EEG signals alone, regardless of the data the participants entered to self-assess their mood during the registration process; those data were used only to verify the impact of the video on the participant's mood.

6. Conclusion

This paper contributes to recognising human mood on the basis of brain signals. The conducted research leads to the conclusion that using the entire set of features extracted from the brain signals makes it difficult to achieve high classification accuracy; better results can probably be achieved by focusing on the most significant bands, such as alpha and beta. In our experiment, increasing the size of the SVM and LDA training sets did not affect the final result much. In the future, this study can be extended to an emotion indicator that describes the strength of an emotion more precisely, e.g. very happy rather than merely happy.

Bibliography

[1] Al-kadi, M., & Marufuzzaman, M. (2013). JART 399 Effectiveness of Wavelet Denoising on Electroencephalogram Signals, 11(February), 156–160.

[2] Balakrishnama, S., & Ganapathiraju, A. (1998). Linear Discriminant Analysis – a Brief Tutorial. Compute, 11, 1–9.

[3] Bhoria, R., Singal, P., & Verma, D. (2012). Analysis of effect of sound levels on EEG. International Journal of Advanced Technology & Engineering Research (IJATER), 121–124.

[4] Castellanos, N. P., & Makarov, V. A. (2006). Recovering EEG brain signals: Artifact suppression with wavelet enhanced independent component analysis. Journal of Neuroscience Methods, 158(2), 300–312.

[5] Getting Started – SCCN. (n.d.). Retrieved July 27, 2015, from http://sccn.ucsd.edu/wiki/Getting_Started.

[6] Hosseini, S. A. (2012). Classification of Brain Activity in Emotional States Using HOS Analysis. International Journal of Image, Graphics and Signal Processing, 4(1), 21–27.

[7] Hosseini, S. A., & Khalilzadeh, M. A. (2010). Emotional stress recognition system using EEG and psychophysiological signals. Biomedical Engineering and Computer Science (ICBECS), 2010 International Conference on, 1–6.

[8] Joyce, C. A., Gorodnitsky, I. F., & Kutas, M. (2004). Automatic removal of eye movement and blink artifacts from EEG data using blind component separation. Psychophysiology, 41(2), 313–325.

[9] Kong, W., Zhao, X., Hu, S., Vecchiato, G., & Babiloni, F. (2013). Electronic evaluation for video commercials by impression index. Cognitive Neurodynamics, 7(6), 531–535.

[10] Krishnaveni, V., Jayaraman, S., Aravind, S., Hariharasudhan, V., & Ramadoss, K. (2006). Automatic identification and Removal of ocular artifacts from EEG using Wavelet transform. Measurement Science Review, 6(4), 45–57.

[11] Landowska, A., Szwoch, M., Szwoch, W., Wróbel, M. R., & Kołakowska, A. (2012). Emotion recognition and its applications.

[12] Murugappan, M., Nagarajan, R., & Yaacob, S. (2010). Discrete Wavelet Transform Based Selection of Salient EEG Frequency Band for Assessing Human Emotions. Universiti Malaysia …, (Takahashi).

[13] Naraharisetti, K. V. P. (2010). Removal of ocular artifacts from EEG signal using joint approximate diagonalization of eigen matrices (JADE) and wavelet transform. Canadian Journal on Biomedical, 1(4), 56–60.


[15] Papousek, I., Reiser, E. M., Weber, B., Freudenthaler, H. H., & Schulter, G. (2012). Frontal brain asymmetry and affective flexibility in an emotional contagion paradigm. Psychophysiology, 49(4), 489–98.

[16] Reconstruct single branch from 1-D wavelet coefficients – MATLAB wrcoef. (n.d.). Retrieved January 18, 2016, from http://www.mathworks.com/help/wavelet/ref/wrcoef.html

[17] Sayad, S. (2010). Real Time Data Mining.

[18] Stoean, C., & Stoean, R. (2014). Support Vector Machines and Evolutionary Algorithms for Classification. Single or Together.

[19] Taywade, S. A. (2012). A Review: EEG Signal Analysis With Different Methodologies.

[20] Vapnik. (1979). Support Vector Machines.

[21] Widmann, A., Schröger, E., & Maess, B. (2014). Digital filter design for electrophysiological data – a practical approach. Journal of Neuroscience Methods, 1–16.

[22] Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, & Jyh-Horng Chen. (2010). EEG-Based Emotion Recognition in Music Listening. IEEE Transactions on Biomedical Engineering, 57(7), 1798–1806.


EMOTION RECOGNITION BASED ON EEG SIGNALS WHILE WATCHING VIDEO CLIPS

Abstract (translated from Polish)

The analysis of brain signals for emotion recognition plays a significant role in psychology, management and the design of human-machine interfaces. The electroencephalogram is a reflection of brain activity; by studying and analysing these signals, changes in emotional state can be perceived. To achieve this, it is necessary to select the appropriate EEG channels, located primarily over the frontal part of the skull. In the presented article, a study was carried out in which video clips were used to induce a happy or sad mood in 20 subjects. To classify the emotions felt by the volunteers, 5 different methods were used, taking into account all the features extracted from the recorded signals. The best results were achieved with the Support Vector Machine (SVM) and Linear Discriminant Analysis methods.

Keywords: cognitive neuroscience, EEG, emotion recognition, classification

Kesra Nermend
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania
e-mail: kesra@wneiz.pl

Akeel Alsaaka
Wydział Informatyki
Uniwersytet w Karbali, Iraq
e-mail: akeeldb@gmail.com

Anna Borawska
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania
e-mail: latuszynska@gmail.com

Piotr Niemcewicz
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania
e-mail: german@post.pl
