
AGH University of Science and Technology
Faculty of Computer Science, Electronics and Telecommunications
Department of Electronics

Doctoral dissertation

Reconstruction of Signals Sampled at Sub-Nyquist Rate Using Event-Based Sampling

Dominik Rzepka

Supervisor: dr hab. inż. Marek Miśkowicz
Auxiliary supervisor: dr inż. Dariusz Kościelnik

Kraków, 2017


Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie
Wydział Informatyki, Elektroniki i Telekomunikacji
Katedra Elektroniki

Rozprawa doktorska

Rekonstrukcja sygnałów próbkowanych poniżej częstotliwości Nyquista z wykorzystaniem próbkowania wyzwalanego zdarzeniami

Dominik Rzepka

Promotor: dr hab. inż. Marek Miśkowicz
Promotor pomocniczy: dr inż. Dariusz Kościelnik

Kraków, 2017


Acknowledgements

I would like to express my deepest gratitude to all the people who guided me and accompanied me on the scientific path that led to this dissertation. Starting from the beginning, I thank my Parents for their enormous effort in providing me with an education. I am also grateful that the search for truth and curiosity about the world were an everyday reality in my home, something whose uniqueness I noticed only when I started my independent life. My family from Kraków, Bożena and Stanisław Stachoń, helped me to survive my first encounter with university life. Thanks to the scientific supervision of Dr Cezary Worek over my internship and master's thesis, and thanks to his involving me in research grants, I had the opportunity to watch and learn professionalism in engineering and to take my first steps in doing science. The next stage of the scientific path I began with my supervisor, Dr Marek Miśkowicz, Associate Professor, who inspired this work and was a tireless proponent of studies on event-triggered sampling. His experience and skills helped me navigate the complicated intricacies of science and the university, allowed us to gain financial support for our studies, and made it possible to establish and sustain fruitful cooperation with other researchers. The statistical and probabilistic aspects of this work were developed in cooperation with Professor Mirosław Pawlak, who generously invited me for a number of visits to the University of Manitoba. Professor Thao Truong Nguyen inspired the use of the POCS method for signal recovery and encouraged me to use the universal language of operator theory to describe the reconstruction problems. An invaluable guide on my scientific path was my auxiliary supervisor, Dr Dariusz Kościelnik, to whom I owe countless lunches, discussions over coffee, inspirations and ideas. With his passion, curiosity, scientific enthusiasm, positivist didactic zeal and perfectionism, he has always been a role model of a scientist for me. Finally, I would like to thank my fiancée Aneta and all my friends who supported me during my efforts to finish this dissertation.

Dominik Rzepka
Kraków, October 1st, 2017


Podziękowania

Chciałbym głęboko podziękować wszystkim, którzy mnie prowadzili i towarzyszyli mi w naukowej drodze, której owocem jest ta praca. Zaczynając od początku, dziękuję moim Rodzicom za ich wysiłek, żeby zapewnić mi wykształcenie. Jestem im wdzięczny również za to, że poszukiwanie prawdy i ciekawość świata była (i jest) w naszym domu codziennością, której niezwykłość zauważyłem dopiero, kiedy zacząłem samodzielne życie. Pierwsze spotkanie z uczelnianą rzeczywistością pomagali mi przetrwać moi bliscy z Krakowa, Bożena i Stanisław Stachoniowie. Następnie dzięki naukowej opiece dr inż. Cezarego Worka nad moimi praktykami, pracą magisterską oraz dzięki zaangażowaniu mnie w prace nad grantami naukowymi miałem możliwość obserwowania i uczenia się profesjonalizmu w inżynierii, oraz postawienia pierwszych kroków w tworzeniu nauki. W kolejny etap naukowej drogi wyruszyłem pod opieką mojego promotora, dr hab. inż. Marka Miśkowicza, który był inspiratorem tej pracy i niestrudzonym orędownikiem badań nad próbkowaniem wyzwalanym zdarzeniami. Jego doświadczenie i umiejętności pomogły mi przebrnąć przez trudne meandry uczelnianej i naukowej rzeczywistości, umożliwiły finansowanie naszych badań oraz nawiązanie i podtrzymywanie owocnej współpracy z innymi badaczami. Z prof. dr hab. inż. Mirosławem Pawlakiem współpracowałem nad statystycznymi i probabilistycznymi aspektami niniejszej pracy, m.in. podczas kilkukrotnych wizyt na University of Manitoba w Kanadzie, na które miałem zaszczyt być przez niego zaproszonym. Z kolei dr Thao Truong Nguyen z City College of New York (USA) zainspirował użycie metody POCS do rekonstrukcji sygnałów oraz motywował mnie do opisywania zagadnień rekonstrukcji przy pomocy uniwersalnego języka teorii operatorów. Nieocenionym przewodnikiem po naukowych ścieżkach był mój promotor pomocniczy, dr inż. Dariusz Kościelnik, któremu zawdzięczam niezliczoną ilość wspólnych obiadów, dyskusji przy kawie, inspiracji i pomysłów. Ze swoją pasją, ciekawością, naukowym entuzjazmem, pozytywistycznym zapałem dydaktycznym i perfekcjonizmem był dla mnie zawsze wzorem naukowca. Na koniec dziękuję mojej narzeczonej Anecie oraz wszystkim moim przyjaciołom, którzy wspierali mnie i trzymali za mnie kciuki podczas mojej naukowej drogi.

Dominik Rzepka
Kraków, 1 października 2017


Abstract

In this thesis, methods of signal reconstruction from event-triggered sampling (ETS) are explored in the regime of sampling below the Nyquist rate. We propose a classification of ETS systems, which can be universally described as a crossing between a test signal and a signal that is a transformation of the input, optionally accompanied by auxiliary information about the crossing. The first part of the thesis is devoted to generalized nonuniform sampling theory and to methods for reconstructing signals from event-triggered samples, since event-triggered samples usually occur irregularly in time and in some ETS systems they include not only the values of the input but, for example, also its derivative. A new, fast recovery algorithm based on Slepian functions is presented. The output of an ETS system is, however, richer than the sample values alone, since it is known that the signal does not fulfill the triggering condition between consecutive sampling instants. This type of information, called implicit, can be particularly useful in the sub-Nyquist sampling regime. Implicit information usually has the form of inequalities defined over continuous subintervals of the time axis, bounding the signal within a certain range of values. Four methods of reconstruction using such constraints are proposed: with continuous or discretized locations of the constraints, and with strict constraints on the signal values or with attraction of the signal to a so-called initial guess. All four resulting approaches allow a reconstructed signal to be obtained that meets the constraints stemming from ETS with good approximation. Another type of information about the signal which can be extracted using ETS is its local spectral properties. The relation between the occurrence of level crossings and the signal bandwidth is studied using the Rice formula for Gaussian stochastic processes, which states that the mean bandwidth is directly proportional to the mean level-crossing rate. The absolute bandwidth, useful for signal recovery, is proportional to the mean bandwidth, but the proportionality coefficient depends entirely on the spectrum shape, which is usually unknown. We found that the proportionality coefficient can be bounded for an arbitrary spectrum using the Chebyshev and Gauss inequalities, if a generalized type of power bandwidth, describing the span of frequencies containing a given fraction of the total signal power, is used. Three methods of mean bandwidth estimation from level-crossing statistics were developed, studied, and compared numerically. Furthermore, the Rice formula was extended, using nonlinear transformations of the time and amplitude axes, to describe the level-crossing rate of non-Gaussian and non-stationary stochastic processes. A framework for the estimation of time-varying bandwidth was proposed, with the use of the projection onto convex sets method. Reconstruction using this estimate provides a good approximation of the sampled signal and can also be used in conjunction with the methods using implicit information.


Streszczenie

Tematem niniejszej pracy jest rekonstrukcja sygnałów próbkowanych przy użyciu wyzwalania zdarzeniami (PWZ) poniżej częstości Nyquista. Zaproponowano klasyfikację systemów PWZ, które charakteryzuje próbkowanie w momencie przekroczenia sygnału testowego przez sygnał będący transformacją sygnału wejściowego, z opcjonalną dodatkową informacją opisującą zdarzenie. Pierwsza część pracy jest poświęcona uogólnionej teorii próbkowania nierównomiernego i metodom rekonstrukcji sygnałów próbkowanych przy pomocy PWZ, ponieważ otrzymywane w nich próbki są w ogólności rozmieszczone nierównomiernie w czasie i mogą reprezentować nie tylko wartości sygnału, ale również wartości jego przekształceń, np. pochodnych. Zaprezentowano nowy, szybki algorytm rekonstrukcji sygnału oparty o funkcje Slepiana. Próbki z systemów PWZ przenoszą także dodatkową informację, wynikającą z faktu, że pomiędzy kolejnymi próbkami nie jest spełniony warunek wyzwolenia. Tego rodzaju informacja, nazywana ukrytą, może być szczególnie użyteczna w przypadku próbkowania poniżej częstości Nyquista. Informacja ukryta ma postać nierówności określonych w ciągłych podprzedziałach czasu, ograniczających do pewnego zakresu wartości sygnału. Zaproponowano cztery metody rekonstrukcji z użyciem informacji ukrytej: lokalizacje ograniczeń w czasie miały postać ciągłą lub zdyskretyzowaną, a ograniczenia amplitudy realizowane były ściśle lub pośrednio, przy pomocy sygnału atraktora, nazywanego przypuszczeniem początkowym. Wszystkie cztery metody umożliwiły otrzymanie zrekonstruowanego sygnału spełniającego z dobrym przybliżeniem ograniczenia wynikające z informacji ukrytej. Innym rodzajem dodatkowej informacji, którą można określić za pomocą PWZ, są lokalne właściwości widmowe sygnału. Zależność występowania przekroczeń poziomu i pasma sygnału określa wzór Rice'a dla gaussowskich procesów stochastycznych, zgodnie z którym średnie pasmo jest wprost proporcjonalne do średniej częstości przekraczania poziomu. Pasmo bezwzględne, użyteczne do rekonstrukcji sygnału, jest proporcjonalne do pasma średniego, ale współczynnik proporcjonalności zależy całkowicie od zazwyczaj nieznanego kształtu widma. W pracy pokazano, że współczynnik ten można ograniczyć, stosując nierówności Czebyszewa i Gaussa dla tzw. pasma mocy, które opisuje przedział częstotliwości zawierających część całkowitej mocy sygnału. Opracowano trzy metody estymacji średniego pasma na podstawie statystyki przejść przez poziom, a następnie zbadano ich właściwości i porównano wyniki numerycznie. Wzór Rice'a został również uogólniony przy pomocy nieliniowych transformacji osi czasu i amplitudy tak, aby uzyskać zależność średniej częstości przekraczania poziomu dla procesów niegaussowskich i niestacjonarnych. Zaproponowano metodę estymacji zmiennego w czasie pasma z użyciem metody rzutowania na zbiory wypukłe. Rekonstrukcja z wykorzystaniem tej estymaty dostarcza dobrego przybliżenia sygnału próbkowanego i może być wykorzystywana w połączeniu z metodami używającymi informacji ukrytej.


Contents

Acknowledgements
Abstract
Contents

1 Introduction
  1.1 Motivation and dissertation outline
  1.2 Contribution and related publications

2 Fundamentals of event-triggered sampling and reconstruction at sub-Nyquist rates
  2.1 Event-triggered sampling systems
    2.1.1 Background
    2.1.2 Classification
  2.2 Sub-Nyquist sampling
    2.2.1 Overview
    2.2.2 Sparse bandlimited signals
    2.2.3 Signals with the finite rate of innovation
    2.2.4 Signals with varying bandwidth

3 Reconstruction of signals from generalized samples
  3.1 Introduction
    3.1.1 Overview
    3.1.2 Signals in Hilbert spaces
  3.2 Generalized sampling and perfect reconstruction
    3.2.1 Signal analysis
    3.2.2 Uniqueness of the representation
    3.2.3 Signal synthesis
    3.2.4 Reconstruction using inverse Gram matrix
    3.2.5 Whittaker-Shannon interpolation formula
    3.2.6 Reconstruction using frame operator and dual frame
    3.2.7 Reconstruction using inverse of mixed inner product matrix
  3.3 Nonuniform sampling and perfect reconstruction of periodic signals
    3.3.1 Representation of periodic bandlimited signals
    3.3.2 Reconstruction with inverse frame operator
    3.3.3 Reconstruction with dual frame operator
    3.3.4 Reconstruction with inverse mixed frame operator
  3.4 Finite nonuniform sampling and approximate reconstruction
    3.4.1 Signal synthesis and analysis
    3.4.2 Minimum energy solution
    3.4.3 Truncated dual frame and mixed recovery
  3.5 Practical aspects of reconstruction from finite nonuniform samples
    3.5.1 Slepian functions (prolate spheroidal waveform functions)
    3.5.2 Noise and its suppression
      3.5.2.1 Impact of the input noise
      3.5.2.2 Impact of numerical errors
      3.5.2.3 Suppression of input noise and numerical errors
    3.5.3 Fast recovery using Slepian functions
      3.5.3.1 Sparsification of the reconstruction matrix
      3.5.3.2 Computational complexity
      3.5.3.3 Numerical stability
      3.5.3.4 Simulations
    3.5.4 Reconstruction of signal from derivatives' samples
      3.5.4.1 Derivative sampling and reconstruction using dual canonical frames
      3.5.4.2 Finite derivative sampling and reconstruction
  3.6 Summary and the research problems

4 Reconstruction of signal using implicit information
  4.1 Introduction
  4.2 Explicit and implicit information in level-crossing sampling
    4.2.1 Definition
    4.2.2 Informative value of implicit information
    4.2.3 Methods related to signal reconstruction with use of event-triggered samples
      4.2.3.1 Signal recovery methods dedicated to event-triggered samples
      4.2.3.2 Recovery with use of inequality constraints
  4.3 Reconstruction with continuous-time implicit information
    4.3.1 Sets corresponding to level-crossing samples
    4.3.2 Projections onto convex sets
    4.3.3 POCS using sets corresponding to level-crossing samples
      4.3.3.1 Convexity
      4.3.3.2 Iterative projection onto B ∩ I
      4.3.3.3 One-step projection onto B ∩ E
    4.3.4 Choice of the initial guess
      4.3.4.1 Motivation and design requirements
      4.3.4.2 Piecewise constant signal
      4.3.4.3 Piecewise linear signal
  4.4 Reconstruction with discrete-time implicit information
    4.4.1 Amplitude constraints at discrete-time instants
    4.4.2 Reconstruction with discrete-time amplitude constraints using quadratic programming
    4.4.3 Reconstruction with discrete-time attractors using Gaussian regression
  4.5 Performance evaluation
    4.5.1 Test signals
    4.5.2 Numerical experiments
  4.6 Summary

5 Reconstruction of signal according to varying bandwidth
  5.1 Introduction
  5.2 Signal bandwidth and the mean level-crossing rate
    5.2.1 General stationary stochastic process
    5.2.2 Stationary Gaussian process
    5.2.3 Stationary non-Gaussian process
    5.2.4 Relation between absolute and mean bandwidth
    5.2.5 Power bandwidth
  5.3 Constant bandwidth estimation
    5.3.1 Bandwidth estimation paradigm
    5.3.2 Least squares method
    5.3.3 Total excursion length method
    5.3.4 Ratio method
    5.3.5 Numerical comparison of estimators
  5.4 Signals with varying bandwidth
    5.4.1 Definition of local bandwidth
    5.4.2 Determining local bandwidth
      5.4.2.1 Methods based on signal values
      5.4.2.2 Methods based on sampling instants
    5.4.3 Varying bandwidth stochastic process
  5.5 Varying bandwidth estimation
    5.5.1 Intensity estimation
    5.5.2 Estimation of the local mean/power bandwidth
    5.5.3 Local mean estimation
    5.5.4 Numerical evaluation of estimator
  5.6 Reconstruction of signal with varying bandwidth
    5.6.1 Reconstruction method
    5.6.2 Performance evaluation
      5.6.2.1 Test signals
      5.6.2.2 Numerical experiments
  5.7 Summary

6 Closing remarks

List of Figures

Bibliography


Chapter 1

Introduction

1.1 Motivation and dissertation outline

The introduction of the sampling theorem [Shannon, 1949] established a link between the world of physical analog signals and the world of mathematical entities comprehensible to humans and processable by the emerging digital computers. The simplicity and stability of representing a signal with uniform samples gave birth to the field of digital signal processing and became its paradigm. Nowadays, after more than 60 years, the development of integrated circuit technology and its miniaturization following Moore's law allows not only for the common and omnipresent use of digital electronic devices, but also for more sophisticated signal sampling and reconstruction methods that go beyond the Shannon-Nyquist paradigm. The main motivation for this is the need to minimize energy consumption, which would extend the operation time of battery-powered devices. Another important factor is the reduction of communication, due to limited and costly resources. While the bandlimited signal model is able to handle a huge class of signals, it does not always allow for their most efficient representation. A priori knowledge of the signal characteristics, combined with a properly chosen signal model, therefore allows the number of measurements (samples) to be reduced below the Nyquist rate governed by the highest non-negligible frequency component of the signal. On the other hand, the number of measurements must be sufficient to reconstruct the signal with the assumed quality. In some cases it is possible to determine this number before sampling, but the possibility of determining it during the sampling process is also appealing. This very idea of taking measurements at relevant time instants lies behind the concept of event-triggered sampling (called interchangeably event-driven or event-based sampling). Such an approach is especially applicable to signals of a bursty nature, such as biomedical signals (ECG, EEG), radar, geophysics, astronomy or sensor network transmissions. This is because the relevant activity in such signals can easily be captured by simple event-triggered methods, such as level-crossing or its modifications. In this dissertation, the problem of reconstructing a signal from event-triggered samples taken at a sub-Nyquist rate is considered. Chapter 2 addresses two questions: what event-triggered sampling is, and how sub-Nyquist sampling is possible.

In Section 2.1 a classification of event-triggered sampling systems is proposed, showing the common features of seemingly different sampling methods. This systematization shows that the central concept in event-triggered sampling is the level crossing, and it provides the rationale for studying it as an elementary building block of such systems. Next, the most important signal models allowing sub-Nyquist sampling are presented along with the relevant recovery techniques: sparse bandlimited signals, signals with a finite rate of innovation, and signals with varying bandwidth. The last is the one on which this dissertation focuses. Appropriately parametrized event-triggered sampling visibly mimics the behavior of the input signal, in the sense that a bursty input produces a burst of samples and a slowly varying input gives a small number of samples. This makes event-triggered sampling inherently nonuniform. Chapter 3 is therefore devoted to a survey of the theory and practice of nonuniform sampling and reconstruction. Since some event-triggered sampling methods give information not only about the signal value but also about its transformations (e.g. derivatives), the generalized theory of sampling and reconstruction is analyzed as well. Furthermore, an algorithm for fast reconstruction using Slepian functions is introduced. Since the conditions for triggering the sampling are part of the event-triggered sampling system, it is known that in the intervals with no sample the signal does not meet these conditions. This "no-activity" information can also be used in the reconstruction. This type of information is called negative or implicit, in contrast to the sample values and time instants, which belong to the class of explicit information. In Chapter 4 methods for using both types of data are introduced. Exact agreement of the reconstructed signal with the given information is sometimes possible only asymptotically, for an infinite number of iterations of a reconstruction algorithm; therefore approximate, fast algorithms are also proposed. The local event rate appears to be related to the local signal bandwidth. Chapter 5 focuses on explaining the details of this relation for stochastic signals. The new notion of power bandwidth allows bounds constraining this dependence to be determined. Three bandwidth estimation algorithms, intended for stationary signals, are analyzed, and an extension to non-stationary bandwidth estimation is proposed. This allows a varying-bandwidth model of the bursty signal to be used and applied to the reconstruction. The new methods give an additional improvement over the methods of Chapter 4. The ideas studied in the present dissertation allow for developing more efficient event-triggered sampling systems and reconstruction algorithms, but also show possible directions of development in event-triggered signal processing. The conclusions and perspectives are presented in the last, summarizing Chapter 6.

1.2 Contribution and related publications

The main contributions of the present dissertation are the following:

1. Event-triggered sampling systems are systematized using a unified description which shows their connection with level-crossing sampling (Chapter 2).

2. A comprehensive survey of the methods of signal reconstruction from nonuniform samples is presented, along with a primer on sampling and reconstruction in Hilbert spaces (Chapter 3).

3. A new method of approximate signal reconstruction from nonuniform samples is introduced. Slepian functions are used to obtain linear computational complexity, instead of the quadratic or cubic complexity of existing methods (Chapter 3).

4. The possibility of using the implicit information stemming from the absence of events in the intervals between consecutive samples to improve the reconstruction of an event-triggered sampled signal is shown. Implicit information has the form of inequality constraints bounding the values of the signal in continuous subintervals of the time axis.

5. Four algorithms of reconstruction using implicit information are proposed, differing in the method of imposing the constraints (in time: discrete/continuous; in amplitude: exact bounding/fuzzy attraction). For the algorithms based on projections onto convex sets (iterative POCS, one-step POCS), guidelines for designing the initial guess signal are proposed. For the algorithms based on discrete-time constraints (Gaussian regression, quadratic programming), the distribution of the constraining points is studied (Chapter 4).

6. The relation between the bandwidth of a stochastic signal and the mean level-crossing rate is studied in detail. The so-called mean bandwidth is proportional to the mean level-crossing rate, but it is not useful for signal reconstruction. The absolute bandwidth, which is useful for reconstruction, is also proportional to the mean level-crossing rate, but the proportionality coefficient depends entirely on the spectrum shape, which is usually not known before sampling. It is found that the proportionality coefficient can be bounded for all spectra using the Chebyshev (5.40) and Gauss (5.43) inequalities, if the power bandwidth (the span of frequencies containing a given fraction p of the total signal power) is considered. At the same time, the power bandwidth for p close to 1 is useful for signal reconstruction (Chapter 5), and it allows an event-triggered system to be designed which guarantees approximate fulfillment of the Nyquist criterion.

7. The Rice formula has been extended to a subclass of non-Gaussian stochastic processes (5.22), using a nonlinear transformation of the amplitude axis (Chapter 5).

8. Two new methods of bandwidth estimation from the level crossings of a stochastic process are introduced (the least-squares estimator and the ratio estimator), analyzed and tested numerically, outperforming the existing method of total excursion length (Chapter 5).

9. The Rice formula has been extended to a class of time-varying stochastic processes (5.100), using a nonlinear transformation of the time axis (Chapter 5).

10. A method of varying bandwidth estimation is introduced and applied to the reconstruction of a signal from level-crossing samples, in conjunction with the methods using implicit information (Chapter 5). The results outperform the reconstruction methods based on implicit information but without bandwidth estimation, presented in Chapter 4.

The publications related to the present dissertation are:

Journals:

1. D. Kościelnik, D. Rzepka, and J. Szyduczyński, "Sample-and-hold asynchronous sigma-delta time encoding machine," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 63, no. 4, pp. 366–370, 2016. [Kościelnik et al., 2016]

2. D. Rzepka, M. Pawlak, D. Kościelnik, and M. Miśkowicz, "Bandwidth estimation from multiple level-crossings of stochastic signals," IEEE Transactions on Signal Processing, vol. 65, no. 10, pp. 2488–2502, 2017. [Rzepka et al., 2017b]

Chapters:

3. D. Rzepka, M. Pawlak, D. Kościelnik, and M. Miśkowicz, "Reconstruction of varying bandwidth signals from event-triggered samples," in Event-Based Control and Signal Processing. CRC Press, 2015, pp. 529–546. [Rzepka et al., 2015b]

Conference papers:

4. D. Rzepka, M. Miśkowicz, A. Gryboś, D. Kościelnik, "Recovery of bandlimited signal based on nonuniform derivative sampling," in 10th International Conference on Sampling Theory and Applications (SampTA 2013), Bremen, Germany, 2013. [Rzepka et al., 2013]

5. D. Rzepka, M. Miśkowicz, "Recovery of varying-bandwidth signal from samples of its extrema," in Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA 2013), IEEE, 2013, pp. 143–148. [Rzepka and Miśkowicz, 2013]

6. D. Rzepka, M. Miśkowicz, "Fast reconstruction of nonuniformly sampled bandlimited signal using Slepian functions," in Proceedings of the 22nd European Signal Processing Conference (EUSIPCO 2014), IEEE, 2014, pp. 741–745. [Rzepka and Miśkowicz, 2014]

7. D. Rzepka, D. Kościelnik, and M. Miśkowicz, "Recovery of varying-bandwidth signal for level-crossing sampling," in Emerging Technology and Factory Automation (ETFA 2014), IEEE, 2014, pp. 1–6. [Rzepka et al., 2014]

8. D. Rzepka, D. Kościelnik, and M. Miśkowicz, "Compressive sampling of stochastic multiband signal using time encoding machine," in Sampling Theory and Applications (SampTA), 2015 International Conference on, IEEE, 2015, pp. 425–429. [Rzepka et al., 2015a]

9. D. Rzepka, D. Kościelnik, M. Miśkowicz, and N. T. Thao, "Signal recovery from level-crossing samples using projections onto convex sets," in Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP 2016), IEEE, 2016, pp. 1–6. [Rzepka et al., 2016b]

10. D. Rzepka, D. Kościelnik, and M. Miśkowicz, "Sposób i układ do rekonstruowania sygnału zakodowanego metodą level-crossing" (Method and apparatus for reconstruction of signal encoded with level-crossing method), Polish patent application P-417 536, 5/2016. [Rzepka et al., 2016a]

11. D. Rzepka, D. Kościelnik, and M. Miśkowicz, "Clockless signal-dependent compressive sensing of multitone signals using time encoding machine," in 2017 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP 2017), IEEE, 2017, pp. 1–8. [Rzepka et al., 2017a]

12. N. T. Thao, D. Rzepka, "Operator-theoretic approach to minimal-norm bandlimited interpolation of nonuniform samples," in 12th International Conference on Sampling Theory and Applications (SampTA), 2017. [Thao and Rzepka, 2017]


Chapter 2

Fundamentals of event-triggered sampling and reconstruction at sub-Nyquist rates

2.1 Event-triggered sampling systems

2.1.1 Background

The event-based approach was considered in the early days of control, communication, and signal processing system design, as "probably one of the earliest issues to confront with respect to the control system design is when to sample the system so as to reduce the data needing to be transported over the network" [Kumar et al., 2014]. The first event-based systems (also called aperiodic or asynchronous) have appeared in feedback control, data transmission, and signal processing at least since the 1950s, see [Heemels et al., 2012, Aström, 2008, Miśkowicz, 2015b]. As a result of the easier implementation and the existence of a well-developed theory of digital systems relying on periodic sampling, the event-based design strategy failed to compete with the synchronous architectures that have monopolized electrical engineering since the early 1960s. In subsequent decades, event-driven systems attracted rather limited attention from the research community, although topics related to the asynchronous behavior of digital systems, nonuniform sampling, etc. have appeared since the origins of computer technology. The renaissance of interest in the event-based paradigm came more than a decade ago with the publication of a few important research works, independently in the disciplines of control [Arzén, 1999, Bernhardsson and Aström, 1999, Heemels et al., 1999] and signal processing [Sayiner et al., 1996, Tsividis, 2003, Tsividis, 2004, Allier et al., 2003]. These works released great research activity and gave rise to a systematic development of new event-triggered approaches [Miśkowicz, 2015b, Miśkowicz, 2015a].

2.1.2 Classification

The diversity of event-triggered sampling (ETS) systems makes it hard to develop a unified scheme for their classification. There exist, however, some elements that are common to most ETS systems:

1. The event occurs at the time when the input signal x(t) or its transformation y(t) is equal to the test signal Θ(t) (reference signal) or to one of L signals Θ(t) = {Θ1(t), ..., ΘL(t)}, which are known in advance.

2. The output of the ETS system are the time instants t = [tn]n∈𝒩 at which events occur.

3. (Optional) Each event may be described by auxiliary information.

Figure 2.1: Crossing time-encoding machines: a) without feedback, b) with feedback.

A simple class of ETS systems that contains some of the aforementioned elements are the crossing time-encoding machines (C-TEM), proposed in [Gontier and Vetterli, 2014] (Fig. 2.1). The events occur at the instants t = [tn]n∈𝒩 when the input signal x(t) equals the test signal Θ(t), so

    x(t) = \Theta(t) \ \text{for}\ t \in \{t_n\}_{n \in \mathcal{N}}, \qquad x(t) \neq \Theta(t) \ \text{for}\ t \notin \{t_n\}_{n \in \mathcal{N}},        (2.1)

where 𝒩 is a finite (𝒩 = {1, ..., N}) or infinite (𝒩 = Z, 𝒩 = Z+ or 𝒩 = Z−) set of event indices. If the test signal Θ(t) depends on the output t (there exists a feedback), then it is denoted as Θn(t). This definition is sufficient to describe, for example, the crossing of a single level θ, by setting Θ(t) = θ, or the crossing of a sine wave [Bar-David, 1974, Selva, 2012] with Θ(t) = sin(ω0 t). The ETS systems constitute a richer class of architectures, which goes beyond a simple comparison of the input signal with a time-dependent reference. The authors of [Gontier and Vetterli, 2014] point this out for integrate-and-fire sampling [Gerstner and Kistler, 2002], where the signal fed to the C-TEM is the integrated input signal. Let us give some other examples:

1. In the asynchronous sigma-delta modulator (ASDM) [Lazar and Tóth, 2004], events occur at the instants given by the equation

    \frac{1}{\kappa} \int_{t_{n-1}}^{t_n} \big( x(t) - (-1)^n b \big)\, dt = 2\delta(-1)^n,        (2.2)

where κ, b, δ are parameters of the modulator, and the signal obeys |x(t)| < b. Equation (2.2) can, however, be decomposed into the test signal

    \Theta_n(t) = (-1)^n \left[ 2\kappa\delta - b(t - t_{n-1}) \right]        (2.3)

and the triggering signal

    y(t) = \int_{t_{n-1}}^{t} x(\tau)\, d\tau.        (2.4)

Triggering occurs when y(t) = Θn(t). In the SH-ASDM (a modified version of the ASDM) the triggering signal is y(t) = x(tn−1). In general, in ETS systems also a transformation of the input signal can be used for triggering.

2. In extremum sampling [Marvasti, 2001, Kelly et al., 2012] the extrema values and time instants are recorded. The test signal is Θ(t) = 0 and a trigger occurs when y(t) crosses Θ(t), where y(t) = dx(t)/dt. In this type of ETS system the auxiliary information x(tn) is equally important as the event instant tn.

3. In multilevel crossing [Mark and Todd, 1981] an event occurs when the signal crosses one of the multiple levels Θ(t) = {θ1, ..., θL}. The auxiliary information is the index of the level ℓ ∈ {1, ..., L} and (optionally) the direction of crossing (up-crossing, down-crossing). In turn, in on-delta sampling [Miśkowicz, 2006] an event occurs when the signal deviates from the previous sample by more than ∆, which can be represented by the test signals Θ(t) = {x(tn−1) − ∆, x(tn−1) + ∆}. The auxiliary information here is the sign of the deviation. Therefore, in general there can be more than one test signal, and it can also depend on the previous samples.

Figure 2.2: Generic ETS system scheme.

A complete description of those ETS systems is not possible using the C-TEM, however they can all be seen as extensions of the C-TEM. On the grounds of this comparison, the generic scheme of an ETS system is proposed in Fig. 2.2. T is a transformation block, while the AUX block transforms the input signal and an event instant into the feedback signal fn and the auxiliary information an.

The comparator unit outputs the time instants t = [tn]n∈𝒩 at which y(t) is equal to one of the functions Θ(t) = {Θ1(t), ..., ΘL(t)}. Not all ETS systems use all the features of this model, as can be seen in Tab. 2.1, which summarizes the parameters of a few exemplary ETS systems. The included architectures can be grouped into three commonly used classes: reference-crossing sampling, time-encoding machines and threshold-based sampling [Miśkowicz, 2015b], depending on which features of the ETS system model they exploit (Tab. 2.2). It should be stressed, however, that such a classification is not sharp. This is because the representation of a given ETS system with the parameters in Tab. 2.1 is not necessarily unique; for example, the ASDM can also be represented by adaptive level-crossing [Senay et al., 2010], and level-crossing sampling with uniform levels can be viewed as a version of threshold-based sampling. The important observation is that all ETS methods are based on some type of level crossing, which makes the level crossing the central concept of event-based signal processing.

| ETS system | Test signal(s) | Trigger condition | Auxiliary information |
|---|---|---|---|
| Reference-crossing sampling: | | | |
| Level crossing | Θ(t) = θ | x(tn) = Θ(tn) | an = sign(x′(tn)) (optional)¹ |
| Sine-wave crossing | Θ(t) = sin(ω0 t) | x(tn) = Θ(tn) | an = sign(x′(tn)) (optional) |
| Multilevel crossing | Θ(t) = {θ1, ..., θL} | ∃ℓ : x(tn) = Θℓ(tn) | an = {ℓ, sign(x′(tn))} (sign optional) |
| Extremum sampling | Θ(t) = 0 | x′(tn) = Θ(tn) | an = x(tn) |
| Time-encoding machines: | | | |
| Integrate-and-fire | Θ(t) = θ (unipolar), Θ(t) = {−θ, θ} (bipolar) | ∫_{tn−1}^{tn} x(τ)dτ ∈ Θ(tn), or ∫_{tn−1}^{tn} e^{α(tn−tn−1)} x(τ)dτ ∈ Θ(tn) (leaky) | an = sign(Θ(tn) − Θ(tn−1)) |
| ASDM | Θn(t) = (−1)^n [2κδ − b(t − tn−1)] | ∫_{tn−1}^{tn} x(τ)dτ = Θn(tn) | − |
| SH-ASDM | Θn(t) = (−1)^n [2κδ − b(t − tn−1)] | ∫_{tn−1}^{tn} x(tn−1)dτ = Θn(tn) | − |
| Threshold-based sampling: | | | |
| On-delta sampling | Θ(t) = {x(tn−1) − ∆, x(tn−1) + ∆} | x(tn) ∈ Θ(tn) | an = sign(x(tn) − x(tn−1)) |

Table 2.1: Summarized parameters of the exemplary ETS systems.

¹ For simplicity, the signal x(t) is assumed to be differentiable.
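To make the shared level-crossing core of these architectures concrete, the following minimal sketch (my own illustration, not code from the dissertation) implements the C-TEM triggering rule (2.1) numerically: the input x(t) and the test signal Θ(t) are evaluated on a dense grid, and the event instants are located where their difference changes sign. The particular input signal, the level θ = 0.2 and the grid resolution are arbitrary choices made only for this example.

```python
# Minimal sketch of the C-TEM rule (2.1): events occur where x(t) crosses the test signal Theta(t).
# Crossings are bracketed by sign changes of d(t) = x(t) - Theta(t) on a dense grid
# and refined by linear interpolation inside each bracketing interval.
import numpy as np

def level_crossings(t, x, theta):
    """Approximate instants t_n where x(t) = theta(t), given dense samples on grid t."""
    d = x - theta
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]   # sign-change intervals
    return t[idx] - d[idx] * (t[idx + 1] - t[idx]) / (d[idx + 1] - d[idx])

t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)   # example input
theta = 0.2 * np.ones_like(t)                                       # single level, Theta(t) = 0.2
print("event instants:", np.round(level_crossings(t, x, theta), 4))
```

Replacing x(t) with a transformation y(t) (an integral or a derivative) and θ with a feedback-dependent Θn(t) turns the same routine into a crude simulator of the other schemes listed in Tab. 2.1.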

| ETS system class | Reference crossing | TEM | Threshold-based |
|---|---|---|---|
| Feedback | No | Yes | Yes |
| Auxiliary information | Optional | No | Yes |
| Multiple test functions | Optional | No | Yes |

Table 2.2: Comparison of the ETS system classes.

2.2 Sub-Nyquist sampling

2.2.1 Overview

In the classical paradigm of uniform sampling, the signal is sampled at a predetermined sampling rate. This type of sampling is the standard method in analog-to-digital conversion, since most continuous-time signals can be modeled as bandlimited and, according to the Shannon-Nyquist sampling theorem, such signals can be described using discrete-time samples taken above the Nyquist rate fs. There is, however, a large class of signals for which sampling at a rate twice the Nyquist frequency fNYQ = Ω/(2π) of the highest component is not effective, because they can be represented with a smaller number of samples. In general this refers to two classes of signals:

1. A subset of bandlimited signals whose frequency content does not occupy the whole range f ∈ [0, fNYQ].

2. Non-bandlimited signals, which can be modeled approximately as bandlimited, but for which there exists another model that allows such signals to be represented more efficiently in terms of the sampling density.

The following subsections provide a short survey of such signal classes and the methods of their sampling and reconstruction, with special consideration of event-triggered methods.

2.2.2 Sparse bandlimited signals

A well-known phenomenon in uniform sampling of bandlimited signals is aliasing: the frequency components above the Nyquist frequency fNYQ are folded into the band [0, fNYQ] if the sampling rate does not exceed the Nyquist rate. For a signal occupying the whole band [0, fNYQ], this does not allow for a valid representation of the signal with samples of such small density. If, however, the signal occupies only some subsets of [0, fNYQ] (for details see, for example, [Mishali and Eldar, 2011]), it is represented uniquely despite the fact that the sampling rate does not exceed the Nyquist rate. This technique is called undersampling.

Undersampling can be used effectively only for bandpass signals with a proper bandwidth; however, it shows that sampling below the Nyquist rate is possible for signals which are sparse, i.e. part of their frequency-domain representation is zero. This observation can be generalized using the Landau theorem: the sampling rate required for the unique representation of a bandlimited signal is twice the total bandwidth of all its non-zero subbands [Landau, 1967], and it is called the Landau rate (fLND). Such sampling, though, requires a nonuniform sampling pattern and special reconstruction functions [Lin and Vaidyanathan, 1998]. The need for a dedicated sampling pattern is a considerable limitation of such a sampling scheme, since it depends on the given location of the subbands, and it is problematic from the implementation point of view. A universal sampling method was required, with the rate dependent only on the total subband width. This, in turn, calls for a method for determining the actual subband arrangement from the sub-Nyquist samples to allow reconstruction. The answer to these problems are the methods of compressive sampling (CS). The pioneering works [Candes et al., 2006] and [Donoho, 2006] showed that it is possible to solve the underdetermined system of equations

    \Phi_{M \times N}\; s_{N \times 1} = x_{M \times 1}        (2.5)

using l1-norm minimization, given that the vector s is K-sparse (has only K non-zero components) and the matrix Φ satisfies the Restricted Isometry Property (RIP). The RIP is fulfilled with high probability for some types of random matrices, such as Gaussian, Bernoulli and Fourier matrices. The matrix equation (2.5) can be considered as an encoding of the K-sparse vector s ∈ R^N into the form of a dense vector x ∈ R^M using the encoding matrix Φ ∈ R^{M×N} [Bryan and Leise, 2013]. The amount of information required to describe the locations of K non-zero components at N possible positions is log2 C(N, K) bits, where C(N, K) denotes the binomial coefficient. If each sample in x is treated as a carrier of a single bit of information, then the number of samples required for the recovery of the support of s is M ≥ log2 C(N, K) (a commonly used bound which avoids the binomial coefficient is M = O(K log2(eN/K))). For a wider introduction to CS principles see [Bryan and Leise, 2013, Hayashi et al., 2013]. A few approaches to extending finite-dimensional CS to continuous-time signal sampling have been presented. In [Mishali and Eldar, 2009] the use of a universal nonuniform periodic sampling pattern is proposed. The sampling rate 2fLND is required to determine the location of the non-empty subbands. The drawback of this sampling type is the need for a sample-and-hold circuit with an input bandwidth adequate not to the total bandwidth of the subbands but to the input signal bandwidth. An alternative approach is the multiplication of the input signal by a pseudo-random sequence followed by an integrate-and-dump filter [Tropp et al., 2010]. This approach can be further parallelized, as in the modulated wideband converter [Mishali and Eldar, 2011], where the signal is multiplied by a few periodic pseudo-random sequences with a slower chip rate. A comprehensive survey of practical CS methods for analog signals can be found in [Mishali and Eldar, 2011] and [Eldar and Kutyniok, 2012].
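To make the finite-dimensional model (2.5) concrete, the sketch below encodes a K-sparse vector with a random Gaussian matrix and recovers it with orthogonal matching pursuit, used here merely as a simple greedy stand-in for the l1-norm minimization discussed above (it is not a method used later in this thesis). The dimensions N, M, K and the random seed are arbitrary.

```python
# Sketch of compressive sampling in finite dimensions: x = Phi @ s with s K-sparse,
# recovered greedily by orthogonal matching pursuit (OMP).
import numpy as np

def omp(Phi, x, K):
    """Recover a K-sparse s from x = Phi @ s by greedy support selection."""
    support, residual = [], x.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))    # most correlated column
        coef, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)  # least squares on support
        residual = x - Phi[:, support] @ coef
    s_hat = np.zeros(Phi.shape[1])
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(0)
N, M, K = 256, 40, 5                                     # M << N, yet well above log2 C(N, K)
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)           # Gaussian Phi satisfies RIP w.h.p.
x = Phi @ s
print("recovery error:", np.linalg.norm(omp(Phi, x, K) - s))
```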

CS has also been used jointly with event-based sampling. In [Sharma and Sreenivas, 2012] level-crossing sampling is used as a source of nonuniform samples. To increase the randomness of the samples, the reconstruction uses a random subset of the level crossings. In [Mashhadi and Marvasti, 2016] a 1-bit CS approach is exploited to recover the signal from level crossings (or zero crossings). Another approach is the use of a TEM as a sampler with quasi-random sampling instants. In [Kong et al., 2011] the use of an ASDM with additional randomization was proposed. The drawbacks of using the ASDM for wideband signals were pointed out and eliminated in the SH-ASDM [Kościelnik et al., 2016]. CS with the SH-ASDM and the impact of randomization on the reconstruction were analyzed in [Rzepka et al., 2015a] and [Rzepka et al., 2017a].

2.2.3 Signals with the finite rate of innovation

Bandlimited signals owe their reconstructability to the possibility of representing their spectrum via a Fourier series using samples of finite density. There exist, however, other models of signals which require a finite number of parameters per time interval. One of such models is a signal with a finite rate of innovation (FRI),

    x(t) = \sum_{n \in \mathbb{Z}} a_n h(t - t_n),        (2.6)

where h(t) is a kernel function (possibly non-bandlimited). Each kernel is parametrized by an amplitude an and a time shift tn, so the number of degrees of freedom per kernel is 2. In the noiseless case, the number of samples required for the reconstruction of the signal is equal to the number of degrees of freedom. An example of an FRI signal is shown in Fig. 2.3. A periodic approximation, where an = an+N and tn = tn+N (N is the period of the signal parameters), allows for reconstruction using well-known methods of spectral estimation (Prony's method, the annihilating filter), designed for finding the parameters of frequency spikes (a sine's frequency, amplitude, phase, damping). The reconstruction of an FRI signal aims to find the parameters of the spikes (convolved with h(t)) in the time domain, so the spectral estimation methods are applied in the frequency domain [Blu et al., 2008]. A survey of reconstruction methods dedicated to FRI signals can be found in [Eldar and Kutyniok, 2012]. Although FRI signals can represent bursty signals quite well, the literature on the use of event-based sampling with FRI reconstruction methods is very limited. In fact, the only notable attempt to use both techniques together is [Guan, 2012], where a few successful experiments with sequential recovery methods are presented. It seems, however, that classical methods of FRI signal recovery may also be usable in the context of event-based sampling, since it is possible to use spectral estimation techniques with nonuniform samples [Wei et al., 2012].

Figure 2.3: Example of a signal with finite rate of innovation, composed of Gaussian kernels h(t).
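A minimal sketch of the FRI model (2.6), in the spirit of Fig. 2.3, is given below; the Gaussian kernel width, the amplitudes an and the shifts tn are hypothetical values chosen only for illustration.

```python
# Sketch of a finite-rate-of-innovation signal: a sum of shifted Gaussian kernels h(t),
# each described by two parameters (amplitude a_n and location t_n).
import numpy as np

def fri_signal(t, amplitudes, locations, width=0.3):
    h = lambda u: np.exp(-0.5 * (u / width) ** 2)           # Gaussian kernel h(t)
    return sum(a * h(t - tn) for a, tn in zip(amplitudes, locations))

t = np.linspace(0.0, 10.0, 2_000)
a_n = [1.0, -0.8, 0.6, 1.2]           # hypothetical amplitudes
t_n = [1.5, 3.2, 6.0, 8.4]            # hypothetical kernel locations
x = fri_signal(t, a_n, t_n)
print("degrees of freedom of this example:", 2 * len(a_n))   # one amplitude + one shift per kernel
```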

2.2.4 Signals with varying bandwidth

A composition of the two classes mentioned in Section 2.2.1 are the signals with time-varying bandwidth.

The first approach to representing this class of signals is a bandlimited, sparse time-frequency representation. An example is the evolutionary spectrum representation [Oh et al., 2010, Senay, 2011]. The maximum bandwidth Ωmax of the signal is divided into subbands, and in each subband the signal is represented using Slepian functions (see Section 3.5.1). These functions provide the best possible time localization (fastest time decay), allowing for a local representation of the signal components and for zeroing the coefficients corresponding to the time-frequency areas where the signal is absent. However, it is not possible for a bandlimited function to decay to zero. This means that even the time-frequency areas with zero-valued coefficients are not empty, and formally the signal has the bandwidth Ωmax for all t ∈ R, despite the fact that the energy in some bands is very low.

The second approach is to use a class of signals with truly varying bandwidth. One of the possible constructions is time-warping [Clark et al., 1985], where the time axis is subject to the transformation t′ = ζ(t) (with the assumption of a monotonically increasing function ζ(·)). Intuitively, this allows time to be forced to flow slower or faster in some regions. The bandlimited signal y(t), warped in time using x(t) = y(ζ(t)), also has a varying bandwidth. Interestingly, the varying-bandwidth signal x(t) is formally non-bandlimited if the function ζ(t) is not linear (and a linear ζ(t) corresponds to a uniform, constant change of bandwidth). For details see Section 5.4.1.

The third approach to representing a signal with varying bandwidth is Empirical Mode Decomposition (EMD) [Huang et al., 1998], where the signal is decomposed into orthogonal Intrinsic Mode Functions (IMF), i.e. sine waves modulated in amplitude and in frequency,

    x(t) = \sum_{j=1}^{J} c_j(t) \approx \sum_{j=1}^{J} a_j(t) \sin\big(\zeta_j(t)\big)        (2.7)

(for details see Section 5.4.2). Similarly as in time-warping, the signal x(t) is not bandlimited (apart from the case where the ζj(t) are linear functions and the aj(t) are bandlimited).
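The time-warping construction described above can be illustrated with a short sketch (again my own example, under assumed parameter values): a fixed-rate prototype y(t) is composed with a monotonically increasing ζ(t), so the local rate of oscillation of x(t) = y(ζ(t)) changes along the time axis.

```python
# Sketch of a varying-bandwidth signal obtained by time-warping: x(t) = y(zeta(t)).
import numpy as np

alpha = 4.0                                     # assumed warping strength
zeta = lambda t: t + alpha * t ** 2 / 2.0       # monotonically increasing warping of time
y = lambda u: np.sin(2 * np.pi * 5 * u)         # prototype oscillating at a fixed rate

t = np.linspace(0.0, 1.0, 5_000)
x = y(zeta(t))                                  # oscillation rate grows with t
# the local frequency is roughly 5 * d(zeta)/dt = 5 * (1 + alpha * t)
print("approx. local frequency: %.1f Hz at t=0, %.1f Hz at t=1" % (5.0, 5.0 * (1 + alpha)))
```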

Event-triggered sampling methods are particularly useful when applied to signals with varying bandwidth, because the local level-crossing rate is related to the local bandwidth of the signal. The relation between these two quantities and the possibility of using it to reconstruct the signal is one of the subjects of the present thesis. For a detailed analysis of the state-of-the-art event-triggered sampling methods see Chapter 5.


Chapter 3

Reconstruction of signals from generalized samples

3.1 Introduction

3.1.1 Overview

The reconstruction of a bandlimited signal from uniform periodic samples taken at a sufficient rate can be described using the Shannon sampling theorem, which constitutes a fundamental theorem for the discrete representation of analog signals. Although Shannon was not the first to prove this theorem (see the history of the invention in [Higgins, 1985]), he introduced it to the engineering community in his milestone paper [Shannon, 1949], along with a description using the commonly known Fourier series and Fourier transform. Another contribution of Shannon was the insight that the sampling theorem can be generalized to sampling not only the signal, but also its linear transformations, such as derivatives. This idea, mentioned only briefly in [Shannon, 1949], was further developed by Fogel [Fogel, 1955], Jagerman [Jagerman and Fogel, 1956], Linden and Abramson [Linden and Abramson, 1960], and finally generalized by Papoulis [Papoulis, 1977]. The Fourier analysis framework used in these works was appropriate for the extensions of the sampling theorem in which periodic samples were used. Unfortunately, in the case of aperiodic, irregular samples, basic Fourier analysis tools are insufficient and more advanced methods are required. The analysis of nonuniform sampling and reconstruction will be started from the theoretical case of perfect recovery. The development in this area is the domain of the mathematical community and is dominated by functional analysis, frame theory and the non-harmonic Fourier series introduced by Duffin and Schaeffer [Duffin and Schaeffer, 1952]. Perfect reconstruction requires an infinite number of samples fulfilling not only the counterpart of the Nyquist rate condition, but also other constraints on the samples' time arrangement.

From the perspective of practical reconstruction of a signal from nonuniform samples, the setup with a finite number of samples is of higher interest, yielding, however, only an approximate recovery of the sampled signal. A few strategies can be used to handle such a reconstruction problem, of which two are most commonly used. If the sampled signal can be modeled as periodic, then a finite set of periodically repeated samples describes the infinite set of samples. In such a case the signal can be represented using a Fourier series with a finite number of components [Thao, 2015]. Another way is to use an approximate reconstruction of the infinitely long signal from a finite number of samples and the criterion of energy minimization [Yen, 1956]. The aim of this chapter is to present a unified description of the methods from the aforementioned three groups (Table 3.1) using the basics of Hilbert spaces and operator theory.

| Number of samples | Type of bandlimited signal | Signal recovery |
|---|---|---|
| Infinite | Arbitrary | Perfect or approximate |
| Finite | Periodic | Perfect or approximate |
| Finite | Arbitrary | Approximate |

Table 3.1: Categories of sampling and reconstruction.

The unification also allows a generalization of the sampling/reconstruction methods towards sampling of linear transformations of the signal to be obtained easily. Finally, the chapter focuses on the practical issues related to the reconstruction, such as the sensitivity to noise and the computational efficiency of the recovery algorithms. A new reconstruction algorithm, using Slepian functions, with complexity depending linearly on the sample count is introduced [Rzepka and Miśkowicz, 2014].

3.1.2 Signals in Hilbert spaces

A general and convenient framework for sampling and reconstruction are the Hilbert spaces, which are a generalization of the Euclidean spaces. The Hilbert space H is defined as a complete, normed, linear space endowed with an inner product (see e.g. [Shima and Nakayama, 2010]). The objects of the Hilbert space can be vectors and functions, here commonly called "signals". The important operation on the signals from the Hilbert space is the inner product ⟨x, y⟩, which allows the L2-norm of a signal to be defined as ‖x‖ = √⟨x, x⟩. The quantity ‖x‖², in the signal processing interpretation, corresponds to the energy of the signal x. The subspaces of the Hilbert space which will be in common use in this thesis are (Fig. 3.1):

- R^N - Euclidean space of N-dimensional real-valued vectors¹ x = [x1, ..., xN]
- C^N - space of N-dimensional complex-valued vectors x = [x1, ..., xN]

¹ All vectors denoted using brackets [...] are by default column vectors.

Figure 3.1: Hilbert space H and its commonly used subspaces.

- R^Z - space of infinite-dimensional, real-valued vectors x = [..., x−1, x0, x1, ...]
- ℓ2(Z) - subspace of R^Z with finite energy, ‖x‖² < ∞
- L2(R) - space of real-valued functions x(t) defined for all t ∈ R, with finite energy ‖x(t)‖² < ∞
- L2(T) - space of real-valued functions x(t) with finite energy ‖x(t)‖²_T < ∞ in the interval t ∈ T := [τ0, τ0 + T], which is a closed subinterval of R

An important class of Hilbert spaces are the bandlimited signals, defined by their properties in the Fourier (frequency) domain. The Fourier transform of a finite-energy signal x(t) is given by

    F[x(t)] = \int_{-\infty}^{+\infty} x(t)\, e^{-j\omega t}\, dt = X(\omega)        (3.1)

and the inverse Fourier transform of X(ω) is

    F^{-1}[X(\omega)] = \frac{1}{2\pi} \int_{-\infty}^{+\infty} X(\omega)\, e^{j\omega t}\, d\omega = x(t).        (3.2)

For periodic signals (which do not have finite energy in the domain t ∈ R) the discrete frequency components are defined as

    F_T[x(t)] = \frac{1}{T} \int_{0}^{T} x(t)\, e^{-jk 2\pi t / T}\, dt = X_k        (3.3)

Two other Hilbert spaces defined using the Fourier transform are:

• $\mathcal{B}_\Omega$ – $\Omega$-bandlimited signals with finite total energy ($\mathcal{B}_\Omega \subset L^2(\mathbb{R})$), whose Fourier transform is zero outside the interval $\omega \in [-\Omega, \Omega]$ rad/s:
$$ \forall x(t) \in \mathcal{B}_\Omega:\quad x(t) \in L^2(\mathbb{R}),\quad X(\omega) = \mathcal{F}[x(t)],\quad X(\omega) = 0 \;\;\forall \omega \notin [-\Omega, \Omega]. \qquad (3.5) $$
The angular Nyquist frequency is $\Omega = 2\pi f_{\mathrm{NYQ}}$ rad/s, where $f_{\mathrm{NYQ}}$ is the Nyquist frequency in Hz;

• $\mathcal{B}_\Omega^{\mathcal{T}}$ – $\Omega$-bandlimited, $T$-periodic signals with finite energy in the interval $t \in \mathcal{T}$, whose frequency components vanish outside the interval $\omega \in [-\Omega, \Omega]$ rad/s:
$$ \forall x(t) \in \mathcal{B}_\Omega^{\mathcal{T}}:\quad x(t) \in L^2(\mathcal{T}),\quad x(t) = x(t + T),\quad X_k = \mathcal{F}_{\mathcal{T}}[x(t)],\quad X_k = 0 \;\;\text{for}\;\; \left|2\pi k / T\right| > \Omega. \qquad (3.6) $$

The definitions of the inner product and of the energy of signals from the above spaces are given in Tab. 3.2 ($\bar{x}$ denotes the complex conjugate of $x$).

Space | Inner product
$\mathbb{R}^N$, $\mathbb{C}^N$ | $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=1}^{N} x_n \bar{y}_n$
$\mathbb{R}^{\mathbb{Z}}$, $\ell^2(\mathbb{Z})$ | $\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=-\infty}^{+\infty} x_n y_n$
$L^2(\mathbb{R})$, $\mathcal{B}_\Omega$ | $\langle x(t), y(t) \rangle = \int_{-\infty}^{+\infty} x(t)\, y(t)\, dt$
$L^2(\mathcal{T})$, $\mathcal{B}_\Omega^{\mathcal{T}}$ | $\langle x(t), y(t) \rangle_{\mathcal{T}} = \frac{1}{T} \int_{\tau_0}^{\tau_0 + T} x(t)\, y(t)\, dt$

Table 3.2: Inner products in the subspaces of the Hilbert space
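As a simple numerical illustration of how the $L^2(\mathbb{R})$ inner product from Tab. 3.2 is evaluated in practice, the sketch below (Python/NumPy; the Gaussian pulses and their positions are assumptions made only for this example) approximates $\langle x(t), y(t) \rangle$ and the energy $\|x(t)\|^2$ by quadrature and checks them against the known closed forms for Gaussian integrals.

```python
import numpy as np

a, b = 0.3, 1.7
t = np.linspace(-20.0, 20.0, 200001)      # wide, dense grid; the pulses decay very fast
dt = t[1] - t[0]
x = np.exp(-(t - a) ** 2 / 2)             # x(t), a Gaussian pulse centered at a
y = np.exp(-(t - b) ** 2 / 2)             # y(t), a Gaussian pulse centered at b

inner = np.sum(x * y) * dt                # <x(t), y(t)> = integral of x(t) y(t) dt (real-valued signals)
energy = np.sum(x * x) * dt               # ||x(t)||^2, the energy of x(t)

print(inner, np.sqrt(np.pi) * np.exp(-(a - b) ** 2 / 4))   # closed form of the inner product
print(energy, np.sqrt(np.pi))                              # closed form of the energy
```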

3.2 Generalized sampling and perfect reconstruction

3.2.1 Signal analysis

Using the definitions from the previous section, classical sampling is a mapping of a continuous-time signal $x(t) \in \mathcal{B}_\Omega$ to the infinite vector of samples $\mathbf{x} \in \mathbb{R}^{\mathbb{Z}}$. For sampling instants determined by the infinite vector $\mathbf{t} \in \mathbb{R}^{\mathbb{Z}}$, this mapping can be denoted as $S_{\mathbf{t}}$ (Fig. 3.2).

Figure 3.2: Sampling of the signal $x(t)$ at times $\mathbf{t}$ as a mapping $S_{\mathbf{t}}: \mathcal{B}_\Omega \mapsto \mathbb{R}^{\mathbb{Z}}$.

Sampling can be represented using the inner product. Notice that $x(t_n)$ is a sample of the signal $x(t)$ filtered by an ideal lowpass filter with impulse response $s(t) = \frac{\Omega}{\pi}\operatorname{sinc}(\Omega t)$ (where $\operatorname{sinc}(\Omega t) := \frac{\sin(\Omega t)}{\Omega t}$),
$$ (x * s)(t_n) = x(t_n) = \int_{-\infty}^{+\infty} x(t)\, \frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t - t_n)\big)\, dt \qquad (3.7) $$
which, by the definition of the inner product (Tab. 3.2), is equivalent to
$$ x(t_n) = \langle x(t), s(t - t_n) \rangle. \qquad (3.8) $$
The function $s(t)$ used for sampling in the inner product (3.8) is called a sampling function. Equations (3.7) and (3.8) show the fundamental link between convolution (commonly used in engineering as filtering) and the inner product, which provides a highly convenient mathematical model of measurement formulated in Hilbert spaces.

While the formulation of classical sampling as an inner product may seem somewhat artificial when the signal $x(t)$ is already in $\mathcal{B}_\Omega$, its usefulness is much more evident in the case of generalized sampling. For example, the sample of $x'(t) := \frac{dx(t)}{dt}$ is
$$ x'(t_n) = \int_{-\infty}^{+\infty} \frac{dx(t)}{dt}\, \frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t - t_n)\big)\, dt \qquad (3.9) $$
which in the frequency domain (by Parseval's relation) becomes
$$ x'(t_n) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \underbrace{j\omega X(\omega)}_{\mathcal{F}[x'(t)]}\, \underbrace{\Pi\!\left(\tfrac{\omega}{2\Omega}\right) e^{j\omega t_n}}_{\mathcal{F}\left[\frac{\Omega}{\pi}\operatorname{sinc}(\Omega(t - t_n))\right]^{*}} d\omega = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \underbrace{X(\omega)}_{\mathcal{F}[x(t)]}\, \underbrace{j\omega\, \Pi\!\left(\tfrac{\omega}{2\Omega}\right) e^{j\omega t_n}}_{-\mathcal{F}\left[\frac{\Omega}{\pi}\frac{d}{dt}\operatorname{sinc}(\Omega(t - t_n))\right]^{*}} d\omega \qquad (3.10) $$
where
$$ \Pi(\omega) = \begin{cases} 1, & \omega \in [-0.5, 0.5] \\ 0, & \omega \notin [-0.5, 0.5]. \end{cases} \qquad (3.11) $$
In the time domain this gives
$$ x'(t_n) = -\int_{-\infty}^{+\infty} x(t)\, \frac{\Omega}{\pi}\, \frac{d\operatorname{sinc}\big(\Omega(t - t_n)\big)}{dt}\, dt. \qquad (3.12) $$
Therefore, for derivative sampling, the sampling function is $-\frac{d}{dt}\frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t - t_n)\big)$.
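A quick numerical check of (3.8) – that the inner product with the shifted lowpass kernel reproduces the point value $x(t_n)$ of a bandlimited signal – is sketched below (Python/NumPy). The test signal, its bandwidth and the sampling instant are assumptions made for the example; the small residual comes only from truncating and discretizing the integral.

```python
import numpy as np

Omega = np.pi                              # assumed bandwidth [rad/s]; Nyquist period T_Omega = pi/Omega = 1 s
def sinc(u): return np.sinc(u / np.pi)     # unnormalized sinc(u) = sin(u)/u, as used in the text

rng = np.random.default_rng(0)
a = rng.standard_normal(21)                # coefficients of a finite sinc expansion -> x(t) in B_Omega
centers = (np.arange(21) - 10) * (np.pi / Omega)

def x(t):                                  # toy bandlimited test signal
    return sum(ak * sinc(Omega * (t - ck)) for ak, ck in zip(a, centers))

tn = 0.37                                  # an arbitrary (nonuniform) sampling instant
t = np.linspace(-500.0, 500.0, 1000001)    # wide, dense grid for the quadrature
dt = t[1] - t[0]
s_shifted = (Omega / np.pi) * sinc(Omega * (t - tn))   # sampling function s(t - tn)

sample = np.sum(x(t) * s_shifted) * dt     # <x(t), s(t - tn)>, eq. (3.8)
print(sample, x(tn))                       # agreement up to truncation/quadrature error
```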

In general, the sampling of any linear transformation of a signal can be conveniently represented as an inner product with a suitable sampling function. The sampling operator (also called the analysis operator) can therefore be defined as a vector composed of the inner products
$$ \mathbf{x} = S_{\mathbf{t}}\, x(t) = \Big[ \big\langle x(t),\, s(t - t_n) \big\rangle \Big]_{n \in \mathbb{Z}} \qquad (3.13) $$
or, more generally,
$$ \mathbf{x} = S\, x(t) = \Big[ \big\langle x(t),\, s_n(t) \big\rangle \Big]_{n \in \mathbb{Z}} \qquad (3.14) $$
where $\{s_n(t)\}_{n\in\mathbb{Z}}$ is a certain set of sampling functions from $\mathcal{B}_\Omega$. Using the analogy with the matrix–vector multiplication $\mathbf{A}\mathbf{x} = \mathbf{b}$, where the components of the output vector $\mathbf{b}$ are inner products between the vector $\mathbf{x}$ and the rows of the matrix $\mathbf{A}$, sampling (3.14) can be presented in the quasi-matrix form
$$ \mathbf{x} = \underbrace{\begin{bmatrix} \vdots \\ \text{--- } s_{-1}(t) \text{ ---} \\ \text{--- } s_{0}(t) \text{ ---} \\ \text{--- } s_{1}(t) \text{ ---} \\ \vdots \end{bmatrix}}_{S} \underbrace{\Big[\, x(t) \,\Big]}_{\text{signal}} = \underbrace{\begin{bmatrix} \vdots \\ \langle x(t), s_{-1}(t) \rangle \\ \langle x(t), s_{0}(t) \rangle \\ \langle x(t), s_{1}(t) \rangle \\ \vdots \end{bmatrix}}_{\text{samples}}. \qquad (3.15) $$

3.2.2 Uniqueness of the representation

Given the samples $\mathbf{x}$, we can ask what can be said about the original signal $x(t)$. Let the starting point be uniform sampling, where the sampling instants are $\mathbf{u} \in \mathbb{R}^{\mathbb{Z}}$, $u_n = nT$, and the sampling frequency is $f_s = 1/T$. To emphasize the difference between the actual sampling period $T$ and the Nyquist-rate sampling period, the latter will be denoted by $T_\Omega$. The relation of the sampling frequency to the signal bandwidth $f_{\mathrm{NYQ}}$ (Nyquist frequency) determines three classes of reconstruction capability:

1. Critical sampling: the sampling frequency equals the Nyquist rate ($f_s = 2f_{\mathrm{NYQ}}$). The samples determine the signal completely and uniquely. Every vector of samples $\mathbf{x}$ determines exactly one signal $x(t)$, and $S_{\mathbf{u}}$ is a bijection (Fig. 3.3a).

2. Oversampling: the sampling frequency is above the Nyquist rate ($f_s > 2f_{\mathrm{NYQ}}$). The samples $\mathbf{x}$ are able to determine completely all signals with bandwidth $\Omega$, but also signals whose bandwidth is higher than $\Omega$. As some vectors $\mathbf{x}$ determine a signal $x(t) \notin \mathcal{B}_\Omega$, $S_{\mathbf{u}}$ is an injection but not a surjection (Fig. 3.3b).

3. Undersampling: the sampling frequency is below the Nyquist rate ($f_s < 2f_{\mathrm{NYQ}}$). The same vector of samples $\mathbf{x}$ corresponds to multiple signals $x_1(t) \neq x_2(t)$ (which, in the context of uniform sampling, is called aliasing); a numerical illustration is sketched below. $S_{\mathbf{u}}$ is a surjection but not an injection (Fig. 3.3c).
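The non-injectivity of $S_{\mathbf{u}}$ under undersampling can be seen in a few lines of code (Python/NumPy). The sampling rate and the frequencies are assumptions made for the illustration, and pure sinusoids are used only as a simple stand-in for bandlimited signals: two different signals produce exactly the same vector of uniform samples.

```python
import numpy as np

fs = 1.0                                  # assumed sampling rate [Hz]
n = np.arange(16)
t_n = n / fs                              # uniform sampling instants u_n = n*T

f1, f2 = 0.2, 0.2 + fs                    # f2 exceeds fs/2, so it aliases onto f1
x1 = np.cos(2 * np.pi * f1 * t_n)
x2 = np.cos(2 * np.pi * f2 * t_n)

print(np.max(np.abs(x1 - x2)))            # ~0: two distinct signals share the same sample vector
```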

Figure 3.3: Sampling operator $S_{\mathbf{u}}$ as a mapping between the signal $x(t) \in \mathcal{B}_\Omega$ and the samples $\mathbf{x} \in \mathbb{R}^{\mathbb{Z}}$: a) critical sampling, b) oversampling, c) undersampling.

In this section we focus on cases 1) and 2), where the vector of samples $\mathbf{x}$ determines the signal $x(t)$ perfectly. Case 3) is the subject of ongoing research [Thao and Rzepka, 2017].

The properties of the general sampling operator $S$ depend on the choice of the functions $\{s_n(t)\}_{n\in\mathbb{Z}}$. In the case of the mapping $S_{\mathbf{t}}$, these functions are shifted copies of $\frac{\Omega}{\pi}\operatorname{sinc}(\Omega t)$, and the ability to describe a bandlimited signal depends on the choice of the sampling instants $[t_n]_{n\in\mathbb{Z}}$. The properties of the space $\mathcal{B}_\Omega$ make it possible to abandon the uniformity of sampling while preserving the ability to represent a signal uniquely. The conditions which guarantee this ability in the case of the sampling functions $s_n(t) = \frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t - t_n)\big)$ can be stated with reference to the uniform sampling sequence:

1. All time deviations between the corresponding sampling instants of the uniform sequence $\mathbf{u} = [nT]_{n\in\mathbb{Z}}$ ($T \le T_\Omega$) and the nonuniform sequence $\mathbf{t}$ are bounded² with $0 < \tau_{\max} < \infty$ and $\alpha > 1$ (sufficient condition):
$$ |t_n - u_n| \le \frac{\tau_{\max}}{\sqrt[\alpha]{|n|}}. \qquad (3.16) $$

2. Samples do not overlap (necessary condition):
$$ t_n \neq t_m, \quad \forall n \neq m. \qquad (3.17) $$

Alternative sufficient conditions can be found in [Young, 2001, Higgins, 2004, Kadec, 1964, Beutler, 1966]. The above formulation gives the useful intuition that nonuniform samples are not allowed to be bunched within only a finite interval (or intervals) of time; they can be irregular only locally and must become uniform as $n \to \infty$. Since $\tau_{\max}$ is required only to be finite, an arbitrarily large but finite gap between samples is allowed. The constraints (3.16) and (3.17) are related to the frame bounds, a condition which describes a set of functions able to represent any element of the relevant linear space [Christensen, 2008]. A frame is a generalization of the notion of a basis for the space (bases ⊂ frames).

² The formal condition is $|t_n - u_n| \le M n^{-\alpha}$ for some $0 < M < \infty$, $\alpha < 1$ and $T = T_\Omega$ [Higgins, 2004].

In contrast to a basis, a frame can be overcomplete, i.e. it may contain more vectors than is necessary for the representation. The set of functions $\{s_n(t)\}_{n\in\mathbb{Z}}$ is a frame for the space $\mathcal{B}_\Omega$ if there exist $0 < A \le B < \infty$ such that for all $x(t) \in \mathcal{B}_\Omega$
$$ A\|x(t)\|^2 \le \sum_{n\in\mathbb{Z}} \big| \langle x(t), s_n(t) \rangle \big|^2 \le B\|x(t)\|^2. \qquad (3.18) $$
The bounded expression is in fact the energy of the samples obtained using the sampling functions $\{s_n(t)\}_{n\in\mathbb{Z}}$, so (3.18) can be equivalently written as
$$ A\|x(t)\|^2 \le \|\mathbf{x}\|^2 \le B\|x(t)\|^2. \qquad (3.19) $$
Due to the upper bound $\|\mathbf{x}\|^2 \le B\|x(t)\|^2$, the energy of the samples $\mathbf{x}$ cannot be infinite for a signal $x(t)$ with finite energy (and all signals from $\mathcal{B}_\Omega$ have finite energy by definition (3.5)). Such a forbidden case could occur if the whole infinite sequence of samples were confined to a finite interval of time, which explains the constraints on the sampling instants (3.16). The upper bound also restricts the space to which the samples $\mathbf{x}$ belong to $\ell^2(\mathbb{Z})$, instead of the wider space $\mathbb{R}^{\mathbb{Z}}$. The lower bound $A\|x(t)\|^2 \le \|\mathbf{x}\|^2$ ensures that every non-zero signal has non-zero samples ($\mathbf{x} \neq \mathbf{0}$). If it were not met for some signal $x(t) \neq 0$, then the same zero samples would equally well describe every other signal $y(t) = Cx(t)$, $C \in \mathbb{R}$, which excludes uniqueness of the representation.

3.2.3 Signal synthesis

If the functions $\{s_n(t)\}_{n\in\mathbb{Z}}$ constitute a frame, then the sample vector $\mathbf{x} = \big[\langle x(t), s_n(t)\rangle\big]_{n\in\mathbb{Z}}$ perfectly represents the signal $x(t)$. The set of functions $\{s_n(t)\}_{n\in\mathbb{Z}}$ can, however, also be used for the reconstruction of a signal. The synthesis of a signal can be written generally as
$$ x(t) = S^* \mathbf{c} = \sum_{n\in\mathbb{Z}} c_n\, s_n(t) \qquad (3.20) $$
and represented as in Fig. 3.4.

Figure 3.4: Reconstruction of the signal $x(t)$ using the mapping $S^*: \ell^2(\mathbb{Z}) \mapsto \mathcal{B}_\Omega$.

The vector $\mathbf{c} \in \ell^2(\mathbb{Z})$ is a vector of weights appropriate for the signal $x(t)$. $S^*$ is called the reconstruction operator (or synthesis operator) and is the adjoint of $S$ [Christensen, 2008].

Using the analogy with the matrix–vector multiplication $\mathbf{A}\mathbf{x} = \mathbf{b}$, where the result $\mathbf{b}$ is a sum of the columns of the matrix $\mathbf{A}$ weighted by the components of the vector $\mathbf{x}$, the reconstruction can be presented in the quasi-matrix form
$$ x(t) = \underbrace{\begin{bmatrix} \cdots & \begin{matrix} | \\ s_{-1}(t) \\ | \end{matrix} & \begin{matrix} | \\ s_{0}(t) \\ | \end{matrix} & \begin{matrix} | \\ s_{1}(t) \\ | \end{matrix} & \cdots \end{bmatrix}}_{S^*} \underbrace{\begin{bmatrix} \vdots \\ c_{-1} \\ c_{0} \\ c_{1} \\ \vdots \end{bmatrix}}_{\mathbf{c}} = \ldots + c_{-1}\, s_{-1}(t) + c_{0}\, s_{0}(t) + c_{1}\, s_{1}(t) + \ldots $$

3.2.4 Reconstruction using inverse Gram matrix

A naturally arising question is: what is the relation between $\mathbf{c}$ and $\mathbf{x}$? Using the fact that sampling with the frame $\{s_n(t)\}_{n\in\mathbb{Z}}$ is a mapping $S: \mathcal{B}_\Omega \to \ell^2(\mathbb{Z})$, we can apply $S$ to both sides of (3.20),
$$ S^*\mathbf{c} = x(t), \qquad SS^*\mathbf{c} = S\,x(t), \qquad \mathcal{S}\mathbf{c} = \mathbf{x}. \qquad (3.21) $$
$SS^* = \mathcal{S}$ is an infinite-dimensional matrix (called the Gram matrix), since $SS^*$ maps from $\ell^2(\mathbb{Z})$ to $\ell^2(\mathbb{Z})$. If the frame $\{s_n(t)\}_{n\in\mathbb{Z}}$ is a basis, then $\mathcal{S}$ is always invertible. Otherwise (in the case of oversampling), to maintain invertibility, $\mathbf{x}$ must contain samples of a signal with bandwidth $\Omega$³ [Strohmer, 2000]. Therefore we can write
$$ \mathbf{c} = \mathcal{S}^{-1}\mathbf{x}, \qquad S^*\mathbf{c} = S^*\mathcal{S}^{-1}\mathbf{x}, \qquad x(t) = S^*\underbrace{\mathcal{S}^{-1}\mathbf{x}}_{\mathbf{c}} \qquad (3.22) $$
which can be represented in the form of the block diagram in Fig. 3.5. The operator $S^*\mathcal{S}^{-1}: \ell^2(\mathbb{Z}) \to \mathcal{B}_\Omega$ belongs to the family of pseudoinverse operators $S^+$, which fulfil
$$ x(t) = S^+ S\, x(t). \qquad (3.23) $$

³ In the case of oversampling, the vector $\mathbf{x}$ may contain a representation of a signal with bandwidth exceeding $\Omega$, which cannot be represented using the synthesis operator $S^*: \ell^2(\mathbb{Z}) \to \mathcal{B}_\Omega$. Therefore, for such signals the mapping $SS^* = \mathcal{S}$ cannot be inverted either, since it is not able to produce a representation of a signal with bandwidth higher than $\Omega$.
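A finite-dimensional sketch of the recovery $x(t) = S^*\mathcal{S}^{-1}\mathbf{x}$ from (3.22) is given below (Python/NumPy). Truncating the sampling set to a finite window makes the result only approximate (most visibly near the window edges); the jittered sampling grid, the test signal and the closed form used for the Gram entries, $\langle s_m, s_n \rangle = \frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t_m - t_n)\big)$, are assumptions made for this illustration rather than prescriptions from the text.

```python
import numpy as np

Omega = np.pi
def sinc(u): return np.sinc(u / np.pi)

rng = np.random.default_rng(1)
# nonuniform instants: a Nyquist-rate grid with bounded jitter (assumed setup)
t_n = np.arange(-20, 21) * (np.pi / Omega) + 0.2 * rng.uniform(-1.0, 1.0, 41)

def x(t):                                           # toy test signal in B_Omega
    return sinc(Omega * (t - 0.4)) + 0.5 * sinc(Omega * (t + 2.2))

samples = x(t_n)                                    # x_n = <x, s(. - t_n)> = x(t_n) for bandlimited x
G = (Omega / np.pi) * sinc(Omega * (t_n[:, None] - t_n[None, :]))   # truncated Gram matrix S S*
c = np.linalg.solve(G, samples)                     # c = G^{-1} x, cf. (3.22)

t = np.linspace(-10.0, 10.0, 2001)
x_hat = (Omega / np.pi) * sinc(Omega * (t[:, None] - t_n[None, :])) @ c   # x_hat(t) = S* c
print(np.max(np.abs(x_hat - x(t))))                 # small inside the sampled window
```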

Figure 3.5: S&R (sampling and reconstruction) of the signal $x(t)$ using the inverse Gram matrix $\mathcal{S}^{-1}$.

3.2.5 Whittaker–Shannon interpolation formula

Let us go back to the well-known Whittaker–Shannon interpolation formula [Shannon, 1949] (Fig. 3.6)
$$ x(t) = \sum_{n\in\mathbb{Z}} x(nT_\Omega)\,\operatorname{sinc}\big(\Omega(t - nT_\Omega)\big). \qquad (3.24) $$

Figure 3.6: Uniform S&R using Whittaker–Shannon interpolation.

One may ask why no inversion is required in (3.24) and why the only weighting coefficients used are the samples of the signal. The reason is that the sampling functions $\big\{\frac{\Omega}{\pi}\operatorname{sinc}\big(\Omega(t - nT_\Omega)\big)\big\}_{n\in\mathbb{Z}}$ (see (3.7)) form not only a frame for the bandlimited signals from $\mathcal{B}_\Omega$, but also an orthogonal basis. This implies that the Gram matrix $\mathcal{S}_{\mathbf{u}} = S_{\mathbf{u}} S_{\mathbf{u}}^*$ is a scaled identity mapping $\alpha I$, with $\alpha = \frac{\Omega}{\pi}$, which can be shown as follows:
$$ \mathbf{x} = S_{\mathbf{u}}\, x(t), \qquad x(t) = \frac{\pi}{\Omega} S_{\mathbf{u}}^{*}\, \mathbf{x} = \underbrace{\frac{\pi}{\Omega} S_{\mathbf{u}}^{*} S_{\mathbf{u}}}_{I}\, x(t), \qquad \mathcal{S}_{\mathbf{u}} = \frac{\Omega}{\pi} I. \qquad (3.25) $$
The factor $\frac{\pi}{\Omega}$ in (3.25) must be used to obtain the reconstruction function $\operatorname{sinc}(\Omega t)$ in (3.24) from the sampling function $\frac{\Omega}{\pi}\operatorname{sinc}(\Omega t)$. The relation between the samples and the weighting coefficients can then be written in the form
$$ \mathbf{x} = \mathcal{S}_{\mathbf{u}}\, \mathbf{c} = \frac{\Omega}{\pi} I \mathbf{c} = \frac{\Omega}{\pi} \mathbf{c}. \qquad (3.26) $$
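For later comparison with the nonuniform case, a direct numerical use of the Whittaker–Shannon formula (3.24) is sketched below (Python/NumPy; the test signal and the truncation of the sum to a finite number of terms are assumptions of the example). No matrix inversion appears: the samples themselves act as the weighting coefficients.

```python
import numpy as np

Omega = np.pi
T_Omega = np.pi / Omega                    # Nyquist-rate sampling period
def sinc(u): return np.sinc(u / np.pi)

def x(t):                                  # toy test signal in B_Omega
    return sinc(Omega * (t - 0.3)) - 2.0 * sinc(Omega * (t + 1.7))

n = np.arange(-50, 51)
samples = x(n * T_Omega)                   # uniform Nyquist-rate samples x(n T_Omega)

t = np.linspace(-5.0, 5.0, 1001)
x_hat = sinc(Omega * (t[:, None] - n[None, :] * T_Omega)) @ samples   # eq. (3.24), truncated
print(np.max(np.abs(x_hat - x(t))))        # small; the residual is due to truncating the sum
```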

3.2.6 Reconstruction using frame operator and dual frame

In the case of nonuniform sampling it is also possible to design a reconstruction formula that uses the samples as coefficients, similarly to the Whittaker–Shannon interpolation formula. Using the fact that $S^*: \ell^2(\mathbb{Z}) \to \mathcal{B}_\Omega$, we can apply $S^*$ to both sides of (3.14),
$$ S\,x(t) = \mathbf{x}, \qquad S^*S\,x(t) = S^*\mathbf{x}. $$
The mapping $S^*S: \mathcal{B}_\Omega \to \mathcal{B}_\Omega$ is called the frame operator, and it is invertible provided that $\{s_n(t)\}_{n\in\mathbb{Z}}$ is a frame [Christensen, 2008]:
$$ x(t) = (S^*S)^{-1} S^* \mathbf{x}. \qquad (3.27) $$
Since the operator $(S^*S)^{-1}S^*$ fulfils (3.23), it is also a pseudoinverse $S^+$. Notice that $S^+ = (S^*S)^{-1}S^*$ is a mapping $\ell^2(\mathbb{Z}) \to \mathcal{B}_\Omega$, so it is a type of synthesis operator which uses some functions $\{\tilde{s}_n(t)\}_{n\in\mathbb{Z}}$,
$$ x(t) = (S^*S)^{-1} S^* \mathbf{x} = \sum_{n\in\mathbb{Z}} x_n\, \tilde{s}_n(t). \qquad (3.28) $$
The set $\{\tilde{s}_n(t)\}_{n\in\mathbb{Z}}$ constitutes a dual frame of $\{s_n(t)\}_{n\in\mathbb{Z}}$ [Christensen, 2008]. Sampling and reconstruction using a dual frame are shown in Fig. 3.7.

Figure 3.7: S&R of the signal $x(t)$ using a dual frame.

For bandlimited signals under critical sampling, the functions $\{\tilde{s}_{n,\mathbf{t}}(t)\}_{n\in\mathbb{Z}}$ are given explicitly as
$$ \tilde{s}_{n,\mathbf{t}}(t) = \prod_{m\in\mathbb{Z}\setminus\{n\}} \frac{t - t_m}{t_n - t_m} \qquad (3.29) $$
and the expansion (3.28) coincides with Lagrange interpolation [Yao and Thomas, 1967]. For oversampling, with the samples $\mathbf{x} = S_{\mathbf{t}}\, x(t)$ containing the representation of a signal with bandwidth not exceeding $\Omega$ ($x(t) \in \mathcal{B}_\Omega$), the functions $\{\tilde{s}_{n,\mathbf{t}}(t)\}_{n\in\mathbb{Z}}$ also have a closed-form expression based on (3.29) with an additional compensation factor [Yao and Thomas, 1967]. The functions $\tilde{s}_{n,\mathbf{t}}(t)$ can also be called impulse responses of the operator $S^+ = (S_{\mathbf{t}}^* S_{\mathbf{t}})^{-1} S_{\mathbf{t}}^*$, since
$$ \tilde{s}_{n,\mathbf{t}}(t) = (S_{\mathbf{t}}^* S_{\mathbf{t}})^{-1} S_{\mathbf{t}}^*\, \boldsymbol{\delta}_n \qquad (3.30) $$
where $\boldsymbol{\delta}_n$ is an $\mathbb{R}^{\mathbb{Z}}$ vector of all zeros with the exception of a single 1 at the $n$-th position (Kronecker delta). The sinc function is a special case of $\tilde{s}_{n,\mathbf{u}}(t)$, where the sampling instants $\mathbf{u}$ are equidistant.
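In a finite truncation, the dual-frame functions of (3.28)–(3.30) can be approximated by combining the sampling functions with the inverse of the truncated Gram matrix, $\tilde{s}_n(t) \approx \sum_m [\mathcal{S}^{-1}]_{nm}\, s_m(t)$. The sketch below (Python/NumPy; the sampling set and the closed form of the Gram entries are assumptions, as in the earlier sketch) builds these functions and checks the interpolation property $\tilde{s}_n(t_m) = \delta_{nm}$, which the Lagrange functions (3.29) possess as well.

```python
import numpy as np

Omega = np.pi
def sinc(u): return np.sinc(u / np.pi)

rng = np.random.default_rng(2)
t_n = np.arange(-15, 16) * (np.pi / Omega) + 0.2 * rng.uniform(-1.0, 1.0, 31)

G = (Omega / np.pi) * sinc(Omega * (t_n[:, None] - t_n[None, :]))   # truncated Gram matrix
G_inv = np.linalg.inv(G)

def dual(n, t):
    """Truncated dual-frame function s~_n(t) = sum_m [G^{-1}]_{n,m} s_m(t)."""
    S_t = (Omega / np.pi) * sinc(Omega * (np.atleast_1d(t)[:, None] - t_n[None, :]))
    return S_t @ G_inv[n, :]

# interpolation property: s~_5 is ~1 at t_5 and ~0 at every other sampling instant
print(np.round(dual(5, t_n), 3))
```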
