## Held in NEB 409, Wednesdays at 3:00

### Seminars in Spring 2015

13 Feb. 2015 | Vestibular prosthetics to restore the 6th sense

Jack DiGiovanna—The vestibular system informs the brain of the body's position, rotation, and acceleration. One organ group in this system is the semicircular canals within each inner ear. These canals detect rotation of the head, which is critical for stabilizing vision, e.g., while walking. In patients with vestibular loss these organs are non-functional, which can lead to symptoms such as vertigo and oscillopsia (blurred vision).

### Seminars in Fall 2014

10 Dec. 2014 | Seizure Detection via State-space Model and Deep Neural Network

Qi Yu—Epilepsy is a serious brain disorder that affects about 50 million people worldwide. Despite the effectiveness of traditional medication and surgery, one third of patients do not respond to treatment. One promising option for these patients is responsive electronic stimulation, which is the main goal of my work. The responsive system detects seizure onset from EEG at an early stage and then triggers stimulation to suppress the ongoing seizure. For effective seizure detection, I adopted a state-space model and a deep learning model with a correntropy cost to obtain high sensitivity and a low false alarm rate; based on this detection, I built a closed-loop brain-machine interface system that delivers responsive stimulation for seizure control. Our system reduced seizure duration by 32%, which demonstrates its effectiveness in seizure suppression.
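
As a toy illustration of the correntropy-induced cost mentioned above (a sketch, not the detector presented in the talk; the function name and kernel width are invented for this example): the C-loss is bounded, so a single gross outlier cannot dominate the average loss the way it does under the squared error.

```python
import numpy as np

def correntropy_loss(errors, sigma=1.0):
    """Correntropy-induced loss (C-loss): each error contributes at most 1,
    unlike the squared error, which grows without bound."""
    errors = np.asarray(errors, dtype=float)
    return np.mean(1.0 - np.exp(-errors**2 / (2.0 * sigma**2)))

clean = correntropy_loss([0.1, -0.2, 0.05])
outlier = correntropy_loss([0.1, -0.2, 50.0])   # one gross outlier
# The loss saturates: the outlier adds at most 1/n to the mean loss.
```

This boundedness is what yields a low false alarm rate under impulsive EEG artifacts.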

19 Nov. 2014 | A computational model for animal navigation in turbulent odor plumes

In Jun Park—Animals can detect chemical cues with their olfactory systems and use these cues to locate an odor source. Based on animal behavior studies, intermittency, the time interval between odor encounters inherent in the turbulent odor plume structure, has been suggested as a potential directional cue for localizing the odor source. Moreover, recent work has shown that the spiny lobster, a species that relies heavily on olfactory search, has specialized populations of neurons within its olfactory organ that peripherally encode temporal information about odor encounters. However, it is still unclear how animals relying on olfaction locate odor sources, particularly in turbulent environments. In this work, we first enhance the intermittency-coding model to incorporate the effect of stimulus amplitude. Then, by studying odor plume dynamics using videos of real plumes, we show that recurrence theory can be applied in our setting. Specifically, the T1 and T2 recurrence times of the odor signal dynamics yield a directional cue, and animals can estimate it using the proposed neural model. We demonstrate and quantify, in a synthetic environment, a gradient search strategy for olfactory search based on bilateral sensing of intermittency. As the neural model is based on a population of neurons, calcium imaging is an efficient recording technique; unfortunately, its temporal resolution is low, so we also propose a methodology to automatically extract spike burst timings based on maximum-entropy blind deconvolution. This will simplify further data collection with a minimal decrease in quality versus single-cell recordings.
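
The intermittency cue described above can be sketched in a few lines: given a toy concentration trace, measure the time between successive odor encounters (upward threshold crossings). This is only a schematic of the idea, not the speaker's neural model; the threshold and sampling period are invented.

```python
import numpy as np

def intermittency_intervals(odor, threshold, dt=1.0):
    """Return the time intervals between successive odor encounters,
    where an encounter is an upward crossing of `threshold`."""
    above = np.asarray(odor) > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return np.diff(onsets) * dt

signal = [0, 2, 0, 0, 3, 0, 0, 0, 4, 0]   # toy odor concentration trace
ivals = intermittency_intervals(signal, threshold=1.0, dt=0.1)
# Encounters at samples 1, 4, 8 -> intervals of 0.3 s and 0.4 s
```

Comparing such interval statistics between the two antennules is the essence of the bilateral-sensing strategy.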

1 Nov. 2014 | Marine Animal Classification Using Combined CNN and Hand-designed Image Features

Zheng Cao—The classification of marine animals from visual data is usually based on hand-designed image features. These features come in the form of shape, color, and texture descriptors, or sometimes biological characteristics pertaining to certain animal species. Recently, DeCAF, a deep convolutional neural network (CNN) trained on millions of natural images, has been shown to be highly effective in a variety of computer vision tasks. This paper studies two real-world marine animal classification problems and finds that combining CNN features with conventional hand-designed image features yields better accuracy than either alone. The strategy for choosing hand-designed features is also discussed.
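
A minimal sketch of the combination strategy (not the paper's exact pipeline; the standardization step and toy dimensions are assumptions): concatenate the CNN features with the hand-designed descriptors and standardize, so a downstream classifier can use both on an equal footing.

```python
import numpy as np

def combine_features(deep_feats, hand_feats):
    """Concatenate CNN features with hand-designed descriptors, then
    standardize per dimension so neither feature set dominates by scale."""
    X = np.hstack([deep_feats, hand_feats])
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-8
    return (X - mu) / sd

# Toy example: 4 samples, 3 "CNN" dims + 2 "hand-designed" dims on a
# very different scale.
rng = np.random.default_rng(0)
X = combine_features(rng.normal(size=(4, 3)), rng.normal(size=(4, 2)) * 100)
```

The standardized matrix `X` would then be fed to any standard classifier (e.g., an SVM).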

22 Oct. 2014 | Kernel classifiers of C-loss via half-quadratic optimization

Guibiao Xu—The correntropy-induced loss function (C-loss) has been developed previously, and Rosha has successfully investigated a kernel adaptive classifier based on the C-loss. In this presentation, I propose another optimization method, the half-quadratic optimization algorithm, for optimizing the kernel classifier with the C-loss. Some new insights can be obtained from this perspective.
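
The half-quadratic idea can be illustrated on a linear model (a simplified sketch, not the talk's kernel classifier): fixing the auxiliary variables turns each C-loss step into a weighted least-squares problem, which makes the fit robust to a gross outlier.

```python
import numpy as np

def hq_closs_fit(X, y, sigma=1.0, iters=20):
    """Half-quadratic minimization of the C-loss for a linear model:
    each step fixes auxiliary weights w_i = exp(-e_i^2 / (2*sigma^2))
    and solves the resulting weighted least-squares problem."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        e = y - X @ beta
        w = np.exp(-e**2 / (2 * sigma**2))        # auxiliary variables
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Recover slope 2 despite one gross outlier.
X = np.linspace(1.0, 5.0, 6)[:, None]
y = 2.0 * X[:, 0]
y[3] += 30.0                                      # outlier
beta = hq_closs_fit(X, y)
```

The outlier's weight decays to essentially zero, so the remaining points determine the fit.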

### Seminars in Spring 2014

2 Apr. 2014 | Kernel Adaptive Recurrent Filtering

Kan Li—We present a novel kernel adaptive recurrent filtering (KARF) algorithm obtained by performing stochastic gradient descent in a reproducing kernel Hilbert space (RKHS) on a recurrent network. This kernelized recurrent system bridges the gap between the theories of adaptive signal processing and recurrent neural networks (RNNs), and it extends the current theory of kernel adaptive filtering (KAF) to include feedback. We demonstrate its capabilities by solving a set of NP-complete problems involving grammatical inference. Simulation results show that the KARF algorithm is a compelling online learning solution for the identification and synthesis of deterministic finite automata (DFA).
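
For readers unfamiliar with KAF, the feedback-free baseline that KARF extends, kernel least mean squares (KLMS), fits in a few lines (a schematic implementation; the step size and kernel width are arbitrary choices for the demo, and no feedback is included).

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

class KLMS:
    """Kernel least mean squares: LMS performed in an RKHS. Each new
    sample becomes a center; the prediction is a kernel expansion."""
    def __init__(self, eta=0.5, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)          # instantaneous error
        self.centers.append(np.asarray(x, float))
        self.alphas.append(self.eta * e)
        return e

# Learn y = sin(x) online from streaming samples.
rng = np.random.default_rng(1)
f = KLMS()
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    f.update(x, np.sin(x[0]))
err = abs(f.predict(np.array([1.0])) - np.sin(1.0))
```

KARF adds recurrent (feedback) connections to this kind of expansion; that extension is not reproduced here.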

26 Mar. 2014 | Perirhinal cortical circuit disruptions in a rat model of normal aging

Dr. Sara Burke and Dr. Andrew Maurer—No abstract

19 Mar. 2014 | Challenges of analog-to-digital converters: exceeding process boundaries

Dr. Nima Maghari—Analog-to-digital converters are essential in many systems, ranging from wideband wireless communication systems to narrowband medical instrumentation. From the user's perspective, the most important figures of merit are often power consumption, speed, and resolution. From the designer's perspective, there are numerous challenges in overcoming the limitations of CMOS processes to achieve the desired performance. In this talk, I will present some of these challenges and how innovative circuit ideas have overcome many process limitations.

19 Feb. 2014 | The neurodynamics of seeing what matters: Methodological aspects of human in-vivo neurophysiology

Dr. Andreas Keil—A plethora of studies suggest that the motivational or biological relevance of external stimuli facilitates sensory processing. Beyond their phenomenological demonstration, however, little is known about the cognitive processes or the neurophysiological mechanisms that mediate these effects. In this presentation, we discuss conceptual and methodological issues regarding the characterization of perception and attention in the human brain, such as the bottom-up versus top-down dichotomy. We then turn to experimental explorations of behavioral and neurophysiological dynamics as observers learn that a novel, initially uninteresting stimulus is behaviorally relevant. Capitalizing on neuroimaging techniques with fine-grained temporal resolution, we find that activity in lower-tier visual cortex changes as a function of motivational relevance acquired through Pavlovian fear conditioning. The temporal rate of these changes, their extent, and the brain regions involved reflect the specific behavioral and environmental contingencies to which observers adapt. We discuss requirements for time series analyses and measures sensitive to spatio-temporal dynamics.

12 Feb. 2014 | Pitch Estimation Using Correntropy Based Non-Negative Matrix Factorization

Ryan Burt—The problem of pitch detection consists of estimating the dominant frequency present in a given time window. This paper demonstrates and analyzes a non-negative matrix factorization technique with a frequency basis formed from a correntropy kernel. This offers the advantage that the frequency basis is adaptable, allowing the factorization to fit the data precisely, and it includes a dictionary specifically to account for noise. Using non-negative matrix factorization also allows an increase in dimensionality, which increases the frequency resolution of the algorithm. The method is tested on a database of trumpet notes and compared with other current methods, improving on their performance for noisy signals. It also produces encouraging results with relatively small window sizes, suggesting increased time resolution for frequency estimates.

5 Feb. 2014 | Brain on a Chip: From Patterns to Circuits with Information Transfer

Dr. Bruce Wheeler—Building a brain on a chip, while certainly a wild idea, is closer to reality than one might reasonably expect, thanks to applications of both engineering and applied biology. Applications of traditional engineering technologies – signal processing, electronics, microlithography, materials science – make possible the controlled growth, recording, and stimulation of nerve cells, making the goal of engineering a working biological construct addressable. This presentation gives an overview of the effort, including developments in electrode array technology, lithographic and microtunnel techniques that can control the patterns of growth of neurons in culture, and the creation of microcircuits of discrete brain structures. A discussion of the challenges in data processing is included.

22 Jan. 2014 | An Information Theoretic Downscaling Framework for Remotely Sensed Soil Moisture

Subit Chakrabarti—Hydrometeorological models simulate atmospheric and hydrological processes at scales of 1-10 km that are significantly influenced by the local and regional availability of soil moisture. Microwave observations at frequencies < 10 GHz are highly sensitive to changes in near-surface moisture and have been widely used to retrieve soil moisture information. While satellite-based active microwave observations are available at spatial resolutions of hundreds of meters with temporal resolutions of several weeks, passive observations are obtained only at tens of kilometers with temporal resolutions of sub-daily to 2-3 days. The European Space Agency Soil Moisture and Ocean Salinity (ESA-SMOS) and the near-future NASA Soil Moisture Active Passive (SMAP) missions will provide unprecedented passive microwave observations of brightness temperatures (TB) at the L-band frequency of 1.4 GHz. These products will be available at spatial resolutions of about 40-50 km and need to be downscaled to 1 km to merge them with models for data assimilation and to study the effects of land surface heterogeneity such as dynamic vegetation conditions. In this study, a downscaling methodology was developed using the Principle of Relevant Information (PRI) to downscale observations of TB from 50 km to 200 m using observations of land surface temperature, leaf area index, and land cover at 200 m. The PRI provides a hierarchical decomposition of image data that is optimal in terms of the transfer of information across scales and is therefore a better alternative to methods that use only second-order statistics. Non-parametric probability density functions and Bayes' rule were used to transform information from the RS products into TB. An Observing System Simulation Experiment was developed under heterogeneous and dynamic vegetation conditions to generate synthetic observations at 200 m to evaluate the downscaling methodology and the transformation functions.

### Seminars in Fall 2013

4 Dec. 2013 | Quantifying functional brain connectivity in a conditional avoidance study

Mehrnaz Hazrati—Cognitive processes in the human brain require intensive communication between brain regions. This raises the question of how functional connectivity, and the interdependency between the resulting scalp-recorded signals, varies across cognitive situations. To date, several statistical approaches have been exploited to measure the coordinated activation between specialized neural networks. Measures of dependence have been shown to be promising tools for extracting quantitative characteristics of functional brain interactions from high-temporal-resolution electroencephalogram (EEG) signals. In a collaborative project with the Center for the Study of Emotion & Attention, University of Florida, we investigated the feasibility of the generalized measure of association, versus a "traditional" set of linear methods, for quantifying brain connectivity over the scalp electrodes in an avoidance conditioning scenario, in which subjects learn to emit a response that prevents the occurrence of an aversive event.

20 Nov. 2013 | A Hierarchical Dynamic Model for Object Recognition

Rakesh Chalasani—This work focuses on building a hierarchical dynamic model for object recognition in video. It is inspired by the predictive coding framework used to explain sensory signal processing in the brain. Using this framework, we propose a new architecture that embodies one of the important characteristics of biological vision, namely, finding a causal representation of the visual inputs while using contextual information coming in from various sources. The proposed model is a deep network that processes an input video sequence in a hierarchical and distributive manner, and it includes several innovations. At the core of the model is a dynamic system at each level in the hierarchy encoding time-varying observations. We propose a novel procedure called dynamic sparse coding to infer sparse states of a state-space model and extend it further to model locally invariant representations. These dynamic models endow the network with long-term memory (parameters) and short-term contextual information, and they lead to invariant representations of the observations over time. Another important part of the proposed model is the bidirectional flow of information in the hierarchy. The representation at each level is obtained from data-driven bottom-up signals and top-down expectations, which are both driving and modulatory. The top-down expectations are usually task specific; they bias and modulate the representations at the lower layers to extract relevant information from the milieu of noisy observations. We also propose a convolutional dynamic model that allows us to scale the proposed architecture to large problems. We show that this model decomposes the inputs in a hierarchical manner, starting from low-level representations and moving to higher-level abstractions, mimicking the processing of information in the early visual cortex. We evaluate the performance of the proposed model on several benchmark data sets for object and face recognition and show that it performs better than several existing methods.

30 Oct. 2013 | Bounds on Relative Entropy Derivatives

Dr. Pablo Zegers—Lower and upper bounds on the derivative of the relative entropy with respect to its parameters will be presented. This result will be used to explain when the minimum relative entropy and maximum log-likelihood approaches are equivalent. It will be shown that these equivalences naturally arise in the presence of large data sets and are inherent properties of any density estimation process involving large numbers of random variables.
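
The equivalence the talk addresses rests on a standard identity (stated here for the empirical distribution $\hat p$ of samples $x_1,\dots,x_N$; this is textbook background, not the speaker's new result):

```latex
D(\hat p \,\|\, q_\theta)
  = \sum_{x} \hat p(x) \log \frac{\hat p(x)}{q_\theta(x)}
  = -H(\hat p) \;-\; \frac{1}{N} \sum_{i=1}^{N} \log q_\theta(x_i)
```

Since $H(\hat p)$ does not depend on $\theta$, minimizing the relative entropy over $\theta$ is the same as maximizing the log likelihood $\sum_i \log q_\theta(x_i)$.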

23 Oct. 2013 | State Representations in Reinforcement Learning

Evan Kriminger—Reinforcement learning (RL) solves optimal control problems when the dynamics of the controlled system are unknown. Most RL methods are based on value functions, which quantify the rewards that will be amassed while following a control policy. RL has found limited use in important problems because value functions are difficult to estimate in high-dimensional state spaces. The ability of RL to learn optimal policies depends heavily on the choice of features representing the state. We develop a method for learning a state representation that captures only the information relevant to the control task. The resulting state mapping facilitates the learning of value functions and allows the agent to generalize when previously unseen states are encountered.
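
Value-function estimation, which the talk builds on, can be illustrated with tabular TD(0) on a two-state chain (a textbook sketch under invented rewards and discount, separate from the representation-learning method presented):

```python
import numpy as np

def td0_policy_evaluation(episodes, n_states, alpha=0.1, gamma=0.9):
    """Tabular TD(0): estimate the value function V for a fixed policy
    from sampled (state, reward, next_state) transitions."""
    V = np.zeros(n_states)
    for episode in episodes:
        for s, r, s_next in episode:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            V[s] += alpha * (target - V[s])
    return V

# Two-state chain: state 0 -> state 1 (reward 0), state 1 -> terminal (reward 1).
episode = [(0, 0.0, 1), (1, 1.0, None)]
V = td0_policy_evaluation([episode] * 500, n_states=2)
# V[1] approaches 1.0 and V[0] approaches gamma * V[1] = 0.9
```

The curse of dimensionality appears once the table is replaced by features; a good state representation keeps that estimation tractable.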

9 Oct. 2013 | Intelligent Health & Well-being Systems

Dr. Parisa Rashidi—In recent years, we have witnessed a rapid surge in intelligent computational approaches for solving various health and biomedical problems. One important factor is the availability of enormous amounts of complex data from various sources. Data mining, natural language processing, and machine learning are central to all these systems and constitute a major step towards developing more intelligent healthcare solutions. These techniques not only make it possible to process and transform data into actionable knowledge, but also facilitate decision making and reasoning. This talk will discuss the role of data mining, natural language processing, and machine learning in building health and well-being systems through several ongoing projects. These projects include automatic identification of depressive states in major depressive disorder (MDD) patients using natural language processing (NLP), home-based assisted living solutions based on ambient and mobile sensor technology, and context-aware augmentative and alternative communication tools for individuals with communication disabilities.

13 Aug. 2013 | Universal binary models: distribution and entropy estimation for population spike words

Il Memming Park—Entropy estimators and probabilistic models for binary spike patterns are key tools for studying the neural code and its variability. Neural responses have characteristic statistical structure that generic entropy estimators fail to exploit, and popular binary distribution models such as the Ising model have rigid assumptions and are computationally intractable for large populations. To overcome these limitations, we propose a family of "universal" models for binary spike patterns, where universality refers to the ability to model arbitrary distributions over all possible binary patterns. We construct universal binary models using a Dirichlet process centered on a well-behaved parametric base measure over the binary words, which naturally combines the flexibility of a histogram and the parsimony of a parametric model. For distribution estimation, we derive computationally efficient inference methods using Bernoulli and cascaded logistic base measures, which scale tractably to large populations. We also establish a condition for the equivalence between the cascaded logistic and the Ising model, making the cascaded logistic a reasonable choice of base measure in a universal binary model. For entropy estimation, we devise a compact representation of the data and two simple priors that allow for a computationally efficient Bayesian least squares estimator for large populations. We demonstrate the flexibility and performance of these estimators on simulated and real neural recordings.

### Seminars in Spring 2013

10 Apr. 2013 | Dynamic Signal Analysis by Kernel-based Frameworks

Andres Alvarez—The Signal Processing and Recognition Group (SPRG) of the Universidad Nacional de Colombia has been working on the analysis of biosignal data in order to propose machine learning methodologies that support the development of automatic systems for diagnostic assistance. The SPRG is also interested in the dynamic analysis of video data to support motion and biomechanical analysis tasks for both health and interactive purposes. To suitably analyze such signals, feature representation, selection, and extraction frameworks have been developed based on variability analysis and time-scale/frequency techniques. In addition, kernel and multi-kernel frameworks have been proposed to unfold the nonlinear structure of the data. However, the attained results revealed computational limitations and estimation drawbacks when dealing with nonstationary signals, in both offline and online systems, because the proposed methods do not directly consider the temporal structure of the signal. In this regard, a brief description is presented of how Information Theoretic Learning (ITL) and the adaptive filters for online environments developed by the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida could be used to deal with the above problems. In particular, the Kernel Least Mean Square (KLMS) adaptive filter and its quantized version (QKLMS) are considered for revealing the main dynamics of the signals. According to preliminary studies, suitable frameworks that can infer different dynamics (i.e., nonstationary signals) in online environments remain an open issue, because of the finite number of samples and the memory and adaptation properties that must be handled carefully.
CNEL is currently working on methods to estimate the free parameters of KLMS and QKLMS, aiming to adapt the system to nonstationary signals. The use of ITL (e.g., correntropy) is also proposed at CNEL to improve the robustness and convergence of adaptive filters under both Gaussian and non-Gaussian noise perturbations. Multiple-kernel frameworks are likewise studied at CNEL as tools to enhance KLMS performance on nonstationary signals. Therefore, initial ideas are described for how other multiple kernel learning schemes could be used in KLMS and QKLMS and how they could be incorporated into an ITL framework. Finally, as an alternative, clustering methods for building the QKLMS codebook could be considered to reveal the main dynamics of the signals, which can be very useful for nonstationary time series prediction and for biosignal and video analysis.

27 Mar. 2013 | Robust Kernel Adaptive Systems for Supervised Learning

Rosha Pokharel—Kernel methods are widely popular in machine learning, where the goal is to predict desired information from data. Their popularity lies in the fact that they allow the implicit mapping of data into a high-dimensional feature space, where a nonlinear problem in the input space can be formulated as a linear one and thus solved with linear algorithms. Such a mapping is produced by a continuous, symmetric, positive semi-definite kernel. The proper selection of the kernel, and thus the kernel size, is therefore one of the major problems in the kernel methods literature, without a widely accepted solution. This work proposes methods to address this issue for kernel adaptive filters (KAF) based on the Kernel Least Mean Square (KLMS) algorithm framework. The recently proposed information-theoretic loss function called the correntropy loss (C-loss) is applied in the KLMS framework in place of the square loss, and its behavior is analyzed in the context of classification. Classification can be seen as a mapping problem in which some function of $x_n$ predicts the expectation of a class variable $y_n$; by using the C-loss instead of the square loss, an adaptive classifier can be obtained that is less affected by noisy data and by initial conditions such as the step size and the kernel. Although performance improves with the change of loss function, the behavior still depends heavily on the choice of kernel. This motivates finding a better solution for appropriate kernel selection. Thus, the proposed method focuses on adaptive kernel selection for KAF, where the system adaptively selects an appropriate kernel to track any changes in the signal in an online fashion.
This method, called the additive kernel least mean square (AKLMS) algorithm, adaptively selects the best kernel size using a competitive mixture of models, where each model is a least mean square (LMS) adaptive filter solved in a separate reproducing kernel Hilbert space (RKHS). This yields a multiple kernel learning (MKL) formulation for KLMS, since conventional MKL methods would otherwise not fit the KLMS framework. The resulting model not only selects the best kernel but also significantly improves prediction accuracy. Moreover, we impose sparsity in learning the mixing coefficients by using a nonlinear gating function, so the model can also select different kernels for different regions of the input space, modeling the local characteristics of each region. Such a property is very beneficial, especially for nonstationary system identification, where the model must continuously cope with abrupt changes; hence a robust prediction is achieved that is unaffected by sudden changes in the signal.

20 Mar. 2013 | Mining periodic variable stars in large astronomical databases using an information-theoretic algorithm

Pablo Huijse Heise—In this seminar, an information-theoretic approach for periodicity detection in astronomical light curves will be presented. Light curves are astronomical time series of brightness over time and are characteristically noisy and unevenly sampled. The proposed metric uses correntropy with a time-frequency kernel and can be viewed as a generalized periodogram. The correntropy kernelized periodogram (CKP) extracts information from the higher-order moments present in the data and emphasizes the spectral peaks associated with the underlying period of the time series. The CKP is the main component of a fully automated pipeline for periodic light curve discrimination intended for large astronomical survey databases. The CKP outperformed the slotted correntropy, the Lomb-Scargle periodogram, and other conventional methods for periodicity discrimination on a set of light curves drawn from the MACHO astronomical survey. Using general-purpose computing on graphics processing units (GPGPU) and the XSEDE GPU cluster Forge, we can run the periodicity discrimination pipeline on 30 million light curves in less than a day. The results obtained after processing the 30 million light curves of the EROS-2 astronomical survey will be presented.
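
The autocorrentropy function underlying the CKP can be sketched directly (a simplified, evenly sampled version; the actual CKP handles uneven sampling and uses a time-frequency kernel, neither of which is reproduced here):

```python
import numpy as np

def correntropy(x, max_lag, sigma=0.5):
    """Autocorrentropy function: V[m] = mean of k(x_t, x_{t-m}) with a
    Gaussian kernel. Its Fourier transform generalizes the power
    spectral density by capturing higher-order moments."""
    x = np.asarray(x, float)
    V = np.empty(max_lag + 1)
    for m in range(max_lag + 1):
        d = x[m:] - x[:len(x) - m]
        V[m] = np.mean(np.exp(-d**2 / (2 * sigma**2)))
    return V

t = np.arange(200)
V = correntropy(np.sin(2 * np.pi * t / 20), max_lag=40)
# V peaks at lag 0 and again at the signal's period (lag 20).
```

Periodicity shows up as recurring peaks in `V`, which the periodogram then localizes in frequency.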

27 Feb. 2013 | A Scalable RC Architecture for Mean-Shift Clustering

Stefan Craciun—The mean-shift algorithm provides a unique non-parametric and unsupervised clustering solution for image segmentation and has a proven record of very good performance on a wide variety of input images. Accelerating this algorithm has significant potential, as it can both complement and provide a safety net for experts in domains that require careful analysis and supervision of images. Image segmentation is essential to image processing because it provides the initial and vital steps for numerous object recognition and tracking applications. However, image segmentation using mean-shift clustering is widely recognized as one of the most compute-intensive tasks in image processing, and it scales poorly with the image size (N pixels) and the number of iterations (k): O(kN^2). Long execution times on conventional computing platforms and poor scalability have limited its impact in the image processing domain, with runtimes that preclude real-time applications. As images continue the trend of increasing resolution and depth, the need for a faster and more scalable solution is critical. Our novel approach focuses on creating a scalable hardware architecture fine-tuned to the computational requirements of the mean-shift clustering algorithm. By efficiently parallelizing and mapping the algorithm to reconfigurable hardware, we can effectively cluster thousands of pixels independently. Each pixel can benefit from its own dedicated pipeline and can move independently of all other pixels towards its respective cluster. This paper presents a parallelizable mean-shift algorithm that is ideally amenable to a reconfigurable computing (RC) architecture. Using our mean-shift FPGA architecture, we achieve a speedup of three orders of magnitude over our software baseline.
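
The serial algorithm being accelerated can be stated compactly: each point repeatedly moves to the kernel-weighted mean of all points, which is where the O(kN^2) cost comes from and why each point can be given its own hardware pipeline. This is a plain software sketch, not the FPGA design; the bandwidth and data are invented.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Each point climbs the kernel density estimate by repeatedly
    moving to the Gaussian-weighted mean of all points. The inner
    loop over points is independent per point, hence parallelizable."""
    pts = np.asarray(points, float)
    modes = pts.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((pts - m)**2, axis=1) / (2 * bandwidth**2))
            modes[i] = (w[:, None] * pts).sum(axis=0) / w.sum()
    return modes

# Two well-separated 1-D clusters collapse onto two modes.
data = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
modes = mean_shift(data, bandwidth=0.5)
```

Points that converge to the same mode form one cluster (here, one mode near 0.1 and one near 5.1).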

20 Feb. 2013 | Quantifying bursting neuron activity from calcium signals with blind deconvolution

In Jun Park—Advances in calcium imaging have enabled studies of the activity dynamics of both individual neurons and neuronal assemblies. However, challenges such as the unknown nonlinearity of the spike-calcium relationship, noise, and the often relatively low temporal resolution of the calcium signal compared to the time scale of spike generation restrict the accurate estimation of action potentials from the calcium signal. Complex neuronal discharge, as in the case of bursting or rhythmically active neurons, represents an even greater challenge for reconstructing spike trains from calcium signals. Here we propose a method using blind calcium signal deconvolution based on an information-theoretic approach. The basic idea is to maximize the output entropy of a nonlinear filter whose nonlinearity is defined by the cumulative distribution function of the spike signal. We tested this maximum entropy (ME) algorithm on bursting olfactory receptor neurons (bORNs) in the lobster olfactory organ. The advantage of the ME algorithm is that the filter can be trained online based only on the statistics of the spike signal, with no assumptions made about the unknown transfer function from the spikes to the calcium image. We show that the ME method is able to reconstruct the timing of the first and last spikes of a burst with higher accuracy than other methods, and it improves the temporal precision fivefold compared with direct estimation from calcium imaging landmarks.

13 Feb. 2013 | Beyond DSP: Bio-inspired pulse computing and the future

Gabriel Nallathambi—Digital computing and communication technology during the latter half of the 20th century brought forth the digital revolution. Microcontrollers and DSPs are at the core of the devices that have enabled it. However, emerging devices such as wireless body area networks and sensors, body-worn monitors, and implantable devices require very low power consumption and computational complexity. These demands cannot be met simply by designing faster digital chips with increased functionality. In this talk, we will focus on a bio-inspired, ultra-low-power technology for novel sensors that continuously monitor the physiological vital signs of the human body. The sensor technology is based on the integrate-and-fire (IF) analog-to-pulse converter developed at CNEL, and it has an ultra-low-power hardware implementation that serves as a substitute for power-hungry DSPs. In the IF implementation, the signal is encoded in a series of time events rather than uniformly spaced amplitude values, substituting digital bits with the timing between positive and negative pulses. This pulse representation is as precise as conventional A/D conversion because it also provides an injective mapping between analog, real-world signals and the pulses. Techniques from formal language theory, such as attribute grammars, will be used to describe and represent the pulse train generated by the IF converter. Finally, we will demonstrate with a real-world example that this methodology is feasible and offers accurate results comparable with the state of the art. Combined with wireless technology and telemetry, such devices will open a vast range of possibilities in the prognosis and diagnosis of disease.
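
The IF analog-to-pulse idea can be sketched in a few lines (a simplified, positive-threshold-only software model; the actual converter is an analog circuit that emits both positive and negative pulses, and the threshold here is invented):

```python
import numpy as np

def if_encode(signal, theta, dt=1.0):
    """Integrate-and-fire encoder: integrate the analog signal and emit
    a pulse time each time the accumulated integral reaches the
    threshold `theta`; the information lives in the pulse timings."""
    acc, times = 0.0, []
    for n, v in enumerate(signal):
        acc += v * dt
        if acc >= theta:
            times.append(n)
            acc -= theta   # reset by subtraction keeps the residue
    return times

# A constant signal of amplitude 1 with theta = 3 fires every 3 samples.
pulses = if_encode(np.ones(10), theta=3.0)
# pulses -> [2, 5, 8]
```

Larger signal amplitudes fire sooner, so inter-pulse intervals encode the local signal integral, which is the injective mapping the abstract refers to.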

6 Feb. 2013 | Compressive Sensing Underwater Laser Serial Imaging System

Bing Ouyang—Underwater imaging systems have been under continual development for use aboard undersea platforms such as manned submersibles, Autonomous Underwater Vehicles (AUVs), and Remotely Operated Underwater Vehicles (ROVs), in many cases used in conjunction with sonar to perform the final electro-optical target identification (EOID). The most highly regarded extended-range underwater laser imaging technique to date is the serial laser line scan (LLS) system. However, the mechanical and high-precision optical components in the LLS system lead to high production and maintenance costs, consume a significant portion of the system size, weight, and power (SWAP) budget, and include many moving parts that can reduce system reliability. The optical and mechanical parts also lead to bulky system packaging, whereas extended-range undersea laser imagers must be compact to be compatible with current and future classes of man-portable AUVs. This talk will study a compressive sensing (CS) based active underwater laser serial imaging system concept as a more compact, more reliable, and lower-cost alternative to the LLS system. The challenges that a scattering medium such as turbid coastal water poses for such a CS-based active imaging system, and techniques to mitigate these limitations, will be discussed. Simulation results studying the performance of the proposed technique will be presented, along with experimental results from over-the-air and underwater tests. Finally, the potential for extending the proposed frame-based imaging technique to the traditional line-by-line scanning mode is discussed.

Seminars in Fall 2012 | |
---|---|

28 Nov. 2012 | A minimalist model for olfactory sensing and decision making |

Andrew Hein—Many organisms locate resources in environments in which sensory signals are rare, noisy, and lack directional information. Recent studies of search in such environments model search behavior using random walks (e.g., Lévy walks) that match empirical movement distributions. We extend this modeling approach to include very simple searcher responses to noisy sensory data. We explore the consequences of incorporating such sensory measurements into search behavior using simulations of a visual-olfactory predator in search of prey. I will discuss our model of how searchers measure a scent field and how they process this information. Finally, I’ll show some new results suggesting that even this minimal model of information acquisition and decision-making can qualitatively change the rate at which searchers find targets. Much of what I’ll talk about is from a recent paper that you can find here: http://andrewhein.org/here/wp-content/uploads/2012/04/Hein2012Proceedings-of-the-National-Academy-of-Sciences.pdf | |
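The random-walk baseline in such models is easy to reproduce. A minimal sketch (generic, not the authors' simulation) draws power-law step lengths for a Lévy-type walk by inverse-transform sampling:

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0):
    """Draw n step lengths from p(l) ~ l^(-mu) for l >= l_min.
    Inverse transform: l = l_min * u^(-1/(mu - 1)) with u ~ Uniform(0, 1)."""
    return [l_min * random.random() ** (-1.0 / (mu - 1.0)) for _ in range(n)]

random.seed(0)
steps = levy_steps(1000)
# heavy tail: a few steps are far longer than the typical one
```

Coupling such a walk to sensory input, as the talk describes, amounts to letting each noisy measurement modulate when a step is truncated or redirected.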

7 Nov. 2012 | From Fixed to Adaptive Budget Robust Kernel Adaptive Filtering |

Songlin Zhao—Recently, owing to their universal modeling capacity, convexity of the performance surface and modest computational complexity, kernel adaptive filters have attracted more and more attention. Even though these methods achieve powerful classification and regression performance in complicated nonlinear problems, they have drawbacks. This work focuses on improving kernel adaptive filter performance in both accuracy and computational complexity. After reviewing some existing adaptive filter cost functions, we introduce an information theoretic objective function, the Maximum Correntropy Criterion (MCC), which contains higher-order statistical information. We propose to adopt this objective function in kernel adaptive filters to improve accuracy in nonlinear and non-Gaussian scenarios. To determine the free parameter, the kernel width in correntropy, an adaptive method based on the statistical properties of the prediction error is proposed. We then propose a growing and pruning method to realize a fixed-budget kernel least mean square (KLMS) algorithm, based on improvements to the quantized kernel least mean square algorithm and a new significance measure. The end result is to control the computational complexity and memory requirement of kernel adaptive filters while preserving accuracy as much as possible. This balance between accuracy and filter model order is explored from the perspective of information learning. The issue is indeed the trade-off between system complexity and accuracy, and an information learning criterion, Minimum Description Length (MDL), is introduced to kernel adaptive filtering. Two formulations of MDL, batch and online, are developed and illustrated by approximation-level selection in KRLS-ALD and center dictionary selection in KLMS, respectively. The end result is a methodology that controls the kernel adaptive filter dictionary (model order) according to the complexity of the true system and the input signal for online learning, even in nonstationary environments. | |
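The flavor of correntropy-based adaptation can be sketched in a few lines. Below is a toy KLMS trained with a correntropy-style weighting (a generic illustration with made-up parameters, not the thesis implementation): the factor exp(-e²/2σ²) that arises from maximizing correntropy shrinks the update contributed by outlier samples.

```python
import math

def gauss(x, c, s):
    return math.exp(-(x - c) ** 2 / (2 * s * s))

class KLMS_MCC:
    """KLMS whose update is reweighted by exp(-e^2 / 2 sigma_c^2), the
    gradient weighting obtained from maximizing correntropy; samples
    with very large errors (outliers) are therefore largely ignored."""
    def __init__(self, eta=0.2, kernel_size=0.1, sigma_c=1.0):
        self.eta, self.ks, self.sc = eta, kernel_size, sigma_c
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.ks)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)
        w = math.exp(-e * e / (2 * self.sc ** 2))  # correntropy weight
        self.centers.append(x)                     # network grows by one unit
        self.alphas.append(self.eta * w * e)
        return e

f = KLMS_MCC()
xs = [i * 0.05 for i in range(21)]
for epoch in range(20):                  # learn y = sin(x) on [0, 1]
    for x in xs:
        d = 50.0 if (epoch == 10 and abs(x - 0.5) < 1e-12) else math.sin(x)
        f.update(x, d)                   # one sample is a gross outlier
```

After training, the coefficient created by the outlier sample is essentially zero, while predictions elsewhere track sin(x); a plain KLMS (w = 1) would embed a large erroneous bump at x = 0.5.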

31 Oct. 2012 | Self-Organized Computational Perception in the Time Frequency Domains |

Goktug T Cinar—The auditory pathway in the human cortex is a fairly complicated structure, and many attempts have been made to model auditory perception. It is fair to state that most of this work has concentrated on modeling the cochlea and the auditory nerves linking to the basilar membrane in the cochlea. This kind of approach cannot capture the overall organization of the auditory cortex. The auditory cortex effortlessly does a better job of extracting information from the acoustic world than our current generation of signal processing algorithms. Also, it is a common belief that the cortex is stereotyped in its layers and internal organization. The proposed architecture is based on Kalman filters with hierarchically coupled state models that stabilize the input dynamics and provide a representation space. This model can be the building block of a larger computational model that provides more biologically plausible representations of auditory information. An important characteristic of the methodology is that it is adaptive and self-organizing, i.e. previous exposure to the acoustic input is the only requirement for learning and recognition. Some preliminary results will be shown. | |

24 Oct. 2012 | Learning Nonlinear Generative Models of Time Series with a Kalman Filter in RKHS |

Pingping Zhu—This talk presents a novel generative model for time series that implements the Kalman filter algorithm in a reproducing kernel Hilbert space (RKHS) based on the conditional embedding operator. The end result is an RKHS algorithm that quantifies the hidden state uncertainty and propagates its probability density function (pdf) forward as in the Kalman algorithm. The embedded dynamics can be described by the conditional embedding operator, constructed directly from the training data. Using this operator as the counterpart of the state transition matrix, we reformulate the Kalman filter algorithm in the RKHS. For the state model, the hidden states are the embeddings of the measurement pdfs, while the measurement model serves to connect the embeddings with the current mapped measurements in the RKHS. The algorithm is applied to noisy time series estimation and prediction, and simulation results show that it outperforms other existing algorithms. In addition, techniques are proposed to reduce the size of the operator and the computational complexity. | |

17 Oct. 2012 | Bio-signals: Motivating Scenarios and Applications |

Hugo Silva—Bio-signal analysis has far exceeded the medical practice scenarios with which it was traditionally associated, finding novel applications in areas as diverse as Human-Computer Interaction and Affective Computing, among many others. The growing interest of both research communities and industry leaders across different activity sectors, together with the profusion and usability of modern acquisition technologies, has enabled a true revolution in the field of biosignal research. In this talk we will provide an overview of past and ongoing work focused on identity recognition, machine learning targeted at the detection of psychophysiological markers, and recent advances in biosignal acquisition technologies toward non-intrusiveness and real-world deployment of research outcomes. An example of the addressed topics is the use of Electrocardiographic (ECG) signals for biometric purposes; the ECG has several properties that can greatly complement the existing, more established biometric modalities. Among the most prominent are that the signals can be continuously acquired using minimally intrusive setups, are not prone to producing latent patterns, and provide intrinsic liveness detection, opening new opportunities in biometric system development. The potential impact of this technique extends to a broad variety of application domains, ranging from the entertainment industry to digital transactions; results so far further reinforce the feasibility and interest of the method in a multibiometrics approach. | |

3 Oct. 2012 | Kernel Based Machine Learning Framework for Neural Decoding |

Lin Li—Brain machine interfaces (BMI) have attracted extensive attention as a promising technology to aid disabled humans. However, the neural system itself is a highly distributed, dynamic and complex system containing millions of functionally interconnected neurons. How best to interface the neural system with human-engineered technology is a critical and challenging problem. This motivates our research in neural decoding, a significant step toward realizing useful BMIs. In this dissertation, we aim to design a kernel-based machine learning framework to address a set of challenges in characterizing neural activity, decoding the information of sensory or behavioral states, and controlling neural spatiotemporal patterns. | |

19 Sept. 2012 | Overview on CNEL |

Dr. Jose C. Principe—Dr. Principe will give a brief overview of the different projects that students are working on. | |

12 Sept. 2012 | Multi-linear Structure in Neural Signals and Exploiting Recurrent Structure in Natural Images. |

Austin J. Brockmeier—Multi-electrode recordings of neural electrical potential signals---e.g., EEG, LFP, etc.---provide high-dimensional datasets; efficiently dealing with the dimensionality is the first challenge in analyzing the data for neuroscientific or clinical applications. Inherently two-dimensional across channels and time, the number of modes is increased by time-frequency analysis, and multiple trials and, in certain cases, multiple subjects provide additional modes for analysis. The multimodal nature of these signals is naturally represented with tensors. This seminar covers approaches for tensor analysis developed in Dr. Andrzej Cichocki’s Lab for Advanced Brain Signal Processing at RIKEN, where I worked this summer in Japan under a joint NSF/JSPS fellowship. Multiway arrays, or tensors, are general data organization structures that generalize matrices beyond two dimensions. Tensor storage is handy for bookkeeping of multiway data, but the true advantage lies in tensor decompositions, which approximate the data in terms of a reduced set of coefficients organized separately along each of the different modes and, possibly, smaller core tensors of coefficients relating the modes. The former is the generalization of the singular value decomposition to tensors, known as CANDECOMP/PARAFAC or canonical polyadic (CP); it approximates the tensor by a summation of outer products of vectors defined along each mode. Tensor decompositions are related to their counterparts for matrices and are commonly estimated by a series of alternating matrix decompositions. Just as the redundancy of a multivariate time series appears across channels, images have recurrent patterns that appear in patches. By analyzing images in patches (modeling the image as a combination of Kronecker products), large increases in compression can be achieved. Interestingly, we can use standard matrix decomposition techniques, since a Kronecker product can be realized as a reshaped outer product. Furthermore, the visualization of images makes them instructive for understanding tensor and matrix decompositions. | |
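The "reshaped outer product" remark can be checked directly. In the sketch below (a generic illustration), the Van Loan-Pitsianis rearrangement of a Kronecker product A ⊗ B is exactly the rank-1 outer product of vec(A) and vec(B):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    p, q, r, s = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(q) for l in range(s)]
            for i in range(p) for k in range(r)]

def rearrange(K, p, q, r, s):
    """Map the (pr x qs) Kronecker product to a (pq x rs) matrix whose
    row index runs over vec(A) and column index over vec(B)."""
    return [[K[i * r + k][j * s + l] for k in range(r) for l in range(s)]
            for i in range(p) for j in range(q)]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
K = kron(A, B)
R = rearrange(K, 2, 2, 2, 2)

vecA = [A[i][j] for i in range(2) for j in range(2)]   # row-major vec
vecB = [B[k][l] for k in range(2) for l in range(2)]
outer = [[a * b for b in vecB] for a in vecA]          # rank-1 matrix
```

Because the rearranged matrix is rank 1, an SVD of it recovers the best Kronecker factors, which is why ordinary matrix decompositions suffice for patch-based image compression.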

Seminars in Spring 2012 | |
---|---|

18 Apr. 2012 | Quantifying cognitive processes in the human brain using measures of dependence |

Bilal Fadlallah—The exquisite human capacity to perceive facial features has been explained by the activity of neurons particularly responsive to faces, found in the fusiform gyrus and the anterior part of the superior temporal sulcus. In this seminar, we demonstrate that it is possible to automatically detect the recognition of faces solely from processed electroencephalograms (EEG) with high temporal resolution, using measures of statistical dependence applied to steady-state visual evoked potentials (ssVEPs). Using measures of dependence exploits the bivariate distributions among pairwise channel recordings, modeled as an indexed family of random variables belonging to a stochastic process, and is a more realistic approach to quantifying the joint spatio-temporal data distribution than previous approaches working only with the marginal distributions. Correlation and mutual information, together with two novel measures of generalized association and a weighted measure based on permutation entropy, were applied to filtered current source density (CSD) data. Dependencies between channel locations were assessed for two separate conditions elicited by distinct pictures flickering at a rate of 17.5 Hz. Filter settings were chosen to minimize the distortion produced by bandpassing parameters on dependence estimation. Statistical analysis was performed for automated stimuli classification using the Kolmogorov-Smirnov test. Results show active regions in the occipito-parietal part of the brain for both conditions, with a greater dependency between occipital and inferotemporal sites for the face stimulus. This aligns with previous evidence suggesting a re-entrant organization of the ventral visual system, showing heightened re-entry when viewing meaningful or salient stimuli. | |
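As a generic illustration of the dependence-measure idea (not the specific measures proposed in the talk), a plug-in mutual-information estimate over binned channel pairs separates a deterministically coupled pair from a shuffled one:

```python
import math, random

def mutual_information(x, y, bins=4):
    """Plug-in MI (bits) between two equal-length sequences after
    equal-width binning; near zero for independent pairs."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((s - lo) / w), bins - 1) for s in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[a, b] = pxy.get((a, b), 0) + 1     # joint counts
        px[a] = px.get(a, 0) + 1               # marginal counts
        py[b] = py.get(b, 0) + 1
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

sig = [math.sin(2 * math.pi * i / 100.0) for i in range(200)]
coupled = mutual_information(sig, [2.0 * v for v in sig])   # deterministic link
random.seed(1)
shuffled = sig[:]
random.shuffle(shuffled)
independent = mutual_information(sig, shuffled)             # link destroyed
```

Unlike correlation, this estimate also detects nonlinear couplings, which is one motivation for going beyond second-order statistics in EEG analysis.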

11 Apr. 2012 | Overview on Works of Intelligent Information Systems Lab. (IISL), Kyungpook National University |

Prof. Doo Hyun Choi—In this seminar, I will present the past and ongoing projects of the Intelligent Information Systems Laboratory (IISL). IISL started in 1996, stopped in 2000, and restarted in 2003 following my job change. Since 2003, I have been involved in many practical projects related to nondestructive testing and evaluation (NDT&E), obtained several patents, and written several textbooks in Korean. Some of these projects will be introduced in this seminar. Two years ago, my interest shifted from machines to people, so I invested some of my lab's resources in research on human-related signal processing, particularly ECG and EEG. As a result, projects on EEG have started with my colleagues at KNU, and we are currently trying to obtain meaningful results. The stimuli used in our research will also be shown in the seminar. | |

4 Apr. 2012 | Some Recent Advances in Kernel Adaptive Filtering: Quantization, L1 Regularization, and Kernel-size |

Dr. Badong Chen—Recently, a family of online kernel-learning algorithms, known as the kernel adaptive filtering (KAF) algorithms, has become an emerging area of research. The KAF algorithms are developed in reproducing kernel Hilbert spaces (RKHS), using the linear structure of this space to implement well-established linear adaptive algorithms and to obtain nonlinear filters in the original input space. These algorithms include the kernel least mean square (KLMS), kernel affine projection algorithms (KAPA), kernel recursive least squares (KRLS), and extended kernel recursive least squares (EX-KRLS), among others. When the kernel is radial (such as the Gaussian kernel), they naturally build a growing RBF network, where the weights are directly related to the errors at each sample. The KAF algorithms face two main open challenges. The first is their structure, which grows with each sample, resulting in increasing computational costs and memory requirements, especially in continuous adaptation scenarios. The second is how to select a proper Mercer kernel, especially when the training data is small in size. In this talk, I will present some recent advances in addressing these two issues, covering quantization, L1 regularization, and kernel size adaptation. Several simulation results will be presented, and some future trends will also be discussed. | |
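The quantization idea can be illustrated with a toy scalar example (a sketch with made-up parameters, not the algorithms presented in the talk): a sample that falls within eps of an existing center updates that center's coefficient instead of growing the network, so the dictionary size stays bounded no matter how long adaptation runs.

```python
import math, random

def gauss(x, c, s=0.1):
    return math.exp(-(x - c) ** 2 / (2 * s * s))

class QKLMS:
    """Quantized KLMS: a sample within eps of an existing center merges
    into it (coefficient update), so the dictionary stays bounded."""
    def __init__(self, eta=0.2, eps=0.05):
        self.eta, self.eps = eta, eps
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)
        if self.centers:
            j = min(range(len(self.centers)),
                    key=lambda k: abs(x - self.centers[k]))
            if abs(x - self.centers[j]) <= self.eps:
                self.alphas[j] += self.eta * e    # merge into nearest center
                return e
        self.centers.append(x)                    # only then grow the network
        self.alphas.append(self.eta * e)
        return e

random.seed(0)
f = QKLMS()
for _ in range(500):                  # learn y = sin(2*pi*x) on [0, 1]
    x = random.random()
    f.update(x, math.sin(2 * math.pi * x))
```

Since centers are pairwise more than eps apart, at most about 1/eps + 1 of them can fit in the unit interval, regardless of how many samples are processed.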

21 Mar. 2012 | Efficient Kernelized Online Temporal Difference Learning |

Jihye Bae—Value function estimation is a major problem in reinforcement learning, and Temporal Difference (TD) learning is a common approach to it. However, most algorithms are still limited to linear function approximation, and most nonlinear extensions are prone to local minima, which may lead to poor fits in practice. Thus, we propose a kernel version of online Temporal Difference learning to achieve faster and more reliable online nonlinear function approximation. In this talk, the behavioral properties and applications of kernel temporal difference learning, KTD(λ), for value function estimation will be examined. Furthermore, we address the problem of finding an optimal state-to-action mapping by extending KTD(λ) to Q-learning. Finally, we propose integrating the correntropy criterion into KTD, and show how KTD can be enhanced by this addition. | |
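A minimal sketch of the kernel TD idea (a generic toy, not the talk's KTD(λ) implementation): the value function is a growing kernel expansion, and each transition adds a unit weighted by the TD error. On a 5-state chain with reward 1 at the end and γ = 0.9, the learned values approach γ^(3-s):

```python
import math

def k(s, c, width=0.3):
    """Narrow Gaussian kernel on states (nearly tabular here)."""
    return math.exp(-(s - c) ** 2 / (2 * width ** 2))

class KTD:
    """Kernel TD(0): the value function is a growing kernel expansion;
    each transition adds a center at s weighted by the TD error."""
    def __init__(self, eta=0.3, gamma=0.9):
        self.eta, self.gamma = eta, gamma
        self.centers, self.alphas = [], []

    def value(self, s):
        return sum(a * k(s, c) for a, c in zip(self.alphas, self.centers))

    def update(self, s, r, s_next, terminal):
        target = r + (0.0 if terminal else self.gamma * self.value(s_next))
        delta = target - self.value(s)          # TD error
        self.centers.append(s)
        self.alphas.append(self.eta * delta)
        return delta

# Chain 0 -> 1 -> 2 -> 3 -> terminal, reward 1 on the final transition.
agent = KTD()
for episode in range(200):
    for s in range(4):
        r = 1.0 if s == 3 else 0.0
        agent.update(s, r, s + 1, terminal=(s == 3))
```

With a wider kernel the estimate would generalize across nearby states, which is the point of the kernel extension over tabular TD.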

14 Mar. 2012 | A Hierarchical Dynamic Model for Object Recognition |

Rakesh Chalasani—The core problem of visual object recognition is to extract features that are invariant to a wide range of variations of the object. Recently, inspired by the mammalian visual cortex, several hierarchical (deep) models have been proposed that are capable of obtaining such features and have been shown to improve object recognition in static images. In this talk, I will discuss a new hierarchical Bayesian model that can extract invariant features from time-varying signals such as video and audio. The fundamental idea of this model is to obtain local features from the observations, which can then be combined to form a globally invariant representation. The model comprises state-space models with a sparsity constraint, acting as local feature extractors and introducing temporal coherence. These are arranged hierarchically, such that the output of one provides the input to another. I show that such a hierarchical model can lead to an invariant representation, which can then be fed to a classifier for object recognition. | |

29 Feb. 2012 | Non-Invasive Brain Computer Interface to control a virtual environment |

Mehrnaz Hazrati—Brain-Computer Interface (BCI) technology is a promising yet challenging field of science, moving out of research labs into more serious medical applications and the consumer field. It establishes a "direct" interaction between the human central nervous system and the outside world. Applications range from helping the injured and disabled communicate, navigate, and rehabilitate to controlling robots and toys and playing video games. This presentation will cover the basic principles behind the science of BCI (hardware and software), as well as practical considerations and the results achieved with a motor imagery-based BCI used to control a virtual hand or an e-puck robot. The purpose of the project is to investigate whether subjects can achieve satisfactory online performance with short offline training followed by adaptive online training. Results from healthy subjects will be presented, and future trends will be addressed. | |

22 Feb. 2012 | Unsupervised learning approaches to neural data analysis |

Austin Brockmeier—In this talk, I will cover unsupervised learning approaches to the analysis of neural data (action potentials and electric potentials), concentrating on the role of independent component analysis as applied to the blind source separation problem for neural potentials. Neural recordings provide inherently complex data wherein useful information may be linked across multiple dimensions (e.g. time, space, condition, etc.). In some cases, a priori knowledge (in the form of additional variables) can be used to identify this structure using supervised learning. On the other hand, the prior knowledge may be insufficient (single labels for whole trials, or the lack of a specific hypothesis) to identify structure on all scales. In these cases, it may be necessary to apply unsupervised learning to first identify various structural features in the data, and then apply supervised learning not on the original data but on the identified/extracted features. Unsupervised learning identifies latent variables that describe the structure of observed data while conforming to a set of a priori constraints. Typical forms include clustering, auto-encoders, dimensionality reduction, and independent component analysis. | |
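A toy version of ICA-based blind source separation (a generic sketch, not the lab's pipeline): two independent non-Gaussian sources are mixed by a rotation, so no whitening is needed, and a grid search for the unmixing angle that maximizes non-Gaussianity (here |excess kurtosis|) recovers the mixing angle:

```python
import math, random

random.seed(0)
n = 2000
s1 = [random.uniform(-1, 1) for _ in range(n)]      # uniform source
s2 = [math.sin(0.077 * i) for i in range(n)]        # sinusoidal source

theta = 0.6                                          # mixing rotation angle
x1 = [math.cos(theta) * a - math.sin(theta) * b for a, b in zip(s1, s2)]
x2 = [math.sin(theta) * a + math.cos(theta) * b for a, b in zip(s1, s2)]

def abs_excess_kurtosis(v):
    """Crude non-Gaussianity index; 0 for a Gaussian variable."""
    m = sum(v) / len(v)
    var = sum((u - m) ** 2 for u in v) / len(v)
    m4 = sum((u - m) ** 4 for u in v) / len(v)
    return abs(m4 / var ** 2 - 3.0)

def contrast(t):
    """Total non-Gaussianity of the two outputs of an unmixing rotation."""
    c, s = math.cos(t), math.sin(t)
    y1 = [c * a + s * b for a, b in zip(x1, x2)]
    y2 = [c * b - s * a for a, b in zip(x1, x2)]
    return abs_excess_kurtosis(y1) + abs_excess_kurtosis(y2)

# independence is restored at the angle maximizing the contrast
angles = [i * math.pi / 360 for i in range(180)]     # grid over [0, pi/2)
theta_hat = max(angles, key=contrast)
```

Practical ICA algorithms replace the grid search with fixed-point or gradient updates and add a whitening step for general (non-orthogonal) mixtures, but the contrast-maximization principle is the same.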

15 Feb. 2012 | Securing our borders: current efforts in landmine detection |

Dr. Seniha Esen Yuksel—More than thirty-nine countries suffer from the threat of some 60 million currently buried landmines, and around 26,000 people a year are wounded or killed by them. In this talk, I will cover the challenges faced in landmine detection and introduce the latest advancements in this area. In the first part of the talk, I will describe context-based classification and introduce our mixture of hidden Markov model experts algorithm, which can both decompose time-series data into multiple contexts and learn expert classifiers for each context. In the second part of the talk, I will describe multiple instance learning for analyzing ambiguous data, and I will introduce our multiple instance hidden Markov model, which can learn from ambiguous ground penetrating radar data. Apart from landmine detection, these two techniques have applications in many areas including health care, finance and recommender systems. The talk should be of interest to anyone interested in machine learning, pattern recognition and statistical data analysis. Speaker Bio: Dr. Seniha Esen Yuksel received the Ph.D. degree in Computer Engineering from the University of Florida, Gainesville, in 2011 with a specialization in machine learning; the M.Sc. degree in Electrical and Computer Engineering from the University of Louisville, Kentucky, in 2006 with a specialization in medical imaging; and the B.Sc. degree in Electrical and Electronics Engineering from the Middle East Technical University, Turkey, in 2003. Currently, she is a postdoctoral researcher in the Materials Science and Engineering Department at UF, where she is developing algorithms for explosive detection from hyperspectral imagery. Dr. Yuksel was the recipient of the University of Florida College of Engineering Outstanding International Student Award in 2010, and of the Phyllis M. Meek Spirit of Susan B. Anthony Award at the University of Florida in 2008. Her research interests include machine learning theory, statistical data analysis, applied mathematics and computer vision. | |

8 Feb. 2012 | Surprise Metric for Sensor Contact Fusion in Sparse Data Environments |

Erion Hasanbelliu—Sensor data collected during mine-hunting missions usually contain numerous contacts detected by either human operators or automated detection and classification algorithms. Many of these contacts are multiple detections of the same object that must be fused at a later step in the data collection and display process. In our previous work, we demonstrated good performance with a dynamic tree (DT) that captured the full probability structure available across sensors and sensing platforms. The issue is that the approach requires a training set to learn the parameters, which must then be kept constant during operation. The limited availability of such labeled data hampers performance. In addition, when the system is in operation, thousands of new objects are encountered that could potentially enhance the statistical models and improve performance. However, if this is not handled with care, the models created from the training set can become corrupted and hinder performance. Our objective is to extend DT learning to the test set by incorporating an information theoretic measure known as surprise, which selectively adapts the model parameters when the data does not agree with the DT probabilistic model of the environment. | |
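The surprise measure can be illustrated generically (a toy sketch, not the DT model): Bayesian surprise is the KL divergence from prior to posterior, so an observation that merely confirms the current model carries zero surprise, while a model-violating observation yields a large value and would trigger adaptation:

```python
import math

def bayes_update(prior, likelihood):
    """Posterior over hypotheses given one observation's likelihoods."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def surprise(prior, posterior):
    """Bayesian surprise: KL divergence (bits) from prior to posterior."""
    return sum(q * math.log2(q / p) for p, q in zip(prior, posterior) if q > 0)

# Two hypotheses about a contact: 'mine' vs 'clutter'.
prior = [0.5, 0.5]
expected_obs = [0.5, 0.5]      # observation equally likely under both
odd_obs = [0.9, 0.1]           # observation strongly favoring 'mine'

s_expected = surprise(prior, bayes_update(prior, expected_obs))
s_odd = surprise(prior, bayes_update(prior, odd_obs))
```

Thresholding this quantity gives a simple rule for deciding which unlabeled test samples are informative enough to update the model.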

1 Feb. 2012 | Kernel-Based Matrix functions for Information Theoretic Learning |

Luis Sanchez Giraldo—Information theoretic learning (ITL) was introduced in the context of adaptive systems as a powerful alternative to second order methods, able to deal effectively with problems where optimality based on linear and Gaussian assumptions no longer holds. An important concept within the ITL framework is the information potential. Originally derived through a physical analogy motivated by the density estimation step underlying the calculation of operational quantities such as entropy and divergence, the information potential plays an important role in establishing connections between ITL and positive definite kernels. Recent work has shown that the information potential itself defines a positive definite function, and this view can be used to go beyond the Parzen density estimation that initially motivated its study. In this work, we want to understand the role of the Gram matrix as a data-dependent positive definite operator, and which properties are key to providing an entropy-like quantity without assuming that the probabilities of events are known or have been estimated. We develop a matrix-based functional and show how it enables us to extend the information theoretic learning framework beyond its current state, where density estimation still plays a significant and perhaps limiting role. We show links to some of the recent work in kernel methods, such as kernel ICA, measures of dependence and independence using Hilbert-Schmidt norms, and recent work on quadratic measures of independence. | |
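The flavor of such a matrix-based functional can be sketched for order α = 2 (a toy illustration; the general formulation in this line of work is stated via the eigenvalues of the normalized Gram matrix, which for α = 2 reduces to a trace): H₂ = -log₂ tr(A²) with A = K/tr(K). It needs no density estimate yet behaves like an entropy, vanishing for identical samples and reaching log₂ n for well-separated ones:

```python
import math

def gram(data, sigma=1.0):
    """Gaussian Gram matrix of a list of scalar samples."""
    return [[math.exp(-(a - b) ** 2 / (2 * sigma ** 2)) for b in data]
            for a in data]

def renyi2_entropy(K):
    """Matrix-based Renyi entropy of order 2 (bits):
    H_2 = -log2 tr(A^2), where A = K / tr(K)."""
    n = len(K)
    tr = sum(K[i][i] for i in range(n))
    A = [[K[i][j] / tr for j in range(n)] for i in range(n)]
    # tr(A^2) = sum_ij A_ij * A_ji (A is symmetric)
    t2 = sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))
    return -math.log2(t2)

tight = renyi2_entropy(gram([0.0] * 8))                      # identical samples
spread = renyi2_entropy(gram([10.0 * i for i in range(8)]))  # well separated
```

Since tr(A²) = Σλ² for the eigenvalues λ of A, this is exactly the eigenvalue-based quantity evaluated without an explicit eigendecomposition.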

25 Jan. 2012 | Kalman Filtering in Reproducing Kernel Hilbert Spaces |

Pingping Zhu—There are numerous dynamical system applications that require estimation or prediction from noisy data, including vehicle tracking, channel tracking, and economic forecasting. Many linear algorithms have been developed to deal with these problems under different assumptions and approximations, such as the Kalman filter, the recursive least squares algorithm, and the least mean squares algorithm. However, these linear algorithms cannot solve the nonlinear problems that often occur in real life. To address them, nonlinear algorithms have recently been proposed, such as kernelized versions of the linear algorithms. Our research follows this line and seeks to develop novel algorithms using kernel methods to deal with nonlinear problems. Specifically, our goal is to derive the Kalman filter in a reproducing kernel Hilbert space, which is a space of functions. In this proposal, two algorithms are presented to address different nonlinear problems: a novel extended kernel recursive least squares algorithm and a kernel Kalman filter with conditional embeddings in the RKHS. The former is a version of the extended kernel recursive least squares algorithm with a more flexible state model, while the latter is the Kalman algorithm constructed in the RKHS based on the conditional embedding operator. The two algorithms are developed under different assumptions and have different applications. In this proposal, their performance is also tested and compared with other existing algorithms. | |

18 Jan. 2012 | Overview of research in CNEL |

Dr. Jose Principe—Overview of research in CNEL | |

Seminars in Fall 2011 | |
---|---|

26 Oct. 2011 | Evolving Spiking Neural Networks for Predicting Transcription Factor Binding Sites |

Dr. Heike Sichtig, Riva Bioinformatics lab at UF Genetics Institute—The computational identification of regulatory elements in genomic DNA is key to understanding the regulatory infrastructure of a cell. We present an innovative tool to identify Transcription Factor Binding Sites (TFBSs) in genomic sequences. We show that our Positional Pattern Detection tool is able to attain high sensitivity and specificity of TFBS detection by capturing dependencies between nucleotide positions within the TFBS, thereby elucidating complex interactions that may be critical for the TFBS activity. Further, we unveil a combination of two biologically realistic information processing methods that underlie our tool: third-generation neural networks (spiking neural networks) are used to represent the structure of TFBSs, and a genetic algorithm is used for optimization of network parameters. Initially, the networks are trained to distinguish known TFBS binding sites from negative examples in the learning phase. Then, the evolved network is used as a classifier to detect novel TFBSs in genomic sequences. Moreover, we show an application of our method to GAL4 binding sites in yeast. A two-neuron network topology is trained with real data from TRANSFAC and SCPD and evaluated through simulation. We show how neuron and synapse parameters can be evolved to improve classification results. The networks’ predictions were compared against MAPPER, TFBIND and TFSEARCH. Our results reveal that our innovative tool has the potential to attain very high classification accuracy, with a very small number of false positives. These results show that information processing methods are able to capture important positional information in TFBSs and should be explored further to look at complex relationships underlying transcriptional and epigenetic regulation. | |

25 Oct. 2011 | Toward a measure of visual complexity |

Andre Cavalcante, Nagoya University—Affective appraisal of visual compositions is known to be strongly influenced by the perception of complexity. How humans perceive or decide that a scene "A" is more complex than a scene "B" is, however, yet to be understood. In this study, streetscape images from four cities (including daytime and nighttime scenes) were ranked for complexity by 30 subjects. An objective measure was proposed to account for the complexity of the streetscapes. This measure is based on the statistics of the contrast and spatial-frequency content of the streetscape image. The proposed measure shows significantly higher correlations with the subjective ranks than other measures of visual complexity. | |

19 Oct. 2011 | Remote Transcription for Hearing Impaired |

Dr. Yoshinori Takeuchi, Nagoya University—Remote transcribers face a unique challenge when transcribing classroom presentations, because they must transcribe based on the visual context of the blackboard or presentation. This system uses computer vision to identify the presenter's focus, locating items and words to aid remote transcription. This is especially important when the presenter uses ambiguous pronouns during speech, forcing the transcriber to refer back to the video; the system copies the item indicated. | |

14 Sept. 2011 | Computational Aspects Underlying the Transformation of Neuroregulatory Messages in the Heart |

Fausto Lucena—Much has been discussed about the exquisite capacity of the nervous system to adjust the cardiac rhythm to a wide variety of physiological and psychological demands. Yet a theory that can explain the principles underlying the coding of neuroregulatory messages in the heart has not yet emerged. Understanding the strategy used by the heart to code incoming stimuli is essential to this issue. One hypothesis holds that specialized cells evolved to minimize the statistical redundancy of input messages as a strategy to enhance important behavioral information. Using a neural network (based on independent component analysis) adapted to maximize the information of heartbeat intervals, we learn a population code that resembles finite impulse response (FIR) filters. We show that modeling these filters as independent contributions to the cardiac rhythm yields a response similar to that of the mammalian heart. This result suggests that the heart processes neuroregulatory messages according to fundamental principles derived from information theory. In this talk, I will briefly discuss the implications of this result for modeling and extracting hidden (hemodynamic) patterns of cardiovascular control, as well as possible directions for further research. Fausto Lucena is a researcher in the Department of Media Science at Nagoya University, Japan. He is currently visiting the Computational NeuroEngineering Laboratory as part of a Nagoya University program for international cooperation. | |

Seminars in Spring 2011 | |
---|---|

16 Mar. 2011 | Image Guidance Methods in Deep Brain Stimulation |

Atchar Sudhyadhom—Deep Brain Stimulation (DBS) has shown promise as an alternative therapy for medication refractory neurological disorders (such as Parkinson's disease, essential tremor, and dystonia). DBS requires millimeter accuracy in the targeting of specific deep brain structures. Unfortunately, standard imaging methods (CT, T1 and T2-weighted MRI) have not previously shown significant anatomic contrast of structures that are targeted in DBS. In order to enhance surgical targeting in DBS, we have developed several tools to aid in both indirect and direct targeting of subcortical brain regions. The tools that we have created include the application of a deformable brain atlas, a novel MRI scan (the Fast Gray Matter Acquisition T1 Inversion Recovery, FGATIR) for differentiation of subcortical structures, application of diffusion tractography for localization of functional subregions of the brain, and a complete clinical platform to provide image guidance integrating all the tools mentioned previously. The development and application of these novel methods for targeting will be discussed and presented. | |

9 Feb. 2011 | Survival information potential: a new criterion for adaptive system training |

Dr. Badong Chen, our new post-doc, will talk on Survival information potential: a new criterion for adaptive system training. | |

7 Feb. 2011 | Sampling - reconstruction in spline-type spaces and frames |

Jose Luis Romero works in sampling theory, reconstruction in spline-type spaces and frames. | |

Seminars in Spring 2007 | |
---|---|

13 Mar. 2007 | Hierarchical Decomposition of Neural Data Using Boosted Mixtures of Independently Coupled Hidden Markov Chains |

Seminar by Shalom Darmanjian, graduate student at the CNEL. | |

13 Feb. 2007 | Feature Selection through Local Learning: Theories, Algorithms, and Applications |

Seminar by Yijun Sun, Interdisciplinary Center for Biotechnology Research, University of Florida | |

6 Feb. 2007 | An Information Measure of Temporal Structure for Multichannel Spike Trains |

Seminar by António R. C. Paiva, graduate student at the CNEL. | |

30 Jan. 2007 | Single-Trial ERP Estimation Based on Spatio-Temporal Filtering |

Seminar by Ruijiang Li, graduate student at the CNEL. | |

Seminars in Spring 2006 | |
---|---|

10 May 2006 | Automatic speech recognition using an echo state network |

A talk by: Dr. Mark Skowronski, CNEL, UFL | |

19 Apr. 2006 | How to build a time machine |

A talk by: Dr. John G. Harris, Professor, Department of Electrical & Computer Engineering, University of Florida | |

12 Apr. 2006 | Bio-Medical Image Analysis |

A talk by: Dr. Aurelio Campilho, INEB - Biomedical Engineering Institute, University of Porto, Portugal | |

5 Apr. 2006 | A review of electronic nose sensor technologies and signal processing techniques |

A talk by: Mr Vikas Meka, Convergent Engineering, Gainesville, Florida | |

29 Mar. 2006 | Special two-session seminar |

Session 1: Correntropy as a Novel Measure for Nonlinearity Tests, by Aysegul Gunduz. Session 2: A Monte Carlo Sequential Estimation for Point Process Optimum Filtering, by Yiwen Wang. | |

8 Mar. 2006 | A Lyapunov-Based Control of Engineering Systems |

A talk by Dr. Warren Dixon Assistant Professor, Department of Mechanical and Aerospace Engineering, University of Florida | |

22 Feb. 2006 | Micro Air Vehicle Research at the University of Florida |

A talk given by: Dr. Peter Ifju, Professor, Department of Mechanical and Aerospace Engineering, University of Florida | |

8 Feb. 2006 | Investigation of Spatio-Temporal Dependencies in Epileptic ECOG |

A talk by: Anant Hegde, CNEL, UFL | |

1 Feb. 2006 | GENERALIZED BOOSTING ENSEMBLES |

A talk by: Prof. Aníbal R. Figueiras-Vidal, DTSC, Universidad Carlos III de Madrid, Spain | |

Seminars in Fall 2005 | |
---|---|

30 Nov. 2005 | A coronary circulation model for the Human Patient Simulator |

A lecture by: Dr. Johannes H. van Oostrom, Associate Professor, College of Medicine – Anesthesiology and Biomedical Engineering | |

16 Nov. 2005 | COMPUTATIONAL MODELING OF KNEE MECHANICS |

A talk by: Dr. B.J. Fregly Department of Mechanical & Aerospace Engineering, Department of Biomedical Engineering, and Department of Orthopaedics and Rehabilitation, University of Florida | |

2 Nov. 2005 | Correntropy: A new similarity measure in kernel spaces |

A talk by: Dr. Jose Principe, Distinguished Professor, Department of Electrical & Computer Engineering, University of Florida | |

26 Oct. 2005 | Advances in Cochlear Implants |

A talk by: Dr. Alice E. Holmes Professor, Department of Communicative Disorders University of Florida | |

28 Sept. 2005 | Brainprints: A biological approach to understanding learning disability |

A talk by: Dr. Christiana M Leonard Professor, Department of Neuroscience University of Florida | |

21 Sept. 2005 | AN IMPROVED MINIMUM ERROR ENTROPY CRITERION WITH SELF ADJUSTING STEP-SIZE |

A talk by: Seungju Han and Sudhir Rao Computational NeuroEngineering laboratory, Department of Electrical and Computer Engineering | |

14 Sept. 2005 | Bandwidth Extension of Telephone Speech Using Frame-based Excitation and Robust Features |

The telephone standards still in use since the 1950s limit the information bandwidth to 300-3400 Hz. In normal conversational speech, however, the frequency content lies mainly between 0 and 8000 Hz. This constraint degrades not only the sound quality but also the intelligibility of the transmitted signal. Instead of modifying the present telecommunication infrastructure, which would cost billions of dollars, many researchers have been studying more efficient methods to increase the quality of telephone speech. This work develops an innovative solution to bandwidth extension based on the linear source-filter model, which decomposes speech into two parts: the excitation and the spectral envelope. Novel approaches are used to extend the frequency information for both parts. The algorithm particularly emphasizes low-frequency reconstruction without neglecting high frequencies. Furthermore, different feature sets for modeling the spectral envelope are employed for better performance under noisy conditions. | |
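
The source-filter split the abstract relies on can be illustrated with standard LPC analysis. The sketch below is not from the talk; it uses synthetic data and an 8th-order autocorrelation-method predictor to separate a frame into an all-pole spectral envelope and an excitation residual:

```python
import numpy as np

def lpc(frame, order=8):
    """LPC coefficients via the autocorrelation method, with light regularization."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-3 * r[0] * np.eye(order)           # diagonal loading for stability
    return np.linalg.solve(R, r[1:order + 1])  # envelope (all-pole) coefficients

def excitation(frame, a):
    """Inverse-filter the frame: the prediction residual is the excitation."""
    order, pred = len(a), np.zeros_like(frame)
    for n in range(order, len(frame)):
        pred[n] = a @ frame[n - order:n][::-1]
    return frame - pred

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.05 * np.arange(400)) + 0.05 * rng.standard_normal(400)
a = lpc(x)
e = excitation(x, a)
# A predictable frame leaves a low-energy residual: the envelope is in `a`
```

A bandwidth-extension method of this family would regenerate the missing bands for the two parts separately and then resynthesize; that step is beyond this sketch.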

7 Sept. 2005 | Evolving into Epilepsy: Multiscale Electrophysiological Analysis and Imaging in an Animal Model |

A talk by: Dr. Justin C. Sanchez Assistant Professor, Department of Pediatrics, Division of Neurology University of Florida | |

31 Aug. 2005 | Electrohysterography |

First CNEL seminar of semester, on intrauterine pressure prediction from electrohysterography using the Wiener filter. | |

Seminars in Spring 2005 | |
---|---|

20 Apr. 2005 | Some connections between kernel methods and Information theoretic learning |

Information theoretic learning (ITL) and kernel methods are two research topics that emerged from the signal processing and machine learning fields, respectively. ITL is a signal processing technique that combines information theory and adaptive systems, using information-theoretic quantities as criteria to update the parameters of an adaptive system. Kernel-based learning algorithms were developed in the machine learning community; they are nonlinear versions of linear algorithms in which the data are nonlinearly transformed to a high-dimensional feature space. In this talk, I will discuss some intriguing connections between kernel methods and information theoretic learning. This discovery gives us new perspectives on both research topics. | |
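
One concrete connection of this kind (stated here as an illustration, not necessarily the one developed in the talk) is that the ITL information potential, a double sum of pairwise Gaussian kernel evaluations, equals the integral of the squared Parzen density estimate, i.e. a squared mean in the kernel feature space. A minimal numeric check:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(200)
sigma = 1.0

# Information potential as a pairwise double sum; the effective kernel size
# is sigma*sqrt(2), the convolution of two width-sigma Gaussians.
d = x[:, None] - x[None, :]
s2 = sigma * np.sqrt(2.0)
ip_pairwise = (np.exp(-d**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))).mean()

# The same quantity as the integral of the squared Parzen estimate
grid = np.linspace(-8, 8, 4001)
phat = np.exp(-(grid[:, None] - x[None, :])**2 / (2 * sigma**2)).mean(axis=1) \
       / (sigma * np.sqrt(2 * np.pi))
ip_integral = (phat**2).sum() * (grid[1] - grid[0])
```

The two numbers agree up to numerical-integration error, which is the kernel-method reading of an ITL quantity.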

19 Jan. 2005 | Generalized Correlation Function: Definition, Properties and Examples |

Abstract of the 40-minute seminar. | |

9 Mar. 2005 | Spatio-temporal brain dynamics in early affective perception: clues from time-frequency analyses |

Keil, Andreas—Perception and evaluation of emotionally arousing scenes is essential for the organization and regulation of an organism's behavior. Recent work in the cognitive neuroscience of emotions has provided a dimensional framework for emotional perception based on the two dimensions of valence (appetitive versus defensive/aversive system) and arousal (amount of activation in either system, or degree of co-activation of both systems). This work is based on network approaches to the organization of emotional perception and memory. Affective modulation of visual processing, for instance, may be effected by afferent projections to visual cortices, resulting in a facilitation of the neural tissue when perceptual content is emotionally arousing. While interactions and overlaps between emotional arousal systems and attentional systems have recently been studied extensively, the dynamics of the acquisition of affective dispositions remain largely unclear. The present talk presents a series of experiments using large-scale high-frequency and low-frequency oscillations, measured with electro- and magnetoencephalography, to examine the time course of learning aversive reactions to conditioned stimuli. Using measures of within-site and between-site phase-locking, we examined the functional organization of the subsystems that act to bind together the activity of the distributed areas and finally establish emotional perception. Both induced and evoked (driven) oscillations showed time-dependent sensitivity to classical conditioning, with phase-locking and amplitude increasing as a function of (i) duration and (ii) motivational significance of a given stimulus. | |

9 Feb. 2005 | Filter Based Encoding Model of Retinal Ganglion Cells |

Boykin, P. Oscar—Using uniform-field input, we model mouse retinal ganglion cells as linear filters with a stochastic rate code that encodes the filtered value. For each cell, a genetic algorithm is used to find the linear filter of the input that has the optimal mutual information with the output of the cell. Based on the shape of the filters (the encoding time course of the neurons), three broad classes of cells are identified: FAST-ON, SLOW-ON, and OFF (as well as smaller sub-clusters). | |

23 Feb. 2005 | Wireless Interface Electronics for Implantable BMI devices |

Bashirullah, Rizwan—Wireless power transfer and bi-directional data links are often required in biomedical implants used for long term neural recording/stimulating systems. This talk will provide an overview of key factors to achieve high data rate transmission and internal power supply regulation for implant electronics that are limited by size, power and operational range. System level specifications, architectural considerations and circuit topologies specific to brain machine interface devices will be presented. | |

Seminars in Fall 2004 | |
---|---|

1 Dec. 2004 | CNEL Seminar |

Statistical automatic identification of microchiroptera from echolocation calls: Lessons learned from human automatic speech recognition | |

3 Nov. 2004 | Local and Global Behavior of Neural Error Correction |

Neural information coding is often characterized as noisy and unreliable, typically because spike trains are irregular in appearance and experimental stimuli presented at different times produce different responses. This paper describes ongoing work investigating the effects and significance of errors in spike trains. More specifically, here we test the dependence of error correction in neural coding on a postsynaptic neuron’s extant behavior, described in nonlinear dynamical terms. We show that the time for a neuron to recover from an error varies significantly, depending on its preceding stationary behavior and that behavior’s position within the cell’s global bifurcation structure. This implies a model of neural computation based not solely on attractors or motion within a state space, but rather motion within a global response space. | |

20 Oct. 2004 | Competitive Mixture of Local Linear Experts |

This presentation investigates phased-array magnetic resonance imaging (MRI), an important contemporary research field propelled by expected clinical gains. Although many algorithms have been proposed for phased-array MR image reconstruction, in addition to perhaps the most commonly used sum-of-squares (SoS) algorithm, these approaches are not based on an optimal signal processing framework. In this presentation, the problem of combining images obtained from multiple MRI coils is investigated from a statistical signal processing point of view, with the goal of improving the signal-to-noise ratio (SNR) in the reconstructed images. However, some weak points, including restrictive statistical assumptions, limit the reconstruction performance. An adaptive learning strategy provides a new scheme for phased-array MRI; to date there are no reported results in the literature in this area. The purpose of this research is to study the performance of an adaptive signal processing approach based on competitive mixtures of local linear experts. The proposed method has the ability to train on a set of images and generalize its performance to previously unseen images. Performance evaluations on real data validate the effectiveness of this method. Directions for further research are also pointed out. | |
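
The core idea, local experts competing for regions of the input space with one linear model per region, can be sketched in a few lines. This is a toy 1-D illustration with hard competition by nearest prototype, not the MRI application itself:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-3, 3, 1000)         # toy 1-D input
y = np.sin(x)                        # nonlinear target to approximate

# Competition: each sample is claimed by the nearest of a few fixed prototypes
protos = np.linspace(-3, 3, 6)
assign = np.argmin(np.abs(x[:, None] - protos[None, :]), axis=1)

# One linear expert (slope + intercept) per region, fit by least squares
experts = []
for k in range(len(protos)):
    m = assign == k
    A = np.column_stack([x[m], np.ones(m.sum())])
    experts.append(np.linalg.lstsq(A, y[m], rcond=None)[0])

def predict(xq):
    """Winner-take-all: only the expert owning the query's region responds."""
    a, b = experts[np.argmin(np.abs(xq - protos))]
    return a * xq + b

grid = np.linspace(-3, 3, 200)
err = np.mean([(predict(v) - np.sin(v)) ** 2 for v in grid])
```

In the actual phased-array setting the experts would act on multi-coil pixel vectors rather than a scalar input, and the competition itself is typically trained rather than fixed.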

6 Oct. 2004 | Gaussian Mixture Model for System Identification & Control |

This proposal investigates a new methodology that combines improved Gaussian mixture models (GMM) with local linear models (LLM) for dynamical system identification and control. To clarify the advantages of the mixture model, the model structure and training method are discussed in detail, along with a growing self-organizing criterion that provides an improved initialization of the Gaussian distributions and makes GMM convergence more stable. To increase local modeling capability and decrease modeling error, local linear models are trained on top of the GMM to make one-step predictions. Following the local modeling approach, dynamic inverse controllers are designed for a tracking application. Four systems with different dynamics are simulated to verify the modeling and control capability of the improved Gaussian mixture model. Through experiments and comparison with self-organizing maps and radial basis functions, it is shown that the improved GMM-LLM is a more flexible modeling approach with higher computational efficiency and more reliable convergence toward the global minimum. | |

22 Sept. 2004 | Spatio-temporal synchronizations in Epileptic EEG |

Synchronization and de-synchronization between the cortical columns of the brain are believed to be among the plausible causes of most neurological disorders, including epilepsy. Synchronization is believed to occur due to both local and global discharges of neurons. In quantifying this phenomenon, one of the main difficulties is that the brain is a highly complex, nonlinear system. The spatio-temporal changes in information across different regions of the brain are rapid and often very subtle. Therefore, one way to understand how physiological activities are coordinated in the brain is to understand how subsystems are coupled and how information propagates through them. From the epilepsy perspective, quantifying the changes in spatio-temporal interactions could potentially help us develop seizure-warning systems. This quantification would also help us identify the regions that actively participate during epileptic seizures. Various linear and nonlinear techniques have been developed to quantify the degree of synchronization; a brief overview of a few of the existing techniques will be given. In detail, we will discuss the SOM-based Similarity Index (SI) measure, a nonlinear technique to quantify synchronization in multivariate time structures. Simulation results will be presented, followed by results on epileptic EEG data. We will also discuss a couple of clustering techniques used to quantify the spatial distribution of the channel interactions. | |

15 Sept. 2004 | Spatio-temporal synchronizations in ECOG data |

Synchronization and de-synchronization between the cortical columns of the brain are believed to be among the plausible causes of most neurological disorders, including epilepsy. Synchronization is believed to occur due to both local and global discharges of neurons. In quantifying this phenomenon, one of the main difficulties is that the brain is a highly complex, nonlinear system. The spatio-temporal changes in information across different regions of the brain are rapid and often very subtle. Therefore, one way to understand how physiological activities are coordinated in the brain is to understand how subsystems are coupled and how information propagates through them. From the epilepsy perspective, quantifying the changes in spatio-temporal interactions could potentially help us develop seizure-warning systems. This quantification would also help us identify the regions that actively participate during epileptic seizures. Various linear and nonlinear techniques have been developed to quantify the degree of synchronization; a brief overview of a few of the existing techniques will be given. In detail, we will discuss the SOM-based Similarity Index (SI) measure, a nonlinear technique to quantify synchronization in multivariate time structures. Simulation results will be presented, followed by results on epileptic EEG data. We will also discuss a couple of clustering techniques used to quantify the spatial distribution of the channel interactions. | |

15 Sept. 2004 | Research Progress in the CNEL |

Review of the Silicon Cortex Project, Motorola Project, Brain Dynamics Project, Bio-Nano Project, Information Theoretic Learning Project, and LoFLYTE Project. | |

10 Nov. 2004 | Merging temporal and pdf information: from kernel matched |

Ignacio Santamaria—In this talk we will present some recent work that exploits the link between kernel-based methods and ITL concepts. We start by solving the matched filtering problem in the feature space. To this end we apply kernel Fisher discriminant analysis (K-FDA). The interesting point is that the obtained test statistic uses time-domain information (through a set of projectors from the feature space to a lower-dimensional space), as well as the pdf, or statistical, information about the problem. In this way, the structure of the K-FDA solution gives us some insight into how to obtain an autocorrelation function in the feature space, which is a much more ambitious goal. A very simplistic kernel autocorrelation function is presented and its performance is illustrated with some simulation examples. | |

13 Oct. 2004 | Knowledge Discovery Through a Fisher Game |

Venkatesan, Ravi C.—Extreme Physical Information (EPI) is a self-contained theory for eliciting physical laws from a system/process (Nature) based on a measurement-response framework. A specific form of the Fisher information measure (FIM), known as the Fisher channel capacity (FCC), is employed as a measure of uncertainty; the FCC is the trace of the FIM. EPI may be construed as a zero-sum game between a gedanken observer and a system under observation (characterized by a demon, reminiscent of the Maxwell demon, residing in a conjugate space). The payoff of the competitive game results in a variational principle that defines the physical law generating the observations made by the gedanken observer, as a consequence of the response of the system to the measurements. A principled formulation for reconstructing pdfs from arbitrary discrete time-independent random sequences, based on an invariance-preserving extension of the EPI theory, is presented. Invariances are incorporated into the invariant EPI (IEPI) model through a discrete variational complex inspired by the seminal work of T. D. Lee. A quantum mechanical connotation is provided to the Fisher game. This is accomplished through the IEPI Euler-Lagrange equation, which acquires the form of a time-independent Schrödinger-like equation, and the quantum mechanical virial theorem. The concomitant constraints of the IEPI variational principle are consistent with the Heisenberg uncertainty principle. The ansatz describing the state estimators is obtained so as to self-consistently satisfy an analog of the Fisher game corollary. The game corollary permits the demon to make the closing move in the Fisher game by minimizing the FCC. This corresponds to a state of maximum uncertainty and is in keeping with the demon's strategy of minimizing the information made available to the observer.
A fundamental tenet of the EPI/IEPI model is the collection of statistically independent data by the observer. A principled IEPI Fisher game formulation guaranteeing the statistical independence of the quantum mechanical observables is presented, utilizing statistical analyses commonly employed in Independent Component Analysis (ICA). Specifically, correlations are first eliminated using a whitening process (facilitated by a linear filter, or PCA) in conjunction with a Givens rotation (a unitary transform). Next, the IEPI Fisher game is played between the gedanken observer and the process inhabiting the conjugate system space. Finally, an inverse whitening filter is applied to the observables corresponding to the reconstructed state vectors obtained from the Fisher game. This yields a novel form of ICA based on minimizing the FCC. The prospect of obtaining an optimal whitening filter based on the Fisher game corollary is investigated. Qualitative analogies and distinctions between the Fisher game ICA model and other prominent ICA theories are briefly discussed. Reconstruction of time-independent random sequences generated from Gaussian mixture models demonstrates the efficacy of the Fisher game/ICA formulation. | |

Seminars in Spring 2004 | |
---|---|

31 Apr. 2004 | Analysis of Multivariate structures: a SOM-based Similarity Index Measure |

Synchronization and de-synchronization between the cortical columns of the brain are believed to be among the plausible causes of most neurological disorders, including epilepsy. Synchronization is believed to occur due to local discharges of neurons as well as discharges at the global level. Various linear and nonlinear techniques have been developed to quantify the degree of synchronization. I plan to give a brief overview of some of the existing techniques for analyzing multivariate structures and then propose the SOM-based Similarity Index measure, which is mainly a state-space approach based on nearest neighbors. Simulations performed on toy data will be presented. I will conclude with some preliminary results obtained from applying the SOM-based SI to the epileptic data. | |

8 Apr. 2004 | Spiking Freeman, final report |

My final presentation to the CNEL group, covering the spiking Freeman research. | |

7 Apr. 2004 | Freeman's K Sets as Dynamic Computational Models |

Recently, the liquid state machine and the echo state network have been shown to possess interesting computational properties (universal approximation) in the class of functionals with exponentially decaying memory. In this talk, we briefly review these models and contrast them with another dynamical, biologically plausible model of the olfactory cortex proposed by Walter Freeman. Here we restrict the architecture to a distributed interconnection of reduced KII sets, or a KI network. Our approach treats the states of the Freeman network as a representational infrastructure that stores in its dynamics a short-term history of the input patterns. Then, at any time, information related to the input history can be accessed with a simple instantaneous read-out. This work provides two important additions to Freeman networks. First, it emphasizes the need for optimal readouts and shows how to derive them adaptively. Second, it shows that the Freeman model is able to process continuous, time-varying inputs. We provide theoretical conditions on the system parameters of the Freeman model that result in echo states, and we present experimental results verifying the validity of this approach. Finally, we discuss the implications of the echo state property of the Freeman model for how the brain might process information. | |

3 Mar. 2004 | Information Theoretic Spectral Clustering |

In this talk, we show that there is a close connection between an information theoretic pdf distance and the graph cuts used in spectral clustering. Specifically, we develop a new spectral clustering algorithm in which the solution is given as a weighted sum of certain top eigenvectors of the data affinity matrix. | |
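
The recipe described, forming a data affinity matrix and clustering via its top eigenvectors, can be sketched as follows. This is a generic spectral-clustering toy example; the specific eigenvector weighting derived from the pdf distance in the talk is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 2-D blobs of 30 points each
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])

# Gaussian affinity matrix; the kernel size (here 1.0) is a free parameter
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)

# For nearly disconnected blobs, each top eigenvector of K localizes on one
# block, so assigning each point to its largest-magnitude eigenvector clusters
w, V = np.linalg.eigh(K)                  # eigenvalues in ascending order
labels = np.argmax(np.abs(V[:, -2:]), axis=1)
```

With strongly overlapping clusters one would instead use a normalized affinity and k-means on the eigenvector rows; the toy case above keeps only the eigenvector step.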

18 Feb. 2004 | Gaussian Mixture Model for system identification & control |

In this talk, I will outline the GMM approach to system identification and control. In particular, I will discuss the benefits of GMM methods compared with other approaches. These issues are of interest to people dealing with system identification, prediction, or interpolation. Finally, results on applications such as chaos control, SISO control, and MIMO control are shown. | |

11 Feb. 2004 | Design and Performance of Nanoscale Content-Addressable Memories |

Nanoscale components are currently being investigated by many researchers in order to create smaller and potentially faster logic and memory devices. There are many challenges to using components at this scale, such as interfacing them with microscale components and mitigating the high defect rate inherent in the assembly of these devices. In this presentation, we propose a content-addressable memory architecture that uses a stochastic interface between the micro- and nano-scales. We show how this technique provides a solution to both the interface problem and the high defect rate of nanoscale components. Some theoretical performance results are given and high-level simulations are presented. | |

28 Jan. 2004 | Phased-Array MRI Image Combination by Mixture of Local Linear Experts |

Phased-array magnetic resonance imaging is an important contemporary research field in terms of the expected clinical gains in medical imaging technology. Recent research focused on heuristic coil image recombination methods as well as statistical signal processing approaches. In this presentation, we investigate the performance of an adaptive signal processing approach, namely mixture of local linear experts. The proposed method has the ability to train on a set of images and generalize its performance to previously unseen images. Performance evaluations on real data validate the effectiveness of this method. | |

21 Jan. 2004 | A spiking neural network |

A spiking neural network implementing an associative memory is proposed. The corresponding circuit with eight neurons is designed in the AMI 0.5 um CMOS process. Willshaw-type (Palm-type) binary synapses and integrate-and-fire (IF) neurons are used to simplify the circuit design. Time thresholding is also used in the circuit. A large-scale network with 64 neurons is simulated, and its storage capacity is calculated and analyzed. | |
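
A software analogue of the Willshaw/Palm memory in the abstract is compact enough to sketch: binary clipped-Hebbian weights and a hard threshold standing in for the integrate-and-fire circuit. The 64-neuron size matches the simulated network; everything else below is illustrative:

```python
import numpy as np

N, K = 64, 4                 # 64 neurons as in the simulation; K active per pattern
rng = np.random.default_rng(2)

# Store 10 sparse patterns by clipped Hebbian learning:
# W[i, j] = 1 if units i and j ever fire together
patterns = [rng.choice(N, size=K, replace=False) for _ in range(10)]
W = np.zeros((N, N), dtype=int)
for p in patterns:
    W[np.ix_(p, p)] = 1

def recall(cue):
    """Fire every neuron whose dendritic sum reaches the cue size (hard threshold)."""
    s = W[:, cue].sum(axis=1)
    return np.flatnonzero(s >= len(cue))

cue = patterns[0][:3]        # partial cue: 3 of the 4 active units
out = recall(cue)            # contains the full stored pattern (plus any spurious hits)
```

Storage capacity analyses of the kind mentioned in the abstract count how many such patterns can be stored before spurious recalls dominate.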

21 Apr. 2004 | Graphical Models and Their Application in Computer Vision |

Todorovic, Sinisa—In recent years, it has become increasingly evident that researchers concerned with systems that function under uncertainty seek solutions within the probabilistic graphical-model framework. A graphical model is a graph whose nodes are characterized by random variables and whose joint distribution is defined as a product of functions specified on connected subsets of nodes. The graph formalism provides general algorithms for computing marginal and conditional probabilities of interest, as well as control over computational complexity. A review of the literature offers two broad classes of models, namely descriptive and generative models, which differ in structural complexity and difficulty of inference. We will present an overview of these models and illustrate their advantages and shortcomings on examples related to the object-recognition problem in computer vision. | |

Seminars in Fall 2003 | |
---|---|

20 Nov. 2003 | Implementing the Silicon Cortex |

CNEL Day poster. Spiking Freeman model. | |

19 Nov. 2003 | Modelling and Detection of Limit Cycle Oscillations in Thin-Wing Aircraft Using Adaptable Linear Models |

A method for modeling the flutter response of a thin-winged aircraft is presented. Flutter modeling traditionally relies on complicated mathematical approximations or elaborate adaptive systems. In order to simplify the task, this work proposes a two-part adaptable hybrid physical model. The first part models the structure from its known physical properties. The output of the first part is fed into an adaptive model that provides the signal changes needed to reflect the forces encountered by the wing during flight. The result is an accurate synthesis of flight test data for a period of time in which conditions remain nearly constant. In order to cope with changing conditions during flight, the test data are segmented with statistical methods using numerical estimates of the error distributions. Models are trained for each segment. The overall sequence of models provides a complete synthesis of the nonlinear flutter response as flight conditions change. | |

18 Nov. 2003 | Graphical Models: Bayesian Networks |

Brief overview of graphical models (continuation of seminar 1). | |

4 Nov. 2003 | Block Turbo Codes (BTC) using Graphical Models |

Application of Graphical Models (e.g.: Bayesian Networks) to Error Control Coding (ECC) methods, specifically to Block Turbo Codes | |

1 Oct. 2003 | Optimal signal processing in brain-machine interface |

Research on Brain-Machine Interfaces (BMIs) has recently demonstrated promising results in estimating the transfer function from neural firing activities to the hand positions of a primate with relatively simple Wiener filters. It is rather surprising that a simple linear projection can approximate the operation of the highly complex neural-motor system between cortical neurons and muscle fibers, with a correlation coefficient around 0.8 between the estimated and the actual hand trajectories. In this work, we investigate the challenges and opportunities of this class of models for BMIs within the optimal signal processing framework, and possibly shed light on how to go beyond the present level of estimation performance. | |
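
The linear projection the abstract refers to is the classic tap-delay Wiener filter from binned firing rates to hand position. A synthetic-data sketch of the least-squares solution (the encoding model generating `rates` below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
T, C, L = 2000, 10, 5        # time bins, neural channels, embedding taps

hand = np.cumsum(rng.standard_normal(T))   # synthetic 1-D hand trajectory

# Invented encoding model: each "neuron" is a noisy, delayed copy of the trajectory
rates = np.stack([np.roll(hand, k % L) + 0.5 * rng.standard_normal(T)
                  for k in range(C)], axis=1)

# Tap-delay design matrix: current and L-1 past bins for every channel
Xd = np.hstack([np.roll(rates, k, axis=0) for k in range(L)])[L:]
y = hand[L:]

# Wiener solution w = R^{-1} p, computed via least squares for stability
w, *_ = np.linalg.lstsq(Xd, y, rcond=None)
cc = np.corrcoef(Xd @ w, y)[0, 1]          # correlation of prediction with target
```

On real BMI data the same construction is applied per coordinate, which is where the correlation coefficients around 0.8 cited in the abstract come from.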

24 Sept. 2003 | The Spike Activity of Neocortical Columns: A Dynamical Systems Analysis |

I shall begin by introducing an abstract dynamical system for networks of spiking neurons that is formulated based on a general model of the biological neuron. I shall then present simulation as well as analytical results for the class of instantiations of the system that model typical neocortical columns. Based on these results I shall argue that the spike activity of neocortical columns is profoundly influenced by attractors that are not only almost surely chaotic but are also potentially anisotropic. | |

10 Sept. 2003 | Synchronization Analysis on Coupled Reduced KII sets |

We discuss the synchronization of two identical reduced KII sets. Analytical conditions on the coupling strength between the two identical nonlinear dynamical systems are presented. When the strength is chosen according to these computed boundaries, the two systems are guaranteed to synchronize and to present the dynamical behavior of an individual set. The results provide theoretical information for understanding higher-level structures in Freeman’s model, as well as for constructing applications of KII networks. | |

3 Sept. 2003 | CNEL Overview: BMI |

Justin’s review of the Brain Machine Interface project. | |

3 Sept. 2003 | CNEL Overview: ITL |

Deniz’s review of the Information Theoretic Learning project. | |

3 Sept. 2003 | CNEL Overview: SiCortex |

Dongming’s review of the Silicon Cortex project. | |

3 Sept. 2003 | CNEL Overview: BDL |

Anant’s review of the Brain Dynamics Lab project. | |

3 Sept. 2003 | CNEL Overview: LoFlyte |

Jeongho’s review of the LoFlyte project. | |

3 Sept. 2003 | CNEL Overview: Speech |

Mark’s review of the speech-related projects. | |

Seminars in Spring 2003 | |
---|---|

25 July 2003 | BMI Hardware July 03 |

Overview of hardware for the DARPA meeting. | |

3 June 2003 | Interpreting Neural Activity Through Linear and Nonlinear Models for Brain-Machine Interfaces |

NeuroNoon talk about interpreting neural activity through linear and nonlinear models | |

22 Apr. 2003 | Information Theoretic Self-Organization of Multiple Agents |

A talk about the application of information theoretic interactions for self-organizing a group of robots. | |

21 Jan. 2003 | Information Theory: A Brief Introduction |

An overview of the basic principles of information theory is presented. As motivation, an application to blind source separation is demonstrated. To view the slides, change the file extension from pdf to ppt and open the file in PowerPoint. | |

Seminars in Fall 2002 | |
---|---|

7 Dec. 2002 | Minimax ICA |

In this presentation, we show how Jaynes’ principle of maximum entropy can be applied to ICA and blind deconvolution/equalization. | |

20 Nov. 2002 | Blind Equalization by Sampled PDF fitting |

This is a new blind equalization technique for multilevel modulations. The proposed approach consists of fitting the probability density function (pdf) of the corresponding modulation at a set of specific points. The symbols of the modulation, along with the requirement of unity gain, determine these sampling points. The underlying pdf at the equalizer output is estimated by means of the Parzen window method. A soft transition between blind and decision directed equalization is possible by using an adaptive strategy for the kernel size of the Parzen window method. The method can be implemented using a stochastic gradient descent approach, which facilitates an on-line implementation. The proposed method has been compared with CMA and Benveniste-Goursat methods in QAM modulations. | |
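
A sketch of the pdf-fitting idea, under assumed values for the constellation and kernel size: estimate the equalizer-output pdf with a Parzen window and penalize its mismatch to the target pdf, evaluated only at the symbol points. A residual gain error misplaces the pdf peaks and raises the cost:

```python
import math, random

def parzen(samples, point, sigma):
    """Parzen-window estimate of the pdf of `samples` at `point` (Gaussian kernel)."""
    c = 1.0 / (len(samples) * sigma * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-(point - s) ** 2 / (2.0 * sigma ** 2)) for s in samples)

# 4-PAM constellation (assumed); the symbol points define where the pdf is fitted
symbols = [-3.0, -1.0, 1.0, 3.0]
sigma = 0.3                                          # kernel size (assumed)
target = 0.25 / (sigma * math.sqrt(2.0 * math.pi))   # ideal peak height at each symbol

def pdf_fit_cost(outputs):
    return sum((parzen(outputs, s, sigma) - target) ** 2 for s in symbols)

random.seed(0)
good = [random.choice(symbols) + random.gauss(0, 0.1) for _ in range(1000)]
bad = [0.5 * v for v in good]   # residual gain error: peaks land between symbols
```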

16 Oct. 2002 | Understanding the KII |

Recent progress in SiCortex group is presented. Based on bifurcation analysis, we find regions where a reduced KII set (in the computational model of the olfactory cortex) could be linearized. This helps us to control the dynamical behavior of a reduced KII set and KII network. Hardware measurement of an analog VLSI chip that contains one reduced KII set is also presented. | |

9 Oct. 2002 | Phased Array MRI Image Reconstruction |

The seminar introduces background on Magnetic Resonance Imaging (MRI) and phased-array coils. A data model of phased-array coil MRI is set up based on the assumption of constant coil sensitivity. Three new algorithms are analyzed and compared to the conventional sum-of-squares (SoS) reconstruction. | |
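
The conventional SoS baseline can be stated in a few lines; the complex coil sensitivities below are invented for illustration:

```python
import math

def sos(coil_pixels):
    """Conventional sum-of-squares combination of per-coil complex pixel values."""
    return math.sqrt(sum(abs(p) ** 2 for p in coil_pixels))

# one pixel with magnitude 2.0 seen by four coils with assumed sensitivities
true_val = 2.0
sens = [0.9, 0.5 + 0.3j, 0.2 - 0.6j, 0.7j]
coil = [s * true_val for s in sens]   # noiseless per-coil observations
combined = sos(coil)
# SoS recovers |true_val| up to the coil-geometry factor sqrt(sum |s_i|^2)
```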

18 Sept. 2002 | Human Factor Cepstral Coefficients: Improving on a generation's speech feature extraction algorithm |

First described in 1980, mel frequency cepstral coefficients (mfcc) have become the premier speech front end for today’s state-of-the-art speech recognition systems. The key aspect of the algorithm is a filter bank inspired by the human auditory system: triangular filters are spaced along the perceptually-motivated mel frequency scale. Yet the original 1980 algorithm does not adequately describe filter _bandwidth_. This ambiguity has led to the creation of popular variations seen today as well as tragically-flawed implementations. This talk will present a solution that clarifies this ambiguity--human factor cepstral coefficients--by decoupling filter bandwidth from other filter design parameters. Recognition experiments using a Hidden Markov Model and various additive noise sources will be presented to validate the new algorithm. | |
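
For context, a sketch of the perceptually motivated filter placement: boundary points equally spaced on the mel scale, which in classic MFCC also fixes each triangle's bandwidth. The scale constants are one common variant, not necessarily the exact 1980 values:

```python
import math

def hz_to_mel(f):
    """Hz to mel, one widely used variant of the scale."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def triangle_points(n_filters, f_lo, f_hi):
    """n_filters + 2 boundary points, equally spaced in mel. In classic MFCC
    filter i spans points (i, i+1, i+2), so bandwidth is tied to the spacing
    of neighbouring centers; decoupling the two is HFCC's contribution."""
    lo, hi = hz_to_mel(f_lo), hz_to_mel(f_hi)
    return [mel_to_hz(lo + i * (hi - lo) / (n_filters + 1))
            for i in range(n_filters + 2)]

pts = triangle_points(20, 0.0, 8000.0)
# spacing in Hz grows with frequency, mirroring auditory resolution
```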

9 Sept. 2002 | HMM-based Neural Spike Analysis |

Goal: classify arm movement into two classes, (1) stationary and (2) moving. Talk outline: classifier overview; HMM/VQ discussion; generating training data; results; current/future work. | |

6 Sept. 2002 | Modified Kalman Filter Based Method for Training State-Recurrent Multilayer Perceptrons |

NNSP presentation on using Kalman-filter-based methods for training RMLPs. | |

4 Sept. 2002 | Error whitening criterion for adaptive filtering: Theory and Algorithms |

MSE has been the criterion of choice in many function approximation tasks including adaptive filter optimization. There are alternatives and enhancements to MSE that have been proposed in order to improve the robustness of learning algorithms in the presence of noisy training data. In FIR filter adaptation, noise present in the input signal is especially problematic since MSE cannot eliminate this factor. A powerful enhancement technique, total least squares, on one hand, fails to work if the noise levels in the input and output signals are not equal. The alternative method of subspace Wiener filtering, on the other hand, requires the noise power to be strictly smaller than the signal power to improve SNR. We have proposed an extension to the traditional MSE criterion in filter adaptation, which we have named the error-whitening criterion. This new criterion is inspired by the observations made on the properties of the error autocorrelation function. Specifically, we have shown that using non-zero lags of the error autocorrelation function, it is possible to obtain unbiased estimates of the model parameters even in the presence of white noise on the training data. In this talk, we will present the theory behind the Error-Whitening Criterion along with the adaptive algorithms based on this criterion. Both the stochastic and recursive (fixed-point type) algorithms will be presented. | |
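
The key claim, that nonzero lags of the error autocorrelation remain unbiased under input noise while MSE does not, can be checked numerically. The sketch below uses a one-tap system with a colored (AR(1)) input and white input noise; all constants are invented for illustration:

```python
import random
random.seed(1)

N, a = 50000, 2.0   # a is the true one-tap filter weight
# colored input x (AR(1)), clean desired output y, noise-corrupted input xn
x, xs = 0.0, []
for _ in range(N):
    x = 0.9 * x + random.gauss(0, 1)
    xs.append(x)
y = [a * v for v in xs]
xn = [v + random.gauss(0, 0.5) for v in xs]

# least-squares / MSE estimate: biased toward zero by the input noise
w_ls = sum(u * t for u, t in zip(xn, y)) / sum(u * u for u in xn)

def err_autocorr(w, lag=1):
    """Sample error autocorrelation at a nonzero lag for weight w."""
    e = [t - w * u for u, t in zip(xn, y)]
    return sum(e[k] * e[k - lag] for k in range(lag, N)) / (N - lag)

# the lag-1 error autocorrelation vanishes near the TRUE weight, not at w_ls
r_true, r_ls = err_autocorr(a), err_autocorr(w_ls)
```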

4 Sept. 2002 | Input-Output Mapping Performance of Linear and Nonlinear Models for Estimating Hand Trajectories from Cortical Neuronal F |

NNSP BMI Presentation | |

Seminars in Spring 2002 | |
---|---|

6 Feb. 2002 | Brain Machine Interfaces for Motor Control |

Using the firing patterns of populations of cortical neurons, research has shown that the hand trajectory of owl monkeys can be predicted in real-time. It is believed that the recordings from a fraction of the total number of cortical neurons contain enough information to encode the essential parameters (position, velocity, acceleration, force) of motor tasks. The firing of a neuron can be related to one or a combination of kinematic parameters. Cortical neurons encode the analog information of the environment as a series of membrane depolarizations. Using microwire electrodes, depolarizations can be recorded as voltage spike trains from populations of single neurons. Communicating throughout a 3-D network over the space of the brain cortex, neurons send their information over time. We seek to understand the spatio-temporal encoding and decoding of neuronal firing patterns. This novel research seeks not only to predict the next motor position in time, but to find a model or multiple models which map the firing patterns to position over all time. The tools which will exploit the spatio-temporal encoding are adaptive systems. Adaptive systems have many forms but all contain the same components: topology, adaptation algorithm, and optimization criterion. Depending on the problem, certain combinations of components will be better suited to extract information. The mapping of neuronal firing patterns to hand position is many-to-one. It remains unknown whether this many-to-one relationship is a simple weighted combination of the firing patterns, or a complex combination of many nonlinear functions. We seek to compare the modeling performance of the linear filter with a Time-Delay Neural Network and a Recurrent Network. This research is funded by DARPA and represents the collaboration of the University of Florida, Duke University, MIT, and SUNY. | |
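
As a minimal illustration of the linear end of this comparison, the sketch below adapts a two-tap linear decoder with LMS on synthetic "firing rates"; the weights and data are invented, not drawn from the BMI recordings:

```python
import random
random.seed(0)

# toy stand-in for the linear decoding model: hand position as a (made-up)
# weighted sum of a neuron's current and previous firing rates
N = 5000
w_true = [0.8, -0.3]
rates = [random.random() for _ in range(N)]
pos = [w_true[0] * rates[t] + w_true[1] * rates[t - 1] for t in range(1, N)]

# adapt a 2-tap linear decoder with the LMS rule: w <- w + mu * e * x
w, mu = [0.0, 0.0], 0.2
for t in range(1, N):
    x0, x1 = rates[t], rates[t - 1]
    e = pos[t - 1] - (w[0] * x0 + w[1] * x1)   # prediction error
    w[0] += mu * e * x0
    w[1] += mu * e * x1
# w converges toward w_true since the data are noiseless
```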

13 Jan. 2002 | Hardware Implementation of the LMS Adaptive Filter for a Brain-Machine |

A description of the TI C33 floating-point DSP along with comments on implementing adaptive algorithms in DSP. Included is a performance comparison of the normalized LMS algorithm on the DSP and in MATLAB, using neural data from the BMI project. | |

17 Apr. 2002 | Using an Auditory Model to Predict Voice Quality |

Srivastav, Rahul—Description and quantification of voice quality is important in many applications, particularly in the assessment and treatment of patients with voice disorders. Several objective measures, based either on the physiological correlates of voice production or on the vocal acoustic signal have been proposed for this purpose. However, these measures often fail to predict subjective ratings of voice quality. Using an auditory model to simulate peripheral encoding of vocal signals appears to provide a better method to understand the acoustic-perceptual relationships for voice quality and can help develop objective measures of voice quality that correspond well with subjective judgments. | |

20 Mar. 2002 | Towards Autonomous Flight for Micro Air Vehicles (MAVs) |

Nechyba, Michael C.—Substantial progress has been made recently towards designing, building and flying Micro Air Vehicles (MAVs), nominally defined by DARPA to have a maximum dimension no greater than six inches. Virtually undetectable from the ground, MAV systems can serve as remote sensors for a number of important applications, including surveillance, chemical-agent detection and target tracking; such applications have taken on special significance in light of the recent terror attacks and the consequent war on terrorism. Thus far, however, progress in overcoming the aerodynamic obstacles to flight at very small scales has not been matched by similar progress in equipping MAVs with autonomous flight capabilities. Practical considerations, such as limited weight, power and payload capacity, do not permit the same approach as on larger Unmanned Air Vehicles (UAVs). To address this challenge, we have developed a vision-guided flight stability and control system, and are moving towards full flight autonomy with on-board GPS navigation. In this talk, I will first give an overview of MAV (and small UAV) research at the University of Florida. Then, I will describe a real-time, vision-based horizon detection and tracking algorithm which lies at the core of our flight stability and control system, and will show video of recent self-stabilized, semi-autonomous MAV flights. Finally, I will conclude with a discussion of our current work in integrating on-board GPS for navigation, and future research directions, including the deployment of MAVs on Unmanned Ground Vehicles for remote sensing, and multiple-MAV deployment for coordinated flight missions. | |

Seminars in Fall 2001 | |
---|---|

19 Sept. 2001 | Introduction to Blind Source Separation |

Blind source separation attempts to restore mutual statistical independence of a set of mixed (originally independent) signals. It is achieved with an adaptive filter, and all adaptive filters have three components, the architecture, criterion, and optimization method. This discussion will focus on the different architectures and criteria used for source separation, as indicated in the source separation literature. In addition, a special emphasis will be placed on criteria developed within CNEL. | |

Seminars in Spring 2000 | |
---|---|

12 Apr. 2000 | Blind source separation using information theory |

Maximizing the output entropy of a nonlinear demixer has been shown to separate a mixture into its independent components. This can also be accomplished by minimizing the mutual information between the outputs. However, the mutual information method uses an estimate of the joint entropy. For accurate estimation of the joint entropy, an exponentially increasing amount of data is required as the dimensionality increases linearly. This is circumvented in the proposed criterion by restricting the structure to sphering followed by a rotation matrix. In addition, the proposed method utilizes Renyi’s quadratic entropy as a substitute for Shannon’s, which reduces the computational complexity considerably. The proposed method is compared to a second order technique and to the InfoMax criterion proposed by Bell and Sejnowski. | |
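
A sketch of the Renyi quadratic entropy estimator alluded to above, in its pairwise "information potential" form, on invented 1-D samples; the kernel size 0.3 is an arbitrary choice:

```python
import math, random

def renyi_quadratic_entropy(samples, sigma):
    """H2 = -log of the 'information potential': the mean pairwise Gaussian
    kernel evaluation G(x_i - x_j) with kernel std sqrt(2)*sigma, which is
    the plug-in estimate of the integral of the squared Parzen pdf."""
    n = len(samples)
    var = 2.0 * sigma * sigma   # variance of the pairwise kernel
    ip = sum(math.exp(-(a - b) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
             for a in samples for b in samples) / (n * n)
    return -math.log(ip)

random.seed(0)
narrow = [random.gauss(0, 0.5) for _ in range(300)]
wide = [random.gauss(0, 2.0) for _ in range(300)]
h_narrow = renyi_quadratic_entropy(narrow, 0.3)
h_wide = renyi_quadratic_entropy(wide, 0.3)
# a more spread-out distribution yields a larger quadratic entropy
```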