Title: Band-pass filtering and wavelets analysis
Prof. D. Stephen G. Pollock, University of Leicester, UK.
Summary: Wavelets analysis provides a means of analysing non-stationary time series whose underlying statistical structures are continually evolving. It is an analysis in both the time domain and the frequency domain.
The tutorial will begin by describing the effects of digital filtering in the time domain and the frequency domain. It will proceed to provide the generalisation of the Shannon sampling theorem that is appropriate to band-pass filtering. This theorem establishes a relationship between continuous signals and their corresponding sampled sequences that is essential to a wavelets analysis. Once this background has been provided, the theories of dyadic and non-dyadic wavelets analysis can be described in detail.
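The dyadic band-splitting at the heart of this analysis can be illustrated with a minimal sketch (not part of the tutorial materials): one level of the Haar wavelet transform, which divides a sampled signal into a low-frequency and a high-frequency half-band, each at half the sampling rate. The function names are ours, chosen for the illustration.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the dyadic Haar wavelet transform.

    Splits a signal of even length into a low-pass (approximation)
    half and a high-pass (detail) half, illustrating the band-splitting
    that underlies a dyadic wavelets analysis.
    """
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "signal length must be even"
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-frequency band
    detail = (even - odd) / np.sqrt(2.0)   # high-frequency band
    return approx, detail

def haar_idwt_level(approx, detail):
    """Invert one Haar level: the two half-bands reconstruct the
    original samples exactly (perfect reconstruction)."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    x = np.empty(2 * approx.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Applying `haar_dwt_level` recursively to the approximation half yields the full dyadic decomposition; because the transform is orthogonal, the energy of the signal is preserved across the two bands.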
Title: A methodology to analyze fuzzy data
Prof. M. Ángeles Gil, University of Oviedo, Spain.
Summary: Fuzzy data are often used to model data associated with intrinsically imprecise-valued magnitudes/attributes (say perceived quality, satisfaction, attitude, and so on) in a random environment. Over the last two decades, on the basis of the concept of random fuzzy numbers and the use of Zadeh-type fuzzy arithmetic and appropriate metrics, a methodology has been developed to statistically analyze fuzzy data. This tutorial aims to recall the required preliminary tools and to present some of the already established statistical developments in connection with the central tendency/location and dispersion/scale of random fuzzy numbers. Concerning central tendency, estimation and testing methods for the population Aumann-type mean(s) will be presented. Since this location measure is very sensitive to outliers, some alternative robust location measures are introduced, their estimation is examined and their robustness is discussed. Regarding dispersion, estimation and testing methods for the population Fréchet-type variance(s) will be described. Since this scale measure is likewise very sensitive to outliers, some alternative robust scale measures are introduced, their estimation is examined and their robustness is discussed. Some of the presented methods will be illustrated by means of a real-life example. In fact, this example will serve to show the convenience of using the scale of fuzzy numbers, rather than other scales such as Likert-type scales or their fuzzy linguistic counterparts, in dealing with data from these intrinsically imprecise-valued magnitudes/attributes. Related studies as well as some future directions will be commented on briefly at the end of the tutorial.
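The two summary measures named above can be sketched for the simplest case of triangular fuzzy numbers (an illustration of ours, not the tutorial's own code): the sample Aumann-type mean reduces to a componentwise average, and a sample Fréchet-type variance is a mean squared distance to that mean. The Euclidean distance on the parameter triples used here is an illustrative stand-in for the metrics employed in the methodology.

```python
import numpy as np

def aumann_mean(sample):
    """Sample Aumann-type mean of triangular fuzzy numbers.

    Each datum is encoded as (left, mode, right). For this family the
    level-wise (alpha-cut) interval average reduces to the componentwise
    arithmetic mean, because every alpha-cut endpoint is linear in the
    three parameters.
    """
    return np.asarray(sample, dtype=float).mean(axis=0)

def frechet_variance(sample):
    """Sample Fréchet-type variance: the mean squared distance from each
    fuzzy datum to the sample Aumann-type mean. The distance used here
    is plain Euclidean distance on the (left, mode, right) triples,
    an illustrative simplification."""
    arr = np.asarray(sample, dtype=float)
    mean = aumann_mean(arr)
    return float(np.mean(np.sum((arr - mean) ** 2, axis=1)))
```

As with real-valued data, this variance is zero exactly when all the fuzzy data in the sample coincide, and it grows with their spread around the Aumann-type mean.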
Title: Practical decision making in cluster analysis: Choice of method and evaluation of quality
Prof. Christian Hennig, UCL, UK.
Summary: Cluster analysis is about finding groups in data. There are many cluster analysis methods, and on most datasets the clusterings produced by different methods will not agree. Cluster validation concerns the evaluation of the quality of a clustering. It is often used for comparing different clusterings of a dataset, stemming from different methods or from different parameters such as the number of clusters. An overview will be given of techniques for cluster validation, including visualisation methods, methods for assessing the stability of a clustering, tests, validity indexes and some new measurements of different aspects of cluster validity. The issue of what the true clusters are that we want to find, and how this depends on the specific application and on the aims and concepts of the researcher, will be discussed, so that these considerations can be connected to specific techniques for cluster validation. In the literature, the problem of cluster validation is often not well defined, and there is a focus on automatic methods without much understanding of the specific circumstances in which they work (or do not). Some insight into these issues will be provided.
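One of the standard validity indexes mentioned above, the mean silhouette width, can be sketched in a few lines (an illustration of ours, not the tutorial's own implementation). For each point, `a` is its mean distance to its own cluster and `b` its smallest mean distance to another cluster; values near 1 indicate a well-separated clustering, values near or below 0 a poor one.

```python
import numpy as np

def silhouette_width(X, labels):
    """Mean silhouette width of a clustering: a standard internal
    validity index. For each point, s = (b - a) / max(a, b), where
    a is the mean distance to the point's own cluster and b the
    smallest mean distance to any other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = X.shape[0]
    # full pairwise Euclidean distance matrix
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    s = np.zeros(n)
    for i in range(n):
        own = labels == labels[i]
        own[i] = False                      # exclude the point itself
        if not own.any():
            s[i] = 0.0                      # singleton-cluster convention
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return float(s.mean())
```

Comparing the index across clusterings from different methods, or across different numbers of clusters, is one simple instance of the validation comparisons described in the summary, though, as the tutorial stresses, no single automatic index settles which clustering is right for a given application.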