Department of Statistics
2010 Seminars
Speaker: Prof. Harry Haupt
Affiliation: U. Bielefeld
When: Monday, 13 December 2010, 2:00 pm to 3:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
Why study (regression) quantiles? We discuss some intuitive and formal motives for the use of quantile regression (QR) and introduce some of its interesting properties from a practitioner's perspective.
This is followed by a brief introduction to QR asymptotics for a class of quite general data generating processes, where we highlight some of the peculiarities of QR in a non-iid setting.
Some data examples illustrate the formal and graphical interpretation of QR results.
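For readers new to QR, here is a minimal sketch (my illustration, not the speaker's material) of how quantile regression exposes heteroscedasticity that a mean regression would hide; the simulated data and the statsmodels usage are assumptions of the example.

```python
# A minimal quantile-regression sketch (illustrative only; not the speaker's code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + rng.normal(0, 0.5 + 0.2 * x)   # heteroscedastic noise

X = sm.add_constant(x)
for q in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(y, X).fit(q=q)
    print(f"tau={q}: intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}")
# Under heteroscedasticity the fitted slope varies across quantiles,
# one of the motivations for QR in a non-iid setting.
```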
http://www.wiwi.uni-bielefeld.de/oekonometrie/team/cv.html#c2402
Probability Seminar: Proportional Fairness and its Relationship with Multi-class Queueing Networks
Speaker: Neil Walton
Affiliation: U. Cambridge
When: Friday, 10 December 2010, 2:00 pm to 3:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
We consider multi-class single-server queueing networks that have a product-form stationary distribution. A new limit result proves that a sequence of such networks converges weakly to a stochastic flow level model, and the stochastic flow level model found is insensitive. A large deviation principle for the stationary distribution of these multi-class queueing networks is also found; its rate function has a dual form that coincides with proportional fairness. We then give the first rigorous proof that the stationary throughput of a multi-class single-server queueing network converges to a proportionally fair allocation.
This work combines classical queueing networks with more recent work on stochastic flow level models and proportional fairness. One could view these seemingly different models as the same system described at different levels of granularity: a microscopic, queueing-level description; a macroscopic, flow-level description; and a teleological, optimisation description.
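To make the teleological description concrete: a proportionally fair allocation maximises sum_i w_i log x_i subject to the link-capacity constraints. Below is an illustrative sketch (not the speaker's code; the two-link network, weights and capacities are invented) that solves this small optimisation numerically.

```python
# Proportionally fair allocation on a tiny 2-link line network (illustrative sketch).
# Routes: 0 uses links A and B; 1 uses A only; 2 uses B only. Each link has capacity 1.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1, 1, 0],     # link A serves routes 0 and 1
              [1, 0, 1]])    # link B serves routes 0 and 2
c = np.array([1.0, 1.0])     # link capacities
w = np.array([1.0, 1.0, 1.0])  # route weights

obj = lambda x: -np.sum(w * np.log(x))               # maximise sum of w_i log x_i
cons = {"type": "ineq", "fun": lambda x: c - A @ x}  # capacity constraint: A x <= c
res = minimize(obj, x0=np.full(3, 0.1), constraints=cons,
               bounds=[(1e-6, None)] * 3)
print(res.x)  # approx [1/3, 2/3, 2/3], the classic proportionally fair split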
http://www.statslab.cam.ac.uk/~nsw26/
PhD student talks
Speaker: Stats PhD students
Affiliation:
When: Friday, 3 December 2010, 9:00 am to 10:00 am
Where: Eng 3408
Our PhD students will be updating us on their research via either a 20-minute talk (+5 minutes for questions) or a poster.
http://www.stat.auckland.ac.nz/~mholmes/seminars/2010_talks.pdf
Semi-parametric profile likelihood estimation and implicitly defined functions
Speaker: Dr. Yuichi Hirose
Affiliation: Victoria U.
When: Wednesday, 24 November 2010, 11:00 am to 12:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
The subject of this talk is the differentiability of the implicitly defined functions that we encounter in the profile likelihood estimation of parameters in semi-parametric models.
Scott and Wild (1997, 2001) and Murphy and van der Vaart (2000) developed methodologies that avoid dealing with such implicitly defined functions by reparametrizing the profile likelihood and using an approximate least favorable submodel in semi-parametric models.
Our result shows the applicability of an alternative approach, developed in Hirose (2010), which uses the differentiability of implicitly defined functions.
http://www.victoria.ac.nz/smsor/staff/yuichi-hirose.aspx
The Totalisator - the Algorithm that led to an Industry
Speaker: Prof. Bob Doran
Affiliation: U. Auckland
When: Thursday, 18 November 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
Almost 100 years ago, at their Ellerslie Easter meeting in 1913, the Auckland Racing Club set in operation the world's first automatic totalisator - a truly enormous computing machine. This talk describes the developments that led to the totalisator - how the simple pari-mutuel algorithm invented by Joseph Oller in the 1860s gave rise to a world-wide industry devoted to its execution. Along the way we will look into the workings of the computing machines designed to assist the totalisator, particularly the first machine at Ellerslie, and the special buildings, dotted around the NZ countryside, used to house the totalisator operation.
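For readers unfamiliar with it, the pari-mutuel algorithm itself is tiny; here is a hypothetical sketch of the computation the machines performed (all figures, and the 10% commission, are invented for illustration).

```python
# The pari-mutuel payout rule (illustrative sketch; figures invented).
def dividend_per_unit(bets, winner, commission=0.10):
    """bets: total wagered on each runner; winner: index of the winning runner."""
    pool = sum(bets) * (1 - commission)  # total pool after the operator's take
    return pool / bets[winner]           # paid per unit staked on the winner

bets = [500.0, 300.0, 200.0]  # totals on runners 0, 1, 2
print(dividend_per_unit(bets, winner=1))  # 900 / 300 = 3.0 per unit staked
```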
http://www.cs.auckland.ac.nz/~bob/
Simulation Modelling for Public Policy: Some Statistical Issues
Speaker: Prof. Peter Davis
Affiliation: COMPASS, The University of Auckland
When: Thursday, 11 November 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
Simulation techniques have the attractive features of allowing the construction of realistic, testable and modifiable models of real-world phenomena. This makes them of particular interest in the policy field. Our research group has used micro-simulation techniques to represent the role of primary care in the dynamics of an ageing society and to construct a more generic modelling tool for social policy in New Zealand. This talk will briefly outline these two applications and then open up the discussion on some of the key statistical issues that face us.
http://artsfaculty.auckland.ac.nz/staff/?UPI=pdav008&Name=Peter%20Davis
A random sample of research in statistical ecology
Speaker: A. Prof. Rachel Fewster
Affiliation: U. Auckland
When: Thursday, 14 October 2010, 3:30 pm to 4:30 pm
Where: PLT1, Science Centre
One of the attractions of statistics as a research field is the opportunity to range from the applied to the theoretical, while always addressing questions of practical importance. Furthermore, as renowned statistician John Tukey said, you get to play in everyone's backyard --- meaning that the same statistical theory can be applied in diverse applications, from physics to physiology, or genetics to geology. In my case, I can pursue my childhood interest of ecology without needing to get cold and wet. I will give examples of three problems in statistical ecology from across the spectrum, ranging from the applied to the theoretical, and explain the issues from a statistical standpoint. The featured animals will also come from across the spectrum, ranging from rats to whales, while the theoretical bits can apply to any old elusive critter. A common theme will be that of information --- how we can eke out information about the underlying processes of interest, given the observations available to us. The talk is intended to be accessible to undergraduate students.
NOTE: This talk will be held at the unconventional time of 3:30-4:30 pm.
http://www.stat.auckland.ac.nz/~fewster/
Dry Run for Royal Statistical Society Read Paper
Speaker: Prof. Chris Wild
Affiliation: U. Auckland
When: Wednesday, 13 October 2010, 2:00 pm to 3:00 pm
Where: MLT3/303-101
On World Statistics Day (20 October), and marking the launch of the RSS's 10-year statistical literacy campaign called getstats, Chris is "reading a paper" written with Maxine Pfannkuch, Matt Regan and Nick Horton entitled "Towards More Accessible Conceptions of Statistical Inference". The paper, together with the discussion it generates from people present and contributions sent in from around the world, will be published in JRSSA.
http://www.rss.org.uk/main.asp?page=1321&event=1175
http://www.rsscse.org.uk/news/rss-news
http://www.rss.org.uk/main.asp?page=1836#Oct_20_2010_Meeting
This is the dry run for the RSS talk. With all the dynamic visuals in the presentation, it is hard to imagine anything less like "reading a paper", but ...
After the read paper, Chris W., Maxine, Nick and Chris Triggs will give a large number of presentations in a series of day-long workshops at the RSS in London and the RSSCSE in Plymouth.
http://www.rsscse.org.uk/news/rsscse-news/315-getstats
http://www.stat.auckland.ac.nz/showperson?firstname=Chris&surname=Wild
Conservation evaluation with phylogenetic diversity
Speaker: Dr. Steffen Klaere
Affiliation: U. Auckland
When: Thursday, 7 October 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
(Note change of date)
In the early 1990s a group of conservation biologists proposed that the diversity of a geographic region should not be restricted to the number of species present in the region but should also incorporate the genetic information of those species. This led to the introduction of phylogenetic diversity. Though the measure was well received, its use was limited by a lack of sufficient genetic data and suitable software.
In recent years, both limitations have been addressed. With the advent of next-generation sequencing, generating massive amounts of data for a geographic region has become feasible, while bioinformaticians have provided several packages for computing the phylogenetic diversity of a set of species from a phylogeny.
Here, I will present such a tool, which employs linear programming to compute the phylogenetic diversity of a geographic region based on one or more genes for the set of species considered. I will demonstrate the power of the method on a data set of 700 floral species from the Cape of South Africa.
This is joint work with Bui Quang Minh, Arndt von Haeseler (Center for Integrative Bioinformatics, Vienna, Austria) and Felix Forest (Royal Botanic Gardens, Kew, UK).
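For the curious, the quantity being optimised is easy to state: the phylogenetic diversity (PD) of a taxon subset is the total branch length of the subtree connecting those taxa (to the root, under the rooted-PD convention). A toy sketch with an invented 4-leaf tree follows; this is my illustration, not the speaker's linear-programming tool.

```python
# Phylogenetic diversity (PD) of a taxon subset: the total branch length of the
# subtree connecting the chosen leaves to the root (rooted-PD convention).
# Illustrative sketch with an invented 4-leaf tree; not the speaker's software.
def pd_score(parent, blen, taxa):
    edges = set()
    for leaf in taxa:                 # walk each chosen leaf up to the root;
        node = leaf
        while parent[node] is not None:
            edges.add(node)           # an edge is named by its child node
            node = parent[node]
    return sum(blen[e] for e in edges)

#        root
#       /    \
#      u      v
#     / \    / \
#    A   B  C   D
parent = {"A": "u", "B": "u", "C": "v", "D": "v",
          "u": "root", "v": "root", "root": None}
blen = {"A": 1.0, "B": 1.0, "C": 2.0, "D": 2.0, "u": 0.5, "v": 0.5}
print(pd_score(parent, blen, {"A", "B"}))       # 1 + 1 + 0.5 = 2.5
print(pd_score(parent, blen, {"A", "C", "D"}))  # 1 + 2 + 2 + 0.5 + 0.5 = 6.0
```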
http://compevol.auckland.ac.nz/dr-steffen-klaere/
Improved efficiency in multi-phase case-control studies
Speaker: Prof. Chris Wild
Affiliation: U. Auckland
When: Thursday, 16 September 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
We will review motivations for multi-phase sampling, discuss results from Lee, Scott & Wild (just out in Biometrika), in which we developed efficient methods for fitting regression models to multi-phase case-control data in the special case where all covariates measured at early phases are categorical, and relate that work to some of the methods Alastair discussed in his seminar a few weeks ago.
http://www.stat.auckland.ac.nz/showperson?firstname=Chris&surname=Wild
GLAM: Array methods in statistics
Speaker: Dr. Iain Currie
Affiliation: Heriot-Watt U.
When: Friday, 10 September 2010, 3:00 pm to 4:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
A generalized linear array model (GLAM) is a low storage, high speed method for fitting a generalized linear model (GLM) when the data are in the form of an array and the model can be written as a Kronecker product. Smoothing arrays is one important application since such smoothing is particularly susceptible to runaway problems with storage and computational time. The GLAM algorithm is very fast and in large problems can be orders of magnitude faster than the usual GLM approach.
We describe the GLAM algorithms and comment on their implementation in R and Matlab. GLAM is more than an algorithm; it is a structure for modelling. We give some examples of smoothing with GLAM: a model for a mortality table subject to shocks (such as Spanish flu); a joint model for insurance data where mortality is measured by lives (claims over time at risk) and by amounts (amount claimed over amount at risk); smoothing 2-dimensional histograms.
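The computational heart of GLAM is the Kronecker identity (A kron B) vec(X) = vec(B X A'), which lets the model be evaluated without ever forming the Kronecker product. Below is a small numpy check of that identity; this is my sketch in Python, whereas the authors' implementations are in R and Matlab.

```python
# The array identity behind GLAM: (A kron B) vec(X) = vec(B X A^T),
# evaluated without ever forming the Kronecker product. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(5, 2))
X = rng.normal(size=(2, 3))                      # data array; vec() stacks columns

slow = np.kron(A, B) @ X.reshape(-1, order="F")  # explicit Kronecker product
fast = (B @ X @ A.T).reshape(-1, order="F")      # GLAM-style evaluation
print(np.allclose(slow, fast))                   # True
```

The saving is dramatic because the Kronecker product is never stored: for d factor matrices of size n x n, storage drops from O(n^(2d)) to O(d n^2).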
New ways of visualising official statistics
Speaker: Prof. Sharleen Forbes
Affiliation: Stats NZ/Victoria U.
When: Thursday, 2 September 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
Official statistics provide the evidence base for much of government policy, but they have traditionally been released in simple, standard tables and graphs. The ability to harness the power of the internet together with new graphical techniques has led to a burst of creativity in a number of national statistics offices. New static and dynamic graphs and maps, combined interactive graphs and tables, and graphs and maps that allow users to interrogate and interact with data in new ways will be demonstrated. Examples include multidimensional scatterplots, cartograms, a CPI kaleidoscope, interactive maps, dynamic population pyramids and commuter flows, and Hans Rosling's Gapminder. A word or two of warning about the possible limitations of data visualisation will also be given.
http://www.victoria.ac.nz/sog/staff/sharleen-forbes.aspx
Fitting regression models with response-biased samples
Speaker: Prof. Alastair Scott
Affiliation: U. Auckland
When: Thursday, 19 August 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
We are interested in estimating the parameters in a regression model relating a response, y, to a vector, x, of possible explanatory variables in situations where some components of x have missing values for some units and where the probability of being missing may depend on the value of y. Values may be missing by accident, as with non-response in survey data, or by design, as in a case-control study where more expensive covariates are only measured on a subset of the experimental units. We are particularly interested in case-control studies with missing data where both types of mechanism are involved.
Unlike the situation with ordinary (fully-observed) regression, the full likelihood depends not only on the regression parameters but also on the unknown covariate distribution. We certainly do not want to have to model this covariate distribution in general, so we look for semi-parametric methods that avoid the need for such modelling. We look at several such methods, first in situations where the probability of response given the observed data is known, and second in situations where we have to fit a model for this probability. It turns out that the precision of the parameter estimates is always greater in the second situation than in the first, so that, somewhat counter-intuitively, it is better to estimate the probability of missing data even when it is known.
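As background, the simplest estimator in the first situation is inverse-probability weighting. The sketch below (simulated data; a baseline illustration, not the semi-parametric methods of the talk, which improve on it) shows IPW correcting a response-biased complete-case analysis.

```python
# Inverse-probability weighting for response-biased data: a baseline sketch,
# not the semi-parametric estimators discussed in the talk. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.0 * x))))  # true logistic model

# The expensive covariate x is only observed with probability depending on y
# (cases oversampled), so the complete cases are a response-biased sample.
p_obs = np.where(y == 1, 0.9, 0.3)
seen = rng.uniform(size=n) < p_obs

X = sm.add_constant(x[seen])
w = 1.0 / p_obs[seen]   # weight each complete case by 1 / P(observed | y)
fit = sm.GLM(y[seen], X, family=sm.families.Binomial(), freq_weights=w).fit()
print(fit.params)       # close to the true values (-1.0, 1.0)
```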
http://www.stat.auckland.ac.nz/showperson?firstname=Alastair&surname=Scott
Estimating parameters of a contact network from epidemic data
Speaker: Dr. David Welch
Affiliation: Penn State U.
When: Thursday, 12 August 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 303.222, Science Centre
Networks are commonly used to model disease spread: the nodes of the network represent the hosts, and the edges between them represent contacts across which the disease may be transmitted. While there has been a great deal of work on simulating from these models, little has been done to fit them to data and estimate the associated parameters. In this talk, I'll describe efforts to fit an Erdős–Rényi (a.k.a. Bernoulli) random contact graph, with a stochastic SEIR (susceptible-exposed-infectious-removed) epidemic model running over it, to data.
The data are infection and recovery times for all infected individuals in an outbreak. I'll also talk about extending the network model and using other forms of data to aid parameter estimation.
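As an illustration of the model class (the simulation direction only, not the fitting procedure), here is a minimal discrete-time SEIR epidemic over an Erdős–Rényi contact graph; all parameter values are invented.

```python
# Discrete-time SEIR epidemic on an Erdos-Renyi contact graph: a simulation
# sketch of the model class (parameter values invented; not the fitting code).
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
g = nx.erdos_renyi_graph(n=200, p=0.03, seed=3)
beta, sigma, gamma = 0.1, 0.25, 0.2   # infection / E->I / I->R probabilities per step

state = {v: "S" for v in g}           # susceptible-exposed-infectious-removed
state[0] = "I"                        # index case
while any(s in ("E", "I") for s in state.values()):
    nxt = dict(state)
    for v, s in state.items():
        if s == "S":
            k = sum(state[u] == "I" for u in g.neighbors(v))
            if rng.random() < 1 - (1 - beta) ** k:   # escape probability per infectious edge
                nxt[v] = "E"
        elif s == "E" and rng.random() < sigma:
            nxt[v] = "I"
        elif s == "I" and rng.random() < gamma:
            nxt[v] = "R"
    state = nxt
print(sum(s == "R" for s in state.values()), "of", g.number_of_nodes(), "ever infected")
```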
http://www.cidd.psu.edu/people/jdw21/
[Mathematics Colloquium] Random objects, and objects of low complexity
Speaker: Andre Nies
Affiliation: U. Auckland
When: Thursday, 5 August 2010, 3:00 pm to 4:00 pm
Where: MLT1, building 303, Science Centre
Randomness and complexity are closely connected. We briefly consider the meaning of the two concepts in the sciences. Thereafter, we provide mathematical counterparts of the two concepts for infinite sequences of bits. Later on in the talk we discuss mathematical theorems showing the close relationship between the two.
For a mathematician, randomness of a sequence of bits is usually understood probability-theoretically. She may think of a random sequence as the outcomes of a sequence of independent events, such as coin tosses. Theorems about random objects hold outside some unspecified null class; for instance, a function of bounded variation is differentiable at every "random" real. It makes no sense to say that an individual real is random.
This approach to randomness was used by Green and Tao (Ann. of Math., 2008) when they proved that the primes contain arbitrarily long arithmetic progressions. They relied on a dichotomy between (pseudo-)randomness and being of low complexity. In a sense, the primes can be viewed as a pseudorandom subset of a set of non-zero density.
To obtain individual sequences that are random in a formal sense, one introduces a notion of effective null class. A sequence is random in this sense if it avoids each effective null class. For instance, Chaitin's halting probability is random in the sense of Martin-Löf, a concept central in the hierarchy of effective randomness notions. Every computable function of bounded variation is differentiable at every Martin-Löf random real; conversely, if a real is not Martin-Löf random then some computable function of bounded variation fails to be differentiable there (Demuth 1975; recent work of Brattka, Miller and Nies).
Effective randomness notions interact in fascinating ways with the computational complexity of sequences of bits. For instance, being far from Martin-Löf random is equivalent to being close to computable in a specific sense (Nies, Advances in Math., 2005).
http://www.cs.auckland.ac.nz/~nies/
Finding needles in a haystack: identification of significant molecular changes in influenza viruses
Speaker: Dr. Catherine Macken
Affiliation: Los Alamos National Laboratory
When: Wednesday, 30 June 2010, 11:00 am to 12:00 pm
Where: Statistics Seminar Room 222, Science Centre
The influenza virus is highly variable. Genetic changes in the virus may have important effects on viral properties. For example, some changes allow the virus to escape from vaccine protection, thereby necessitating frequent updates of the vaccine formulation. However, it appears that most viral genetic variation has no significant effect on viral properties. The challenge is to find effective approaches for identifying the few "important" changes in a background of a large amount of "noise."
Experimental approaches offer some success. However, the practicalities of experiments mean that only a small domain of variation can be explored, and thus the scope of inferences is limited. We have been developing bioinformatics approaches to the problem of finding the needles in the haystack. A potential benefit of bioinformatics approaches is broadly applicable inferences, leading to deeper insights into significant changes in the influenza virus.
There are significant statistical obstacles to these bioinformatics developments. Fundamental to our developments is a precise understanding of the way in which the influenza virus evolves. If we know how the virus evolves, we can assess the relative likelihood that a change is important or insignificant. The influenza virus evolves in complex and unusual ways, necessitating novel statistical models, which we have been developing. Further, in the influenza virus, it appears that important molecular changes rarely operate in isolation. More likely, two co-ordinated changes are needed to achieve a different-and-fitter virus. Finding these co-ordinated changes in the highly variable influenza virus requires new developments.
This talk is intended as a high-level view of our statistical, computational and experimental work toward proposing experimentally testable hypotheses about significant changes in the influenza virus. The talk will not assume any knowledge of influenza virus genetics or evolution.
Meta-Analysis: History and statistical issues for combining the results of independent studies
Speaker: Prof. Ingram Olkin
Affiliation: Stanford U.
When: Wednesday, 23 June 2010, 11:00 am to 12:00 pm
Where: Statistics Seminar Room 222, Science Centre
Meta-analysis enables researchers to synthesize the results of independent studies so that the combined weight of evidence can be considered and applied. Increasingly, meta-analysis is being used in medicine and the other health sciences, and in the behavioral and educational fields, to augment traditional methods of narrative research by systematically aggregating and quantifying the research literature.
Meta-analysis requires several steps prior to statistical analysis: formulation of the problem, literature search, coding and evaluation of the literature, after which one can address the statistical issues.
We here review some of the history of meta-analysis and discuss some of the problematic issues, such as the various forms of bias that may exist. The statistical techniques discussed include nonparametric methods, the combination of proportions, the use of different metrics, and the combination of effect sizes from continuous data.
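As a reminder of the core calculation, a fixed-effect inverse-variance combination of effect sizes looks like this (invented numbers; a sketch of the standard formula, not Prof. Olkin's examples).

```python
# Fixed-effect inverse-variance combination of study effect sizes: a minimal
# sketch of the core meta-analysis calculation (numbers invented).
import numpy as np

effects = np.array([0.30, 0.10, 0.45, 0.22])   # per-study effect sizes
ses = np.array([0.12, 0.20, 0.15, 0.10])       # their standard errors

w = 1.0 / ses**2                                # inverse-variance weights
combined = np.sum(w * effects) / np.sum(w)      # weighted mean effect
se_combined = np.sqrt(1.0 / np.sum(w))          # SE of the combined effect
print(f"combined effect {combined:.3f} (SE {se_combined:.3f})")
```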
Prof. Olkin has made fundamental contributions to multiple areas of statistics (including meta-analysis). He is NZSA Visiting Lecturer for 2010. His visit is also sponsored by Statistics NZ.
http://www-stat.stanford.edu/people/faculty/olkin/
Bayesian risk prediction for epidemic-related risk
Speaker: Dr. Chris Jewell
Affiliation: U. Warwick
When: Thursday, 10 June 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
The unpredictable nature of infectious disease epidemics implies that the characteristics of an outbreak are hard to predict in advance of an incursion. Changes in the host, pathogen, and environment over time may mean that a new outbreak does not behave as expected on the basis of prior information alone. Therefore, predictions based on current epidemic field data are required for tailoring control policy to the current outbreak.
In trying to gain a more accurate insight into how an outbreak might spread through a population, mathematical simulation modelling has become a popular and established tool. Yet simulations rely on knowing certain parameters governing epidemic dynamics, and estimating appropriate values from outbreak data has been a major challenge to their predictive credibility. To address this, the talk will describe a generic Bayesian framework for analysing epidemic data in real time, estimating critical disease parameters, and imputing individuals' necessarily unobserved infection times. Two examples (HPAI in British poultry, and the 2007 UK FMD outbreak) show how this framework can be applied to different disease outbreak scenarios, and how Bayesian inference can be used in conjunction with simulation and GIS techniques to provide information for decision-making during epidemics.
http://www2.warwick.ac.uk/fac/sci/statistics/staff/research/jewell/
Efficient and Robust Control Charts for Monitoring Process Dispersion
Speaker: Saddam Abbasi
Affiliation: U. Auckland
When: Wednesday, 12 May 2010, 10:00 am to 11:00 am
Where: Statistics Seminar Room 222, Science Centre
Control charts are widely used to monitor the stability and performance of manufacturing processes, with the aim of detecting unfavorable variations in process location and dispersion. The major portion of this talk will focus on investigating the efficiency and robustness of various Shewhart and EWMA type dispersion control charts when the normality assumption holds and when it is violated. The effect of time-varying control limits and the fast initial response feature in further increasing the sensitivity of dispersion EWMA charts will also be presented. The talk ends with a discussion of some future research issues that I will investigate during the rest of my PhD.
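For orientation, the EWMA mechanics with time-varying limits are sketched below, applied to a log sample-variance statistic as an illustrative choice of dispersion statistic; the subgroup size, lambda and L are invented, and this is a generic sketch rather than the speaker's chart designs.

```python
# EWMA monitoring of a dispersion statistic (log sample variance) with
# time-varying control limits: an illustrative sketch, not the speaker's charts.
import numpy as np

rng = np.random.default_rng(7)
n = 5                                   # subgroup size
lam, L = 0.2, 3.0                       # smoothing constant and limit multiplier

# In-control behaviour of the monitored statistic, estimated by simulation.
ref = np.log(rng.normal(0, 1, (100_000, n)).var(axis=1, ddof=1))
mu0, sigma0 = ref.mean(), ref.std()

# Phase II data: the process standard deviation doubles from subgroup 31 onwards.
sds = np.r_[np.ones(30), np.full(20, 2.0)]
z = mu0
for t, sd in enumerate(sds, start=1):
    stat = np.log(rng.normal(0, sd, n).var(ddof=1))
    z = lam * stat + (1 - lam) * z      # EWMA update
    half = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    if z - mu0 > half:                  # one-sided: alarm on increased dispersion
        print(f"dispersion increase signalled at subgroup {t}")
        break
```

The `(1 - (1 - lam)**(2t))` factor is what makes the limits time-varying: they are narrower early on, which is also where the fast initial response feature operates.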
http://www.stat.auckland.ac.nz/showperson?firstname=Saddam&surname=Abbasi
Challenges Encountered While Attempting to Reconstruct a Demographic Expansion from Archaeological and Genetic Evidence
Speaker: Dr. Steven Miller
Affiliation: U. Waikato
When: Thursday, 8 April 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
We aim to reconstruct the demographic expansion of humans across Europe during the Neolithic era by simultaneously analysing information from the fields of archaeology and genetics, allowing a coherent interpretation of the evidence.
A Bayesian framework was adopted to facilitate the combination of data from the disparate sources. However, as is often the case with complex systems such as this, the construction of a likelihood function to permit inference about the parameters of interest was determined to be infeasible, if not impossible.
We are attempting to proceed using techniques from the field of indirect inference. We are developing an approach that incorporates a statistical model for simulation outputs given input parameters, where the statistical model acts as a descriptive approximation to the underlying process of interest. This work is still very much in progress, but we can illustrate the potential application of the proposed approach with toy examples.
Random Trajectories, some theory and applications
Speaker: Prof. David Brillinger
Affiliation: U.C. Berkeley
When: Thursday, 8 April 2010, 11:00 am to 12:30 pm
Where: MLT3
Please note that this seminar is one of two. The first runs from 11 am to 12:30 pm on April 7 [MLT3]; the second runs from 11 am to 12:30 pm on April 8 [MLT3].
The paths of moving objects, their trajectories, are basic to classical mechanics. Models developed in mathematical and physical fields may be adapted to explore and model empirical trajectory data that are arising commonly these days.
The methods to be discussed will include: spectrum analysis, ordinary and functional stochastic differential equations, gradient systems, and potential functions. Data from marine biology (elephant seals, monk seals, whale sharks), animal biology (elk and deer), and soccer will be analyzed. Markov and non-Markov processes will be considered as well as the inclusion of explanatory variables in the models.
Prof. Brillinger is an NZIMA Visiting Maclaurin Fellow.
What Seismology, Neuroscience and Seals Have in Common
http://www.stat.berkeley.edu/~brill/
Modelling Inter-Ethnic Partnerships in New Zealand 1981-2006: A Census-Based Approach
Speaker: Lyndon Walker
Affiliation: U. Auckland
When: Thursday, 25 March 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
My thesis examined the patterns of ethnic partnership in New Zealand using national census data from 1981 to 2006. Inter-ethnic partnerships are of interest as they demonstrate the existence of interaction across ethnic boundaries, and are an indication of the social boundaries between ethnic groups. A follow-on effect of inter-ethnic marriage is that children of mixed-ethnicity couples are less likely to define themselves within a single ethnic group, further reducing cultural distinctions between the groups.
The main goals of the research were to examine the historical patterns of ethnic partnership, and then use simulation models to examine the partnership matching process. It advanced the current research on ethnic partnering in New Zealand through its innovative methodology and its content. Previous studies of New Zealand have examined at most two time periods, whereas this study used six full sets of census data from a twenty-five year period. There were two key components to the methodological innovation in this study. The first was the use of log-linear models to examine the patterns in the partnership tables, which had previously only been analysed using proportions. The second was the use of the parallel processing capability of a cluster computing resource to run an evolutionary algorithm which simulated the partnership matching process using unit-level census data of the single people in the Auckland, Wellington and Canterbury regions.
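To illustrate the first methodological component, a log-linear model for a partnership table can be fitted as a Poisson GLM; the sketch below uses an invented 2x2 table and hypothetical group labels, not census data.

```python
# Fitting a log-linear model to a small partnership table via a Poisson GLM:
# a toy sketch of the method, with an invented 2x2 table (not census data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

tab = pd.DataFrame({
    "her":   ["A", "A", "B", "B"],     # hypothetical ethnic groups
    "him":   ["A", "B", "A", "B"],
    "count": [120,  30,  25, 110],
})
# Independence model: log(mu) = row effect + column effect.
fit = smf.glm("count ~ her + him", data=tab,
              family=sm.families.Poisson()).fit()
print(fit.deviance)  # large deviance = departure from independence,
                     # i.e. strong same-group partnering in this toy table
```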
http://www.stat.auckland.ac.nz/showperson?firstname=Lyndon&surname=Walker
Adaptive change in sample size --- a free lunch?
Speaker: Prof. Bruce Turnbull
Affiliation: Cornell University
When: Thursday, 4 March 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
There has been much recent interest in adaptive designs for clinical trials because of the flexibility they offer. However, are they the panacea they have been touted to be?
We examine the efficiency of such designs and conclude that their flexibility can come at a price. Conventional group sequential tests with information monitoring, error spending boundaries and possibly unequally spaced looks may be preferable. However, it is still possible to fall back on flexible methods for re-design should study objectives change unexpectedly in mid-course.
(Joint research with Chris Jennison).
http://people.orie.cornell.edu/~bruce/
What is Quantum Field Theory?
Speaker: Prof. David Brydges
Affiliation: U. British Columbia
When: Tuesday, 16 February 2010, 3:00 pm to 4:00 pm
Where: MedChem
We will take a brief walk through the history of quantum field theory and then branch off into connections with down-to-earth problems involving self-avoiding walk.
The connection to quantum field theory is one way to prove results on the end-to-end distance of the typical self-avoiding walk as a function of the number of steps in the walk.
David Brydges is a former president of the International Association for Mathematical Physics and a Fellow of the Royal Society of Canada, with a Canada Research Chair at the University of British Columbia. He has made fundamental contributions to both mathematical physics and probability, including the development of Wilson's Renormalisation Group and the invention of the Lace Expansion.
Degenerate Random Environments
Speaker: Dr. Mark Holmes
Affiliation: U. Auckland
When: Tuesday, 16 February 2010, 1:30 pm to 2:30 pm
Where: Statistics Seminar Room 222, Science Centre
This is part of a mini-workshop in probability and statistical physics.
We discuss joint work with Prof. Tom Salisbury on certain kinds of random graphs. These models are similar to percolation models but also have important differences, including the notion of "Markolation" versus percolation. We will focus on some of the more interesting examples and see that there are phase transitions as we vary the parameters of the model(s).
http://www.stat.auckland.ac.nz/~mholmes/
Phase transitions in loss networks
Speaker: Dr. Ilze Ziedins/Tong (Benny) Zhu
Affiliation: U. Auckland
When: Tuesday, 16 February 2010, 11:00 am to 12:00 pm
Where: Statistics Seminar Room 222, Science Centre
This talk is part of the mini-workshop in probability and statistical physics. The talk will be given jointly by Ilze and Benny.
Loss networks have been widely used in modelling telecommunications networks. In this talk we consider loss networks with a tree structure, supporting both single-link and multi-link connections. Such networks may exhibit phase transitions as the arrival rates for multi-rate connections increase. We will discuss how the phase transitions are affected by changes in network structure -- changes in capacity, connectivity, some degree of asymmetry, and the addition of controls.
Many-core statistical inference of stochastic processes: a bright computational future
Speaker: A. Prof. Marc Suchard
Affiliation: UCLA
When: Thursday, 28 January 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
Massive numerical integration plagues the statistical inference of partially observed stochastic processes. An important biological example entertains partially observed continuous-time Markov chains (CTMCs) to model molecular sequence evolution. Joint inference of phylogenetic trees and codon-based substitution models of sequence evolution remains computationally impractical. Parallelizing data likelihood calculations is an obvious strategy; however, across a cluster computer, the cost scales with the total number of processing cores, making reasonable run-times expensive to achieve.
To solve this problem, I describe many-core computing algorithms that harness inexpensive graphics processing units (GPUs) for calculation of the likelihood under CTMC models of evolution. High-end GPUs contain hundreds of cores and are low-cost. These novel algorithms are particularly efficient for large state-spaces, including codon models, and large data sets, such as full genome alignments, where we demonstrate up to 150-fold speed-ups. I conclude with a discussion of the future of many-core computing in statistics and touch upon recent experiences with massively large and high-dimensional mixture models.
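For context, the kernel such methods accelerate is the evaluation of CTMC transition probabilities P(t) = expm(Qt) and the likelihood terms built from them. A tiny CPU-side sketch follows, using a Jukes-Cantor-style 4-state rate matrix; this is my illustration of the likelihood ingredients, not the GPU algorithms of the talk.

```python
# The kernel that dominates CTMC likelihood work: transition probabilities
# P(t) = expm(Q t) for a rate matrix Q. Tiny 4-state (nucleotide-like) sketch;
# the talk's point is moving very many such evaluations onto GPUs.
import numpy as np
from scipy.linalg import expm

Q = np.full((4, 4), 0.25)             # Jukes-Cantor-style off-diagonal rates
np.fill_diagonal(Q, -0.75)            # rows sum to zero

t = 0.5                               # branch length
P = expm(Q * t)                       # transition probabilities over the branch
pi = np.full(4, 0.25)                 # stationary distribution

for i, j in [(0, 0), (1, 3)]:         # two observed end-state patterns
    print(pi[i] * P[i, j])            # per-site likelihood contribution
```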
http://www.biomath.ucla.edu/msuchard/
Comparing Trees using Distances, Trees and Multidimensional Scaling
Speaker: Prof. Susan Holmes
Affiliation: Stanford U.
When: Tuesday, 12 January 2010, 4:00 pm to 5:00 pm
Where: Statistics Seminar Room 222, Science Centre
Distances between trees have useful applications in combining phylogenetic trees built from multiple genes and in studying trees built from bootstrap samples and Bayesian posterior distributions.
Until recently, computation of the distance between trees was intractable. We have developed an R package to compute the distance between trees, based on a polynomial-time algorithm by M. Owen and S. Provan.
Using this distance we are able to project trees from data with varying mutation rates, compare hierarchical clustering trees for Microarrays, and study influence functions for the data used to build the trees.
The main tool for using these distances is multidimensional scaling (MDS). Although the original tree metric delivers a treespace that is not Euclidean (it is negatively curved), the Euclidean approximations provided by MDS are very useful for making low-dimensional graphics of tree projections.
(This is joint work with John Chakerian.)
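For readers who want to experiment, classical MDS from a precomputed distance matrix takes only a few lines of numpy; the sketch below uses an invented 4x4 distance matrix rather than actual geodesic tree distances, and is not the R package's code.

```python
# Classical multidimensional scaling from a precomputed tree-distance matrix:
# a minimal sketch (invented 4x4 distances, not geodesic treespace distances).
import numpy as np

D = np.array([[0.0, 2.0, 5.0, 6.0],
              [2.0, 0.0, 5.0, 6.0],
              [5.0, 5.0, 0.0, 3.0],
              [6.0, 6.0, 3.0, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
B = -0.5 * J @ (D ** 2) @ J                # double-centred squared distances
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]             # largest eigenvalues first
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))
print(coords)                              # 2-D map of the four "trees"
```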