UNC Probability Seminar

The probability seminar at UNC is organized by the Department of Statistics and Operations Research. The seminar typically meets at UNC every other week (depending on the availability of speakers), alternating with the Mathematics Department at Duke in the intervening weeks. At UNC, the seminar this semester is scheduled to be held in Hanes 130 on Thursdays. Please see the links on the right for directions on getting here and for other seminars in the Triangle area of interest to probabilists. The speakers for this semester and their abstracts are given below.

Note: This is a partial list of speakers for this semester and it will be updated as more speakers confirm their availability.


Fall 2013

  September 26: Jian Ding (University of Chicago)

Title: Maximum independent sets in random d-regular graphs

Abstract: Satisfaction and optimization problems subject to random constraints are a well-studied area in the theory of computation. These problems also arise naturally in combinatorics, in the study of sparse random graphs. While the values of limiting thresholds have been conjectured for many such models, few have been rigorously established. In this context we study the size of maximum independent sets in random d-regular graphs. We show that for d exceeding a constant d_0, there exist explicit constants A, C depending on d such that the maximum size has constant fluctuations around A*n - C*log n, establishing the one-step replica symmetry breaking heuristics developed by statistical physicists. As an application of our method, we also prove an explicit satisfiability threshold in random regular k-NAE-SAT. This is joint work with Allan Sly and Nike Sun.
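
For readers who want to play with the objects in this abstract, here is a small illustrative sketch (not from the talk); it assumes the networkx library, samples a random d-regular graph, and runs a randomized greedy pass. Greedy produces only a maximal independent set, hence a lower bound on the maximum size A*n - C*log n discussed above.

import random
import networkx as nx

def greedy_independent_set_size(n=1000, d=3, seed=0):
    # Sample a random d-regular graph on n vertices (n*d must be even).
    G = nx.random_regular_graph(d, n, seed=seed)
    order = list(G.nodes())
    random.Random(seed).shuffle(order)
    independent, blocked = set(), set()
    for v in order:
        if v not in blocked:
            independent.add(v)                  # take v into the set ...
            blocked.add(v)
            blocked.update(G.neighbors(v))      # ... and exclude its neighbours
    return len(independent)

# Size of the greedy (maximal) independent set across a few instances:
print([greedy_independent_set_size(seed=s) for s in range(5)])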





  September 30 (Monday): Wojbor A. Woyczynski (Case Western Reserve University)

Title: Nonlinear Evolution Equations Driven by Lévy Processes and Their Queuing Interpretations

Abstract: One of the motivations of our program was to develop an understanding of the interplay between the nonlinear and nonlocal components in evolution equations driven by the infinitesimal generators of processes with jumps, such as Lévy processes and Lévy flights. In the process we also studied probabilistic approximations (propagation of chaos) for several extensions of the classical quasilinear and strongly nonlinear PDEs, including conservation laws, the porous medium equation, and reaction-diffusion-type equations for Darwinian evolutionary population models, where the hydrodynamic limits may still preserve some "background" random noise. Some queuing-theory interpretations will be included.

Location: 120 Hanes Hall.
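
As a toy illustration of the driving noises mentioned in the abstract (again, not part of the talk), the sketch below simulates the simplest pure-jump Lévy process: a compound Poisson process with symmetric, heavy-tailed jump sizes, a crude stand-in for a Lévy flight. The rate and tail parameters are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_path(T=10.0, lam=5.0, alpha=1.5, n_grid=1000):
    # Jump times of a rate-lam Poisson process on [0, T] ...
    n_jumps = rng.poisson(lam * T)
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    # ... with symmetric Pareto-type (heavy-tailed) jump sizes.
    jump_sizes = rng.pareto(alpha, size=n_jumps) * rng.choice([-1.0, 1.0], size=n_jumps)
    t = np.linspace(0.0, T, n_grid)
    X = np.array([jump_sizes[jump_times <= s].sum() for s in t])
    return t, X

t, X = compound_poisson_path()
print("value at time T:", X[-1])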





  October 3: Subhamay Saha (UNC Chapel Hill)

Title: Risk-sensitive control of continuous-time Markov chains

Abstract: We study risk-sensitive control of continuous-time Markov chains taking values in a discrete state space, treating both finite- and infinite-horizon problems. In the finite-horizon problem we characterise the value function via the HJB equation and obtain an optimal Markov control; we do the same for the infinite-horizon discounted-cost case. In the infinite-horizon average-cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.
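
The policy iteration step mentioned at the end of the abstract is easy to illustrate in the simplest non-risk-sensitive setting. The sketch below (an illustration only, not the talk's risk-sensitive algorithm) runs policy iteration for discounted-cost control of a finite CTMC: evaluation solves the linear system (alpha*I - Q_pi)V = c_pi coming from the HJB equation, and improvement minimises c(s,a) + (Q_a V)(s) over actions. The generators and costs below are made-up numbers.

import numpy as np

def policy_iteration(Q, c, alpha=1.0, max_iter=100):
    # Q[a] is the generator under action a (rows sum to zero);
    # c[a, s] is the running cost of action a in state s; alpha > 0 discounts.
    n_actions, n_states, _ = Q.shape
    pi = np.zeros(n_states, dtype=int)               # start with action 0 everywhere
    I = np.eye(n_states)
    for _ in range(max_iter):
        Q_pi = Q[pi, np.arange(n_states), :]         # generator of current policy
        c_pi = c[pi, np.arange(n_states)]
        V = np.linalg.solve(alpha * I - Q_pi, c_pi)  # policy evaluation
        scores = c + Q @ V                           # c(s,a) + (Q_a V)(s)
        pi_new = scores.argmin(axis=0)               # policy improvement
        if np.array_equal(pi_new, pi):
            break
        pi = pi_new
    return pi, V

Q = np.array([[[-1.0, 1.0], [2.0, -2.0]],
              [[-3.0, 3.0], [0.5, -0.5]]])
c = np.array([[1.0, 4.0],
              [2.0, 1.0]])
print(policy_iteration(Q, c))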





  October 24: David Kelly (UNC Chapel Hill)

Title: Homogenisation for multi-dimensional maps and flows

Abstract: We will discuss diffusion limits for multi-dimensional slow-fast systems, in both discrete and continuous time. In particular, we focus on the homogenisation of "deterministic" slow-fast systems, where the role of the "fast, noisy" process is played by a chaotic signal. A significant obstacle in this area is the case in which the fast "noisy" process appears multiplicatively in the slow process. We will show how to overcome this difficulty using rough-path-like machinery. In the case of continuous-time dynamics, one heuristically expects to see Stratonovich integrals appearing in the homogenised equations. This heuristic has been thoroughly verified in one-dimensional systems. However, we show that in higher dimensions it is almost always wrong; instead, one encounters a perturbed version of the Stratonovich integral. This is joint work with Ian Melbourne.
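
Schematically (and suppressing the talk's precise hypotheses), the continuous-time setting concerns slow-fast systems of the form $\dot{x}^{(\epsilon)} = a(x^{(\epsilon)}) + \epsilon^{-1} b(x^{(\epsilon)}) \, v(y^{(\epsilon)})$, where $y^{(\epsilon)}$ is a chaotic signal running on the fast time scale $t/\epsilon^2$ and $v$ has mean zero under its invariant measure. The naive heuristic is that $x^{(\epsilon)}$ converges to the solution of the Stratonovich SDE $dX = a(X)\,dt + b(X) \circ dW$; the result described above says that in dimension greater than one the limit generically carries an additional drift correction on top of the Stratonovich term.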





  November 7: Olav Kallenberg (Auburn University)

Title: Stationary densities and disintegration kernels

Abstract: Given two random measures $\xi \ll \eta$ on a Borel space $S$, jointly stationary under the action of a suitable group $G$, we are interested in finding a measurable process $X$ such that $(\xi, \eta, X)$ is stationary and satisfies $\xi = X \cdot \eta$ a.s. More generally, when $\xi$ and $\eta$ are jointly stationary random measures on $S \times T$ and $S$ with $\xi(\cdot \times T) \ll \eta$ a.s., we may look for a stationary disintegration kernel $\zeta$ from $S$ to $T$ such that $\xi = \eta \otimes \zeta$ a.s. Such problems are surprisingly hard, and their solution may involve methods from real analysis, topological groups, differential geometry, and probability theory. In this talk, I shall try to explain some of the basic ideas.
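
To unpack the notation: $\xi = X \cdot \eta$ abbreviates $\xi(A) = \int_A X(s)\, \eta(ds)$, and $\xi = \eta \otimes \zeta$ abbreviates $\xi(A \times B) = \int_A \zeta(s, B)\, \eta(ds)$ for measurable $A \subseteq S$ and $B \subseteq T$; the first problem is thus the special case of the second in which $T$ is a singleton and the kernel reduces to a density.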





  November 14: Mustafa Khandwawala (UNC Chapel Hill)

Title: Belief propagation for optimal edge-cover in the random complete graph

Abstract: We apply the objective method of Aldous to the problem of finding the minimum-cost edge cover of the complete graph with random, independent and identically distributed edge costs. The limit, as the number of vertices goes to infinity, of the expected minimum cost for this problem is known via a combinatorial approach of Hessler and Wästlund. We provide a proof of this result using the machinery of the objective method and local weak convergence, which was used to prove the zeta(2) limit of the random assignment problem. A proof via the objective method is useful because it provides more information on the nature of the edges incident on a typical root in the minimum-cost edge cover. We further show that a belief propagation algorithm converges asymptotically to the optimal solution. The belief propagation algorithm yields a near-optimal solution with lower complexity than the best known algorithms, which are designed for optimality in worst-case settings.
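
The talk's message-passing scheme is for edge cover; as a flavour of this style of algorithm, here is a minimal min-sum belief propagation sketch for the random assignment problem cited above (the zeta(2) problem), checked against an exact solver from scipy. The cavity update x_{i->j} = min_{k != j} (c_{ik} - y_{k->i}) is the standard one for assignment; the instance size and iteration count are arbitrary.

import numpy as np
from scipy.optimize import linear_sum_assignment

def exclude_min(V):
    # Row-wise minimum of V over all entries except column j, for every j.
    order = np.argsort(V, axis=1)
    rows = np.arange(V.shape[0])
    mn, mn2 = V[rows, order[:, 0]], V[rows, order[:, 1]]
    is_argmin = np.arange(V.shape[1])[None, :] == order[:, 0][:, None]
    return np.where(is_argmin, mn2[:, None], mn[:, None])

def bp_assignment(c, iters=300):
    n = c.shape[0]
    x = np.zeros((n, n))                 # message row i -> column j
    y = np.zeros((n, n))                 # message column j -> row i
    for _ in range(iters):
        x = exclude_min(c - y)           # x[i,j] = min_{k!=j} (c[i,k] - y[i,k])
        y = exclude_min((c - x).T).T     # y[i,j] = min_{k!=i} (c[k,j] - x[k,j])
    return (c - y).argmin(axis=1)        # match row i to argmin_j (c[i,j] - y[i,j])

rng = np.random.default_rng(0)
c = rng.uniform(size=(20, 20))
cols = bp_assignment(c)
rows, opt_cols = linear_sum_assignment(c)
print("BP decoded a permutation:", len(set(cols)) == 20)
print("BP cost:", c[np.arange(20), cols].sum(), " exact cost:", c[rows, opt_cols].sum())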