I am a Professor of Statistical Science at Duke University and a faculty member of the Duke Institute for Brain Sciences. Statistics is an academic discipline founded on not one but two philosophical cornerstones: the Bayesian and the Frequentist principles of quantifying uncertainty. Overshadowing the complementary qualities of these two principles, their conflicting aspects have often caused divisions in the practice of statistics for scientific use. By developing new statistical methods rooted in Bayesian inference and examining their Frequentist behavior, I have made a modest attempt at straddling the fault lines of these divisions.
Brief Bio. I received my statistics education from the Indian Statistical Institute, Kolkata (BStat 2000, MStat 2002) and completed my doctoral research at Purdue University (PhD 2006) under the supervision of JK Ghosh. My doctoral thesis won the Leonard J Savage Award (Theory) from ISBA. I spent the next three years at Carnegie Mellon University as the Morris H DeGroot Visiting Assistant Professor, where my statistical thinking and research interests were deeply influenced by Rob Kass and Jay Kadane. I joined Duke University in 2009 as an Assistant Professor and was promoted to Associate Professor in 2016 and to Professor in 2022. I received the Young Statistician Award from IISA in 2016. I have been a member of ISBA, ASA, IMS, and IISA, and over the years I have served in various elected roles in these academic societies. Click here to download my CV as a PDF file.
Selected Publications. Below is a partial list of papers I have authored or coauthored, with a selection bias toward more recent work. A complete list is available on my Google Scholar profile.
I work on Nonparametric Bayes, which grapples with the seemingly impossible task of extracting information from limited data on infinitely many unknown quantities. Such tasks are made feasible by supposing the unknown quantities arrange themselves into neat geometric shapes such as curves or surfaces, but they still prove tricky to both the subjective and the objective Bayesian viewpoints on the question of prior allocation. The mathematics of objective prior allocation simply breaks down when faced with infinite dimensional geometry, while contemplating or communicating subjective considerations on an infinite number of items quickly overwhelms the human mind. But useful solutions exist in the middle ground of the subjective–objective divide, in the form of intersubjective Bayesian considerations aided by frequentist calculations.
Bayesian Smoothing and Posterior Consistency. Although data smoothing has existed for over three hundred years, and formal statistical treatments have existed since at least the 1960s, Nonparametric Bayes has made a fundamental contribution to the methodology by resolving a theoretical bottleneck: how to adjust the degree of smoothing so that information from any single data point is extracted away to an adequately sized neighboring space but not to regions at great distance. Bayesian solutions to this just-right smoothing problem have evolved on the theoretical foundation of posterior consistency and a more in-depth variation of it known as optimal posterior contraction, a mathematical construct for evaluating the asymptotic concentration rate of the posterior distribution against benchmarks set by information theoretic limits on statistical learning rates. My research has played a role in establishing the narrative that posterior consistency and optimal posterior contraction can be guaranteed by at least two distinct strategies for prior allocation on smooth function spaces: either by using an infinite Dirichlet process mixture of smooth kernels or by using a smooth Gaussian process.
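The Gaussian process route to Bayesian smoothing can be sketched in a few lines. The Python toy below (illustrative only; the kernel, length scale, and noise level are arbitrary choices for exposition, not settings from my papers) computes the posterior mean of a GP regression with a squared-exponential kernel on simulated data:

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length_scale=0.1, noise_sd=0.1):
    """Posterior mean of a zero-mean GP prior with a squared-exponential
    kernel, observed under i.i.d. Gaussian noise."""
    def kern(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)
    # Posterior mean: K(x*, X) [K(X, X) + sigma^2 I]^{-1} y
    K = kern(x_train, x_train) + noise_sd ** 2 * np.eye(len(x_train))
    return kern(x_test, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
fit = gp_posterior_mean(x, y, x)  # smooth reconstruction of the sine curve
```

Here the length scale plays exactly the role described above: it controls how far information from each data point is allowed to spread.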
Quantile Regression. Linear quantile models allow scientists to analyze how predictor influence varies across response quantiles. Such analyses, often of significant scientific implication in the economic and environmental sciences, require combining separate quantile regression fits from every quantile level of interest, an act of aggregation that is not founded on a coherent probabilistic model. This theoretical gap leads to legitimacy issues such as quantile crossing and quantile cherry-picking, and statistical concerns such as poor standard error estimation and limited model flexibility. My work on quantile regression has offered a comprehensive solution to this problem, enabling statistical inference, prediction, model enhancement and model selection [3,4,5]. A major breakthrough of this work has been a lossless reparametrization of a stack of non-crossing (quantile) hyperplanes in terms of unconstrained smooth functions which are directly amenable to regularized likelihood based statistical estimation. The new joint estimation framework has opened doors to many important advancements of the quantile regression analysis technique to address additional data complications, e.g., censoring, spatiotemporal or longitudinal noise correlation, hierarchical structures and so on.
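For readers new to the area, the classical single-level approach rests on the check (pinball) loss of Koenker and Bassett: minimizing it over a constant recovers the corresponding sample quantile. The Python sketch below is a textbook illustration of that building block (not the joint estimation framework described above):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss, averaged over residuals u."""
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def fit_quantile(y, tau):
    """Estimate the tau-th quantile of y by minimizing the empirical check loss."""
    res = minimize(lambda b: check_loss(y - b[0], tau),
                   x0=[np.median(y)], method="Nelder-Mead")
    return res.x[0]

rng = np.random.default_rng(1)
y = rng.exponential(size=2000)     # Exp(1): true 0.9 quantile is log(10)
q90 = fit_quantile(y, 0.9)         # close to the 0.9 sample quantile
```

Fitting each quantile level this way, separately, is precisely the practice whose lack of a coherent joint model causes the crossing problem.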
Semiparametric Density Estimation. Density estimation is a classic smoothing exercise that is mostly treated as a visualization tool. But it can substantially improve data analysis when appropriately incorporated within a hierarchical model. In one line of work we argue that one can gain better accuracy and reliability in estimating the tail index of a heavy tailed distribution by fitting a suitable semiparametric density model to the entire data histogram, rather than fitting a parametric model only to thresholded data as is commonly done. Another paper shows that very accurate sufficient dimension reduction, along with dimensionality selection, can be performed within a semiparametric conditional density estimation framework. [3-4] establish that substantial gains are made in power and false positive rate control in large-scale significance testing when the non-null density of the test statistic is estimated from the data.
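As a baseline for what "density estimation as smoothing" means, here is a minimal kernel density estimate in Python (a generic illustration on simulated Gaussian data; the semiparametric models in the papers above are considerably more structured than this):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(4)
x = rng.standard_normal(1000)

# Kernel density estimate with scipy's default (Scott's rule) bandwidth.
kde = gaussian_kde(x)
grid = np.linspace(-3.0, 3.0, 61)
est = kde(grid)

# Compare against the true standard normal density on the same grid.
true_pdf = norm.pdf(grid)
max_err = np.max(np.abs(est - true_pdf))
```

A purely nonparametric estimate like this is reliable in the bulk of the data but degrades in the tails, which is one motivation for the semiparametric tail modeling described above.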
Starting in the late nineteenth century, direct recordings of neuronal electric discharges, and their mathematical and statistical analyses, have played a pivotal role in understanding how the brain and the nervous system function in a variety of sensory and cognitive situations. However, many questions remain unanswered. In particular, it is still a mystery how we perceive multiple objects present in a natural sensory scene. Sensory neurons are broadly tuned and are activated by any of several distinguishable items when presented in isolation. In scenes consisting of several such items, how is the sensory task load divided within a neural population so that information about each item can be retained? In collaboration with Jennifer Groh, we have been examining a radically new hypothesis: that the brain might solve this problem via dynamic multiplexing, with each neuron juggling over time the representational tasks it is capable of performing.
We have recently completed Phase 1 of this research, in which single cell recordings revealed that throughout sensory hierarchies (from the auditory midbrain to primary visual cortex and a visual cortical face area), neurons dynamically alternate between encoding each stimulus present in a two-item scene, lending credibility to our new hypothesis of multiplexing [1-3]. My statistics research has contributed intimately to this collaborative effort by designing rigorous statistical analysis frameworks that could potentially falsify the hypothesis. Traditionally, statistical analyses of neuronal electric discharges, a.k.a. spike trains, proceed by aggregating across time (response window) and trials (replication). Detecting neuronal task juggling and quantifying its probabilistic nature have required new statistical methodology: mixture models to encode heterogeneity of task selection, and rigorous inverse probability based tests of more than two competing hypotheses via Bayes factor calculation. A truly novel methodological development has been our Dynamic Admixture Point Process (DAPP) model for an in-depth analysis of the temporal dynamics of task selection. DAPP offers the right statistical framework for answering a fundamental question: for a neuron whose overall firing rate under double stimuli is an average of its single stimulus firing rates, does the neuron, when viewed at high temporal resolution, truly average the two signals, or does it fluctuate between the two tasks? This question could not be answered faithfully with existing hidden Markov model based analysis methods, which are good at modeling fluctuation but cannot rigorously assess how fluctuation stacks up against the possibility of true averaging at high temporal resolution.
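The core statistical contrast, true averaging versus trial-by-trial switching, can be caricatured with whole-trial spike counts. The Python toy below is my construction for exposition only (the published analyses use far richer mixture and point-process models such as DAPP, and the rates here are hypothetical): it pits a single Poisson at the midpoint rate against a 50/50 mixture of the two single-stimulus rates.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
lam_a, lam_b = 5.0, 20.0  # hypothetical single-stimulus firing rates

# Simulate a neuron that picks one stimulus to encode on each trial.
n_trials = 300
pick_a = rng.random(n_trials) < 0.5
counts = rng.poisson(np.where(pick_a, lam_a, lam_b))

# Log-likelihood under "true averaging": one Poisson at the midpoint rate.
ll_avg = poisson.logpmf(counts, (lam_a + lam_b) / 2.0).sum()

# Log-likelihood under "switching": an equal mixture of the two rates.
mix_pmf = 0.5 * poisson.pmf(counts, lam_a) + 0.5 * poisson.pmf(counts, lam_b)
ll_mix = np.log(mix_pmf).sum()
```

With rates this far apart, the bimodal count histogram decisively favors the mixture. The hard part, which DAPP addresses, is resolving such fluctuation at sub-trial time scales rather than across trials.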
Our Phase 1 examination has not falsified our hypothesis of multiplexing, despite carefully designed experiments and statistical analyses. But we are still far from producing strong evidence that multiplexing is a primary computing tool the brain employs in representing multiple items in a crowded sensory scene. We are currently working on Phase 2, with array based spike train recordings from populations of neurons, to ascertain the significance of neural fluctuation in solving the multi-item perception problem. A particular computing theory we are currently testing is that neurons within a homogeneous population may coordinate with one another in temporally dividing the task load. Our current statistics research is geared toward developing and testing Bayesian inferential models that can identify such organizational structures of functional coordination from array based recordings. Our approach combines several novel statistical modeling elements, coupling stochastic block models, traditionally used for network analysis, with sparse factor models, typically used for learning the correlations of high dimensional recordings.
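To fix ideas about the block-structure ingredient, here is a generic Python sketch (with arbitrary parameters of my choosing; this is not our model for the neural recordings, which couples such block structure with sparse factor models) that simulates a two-community stochastic block model and recovers the communities by simple spectral bisection:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
labels = np.repeat([0, 1], n // 2)  # two equal-sized communities

# Edge probabilities: dense within a block, sparse across blocks.
p_in, p_out = 0.5, 0.05
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric adjacency, no self-loops

# Spectral bisection: split on the sign of the eigenvector paired with
# the second-largest eigenvalue of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
est = (vecs[:, -2] > 0).astype(int)

# Agreement with the truth, up to relabeling of the two communities.
accuracy = max(np.mean(est == labels), np.mean(est != labels))
```

In our setting the "communities" would correspond to groups of neurons that coordinate their fluctuations, and the inference is Bayesian rather than spectral.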
Over the years, I have taught a number of theory heavy core and elective courses at the undergraduate and graduate levels (e.g., STA 250 and STA 732). My approach to teaching statistics, at least in recent years, has focused on exploring and understanding what makes statistics an academic discipline of its own. This is not a trivial question in today's world, where data awareness and data science/analytics skills are far more pervasive than could be imagined even twenty years ago. Teaching statistics as a cookbook of data analysis methods was never exciting, but now it feels woefully outdated. A more elegant view of statistics, as a branch of applied mathematics for making decisions under uncertainty, is more useful and reassuring. But it does not quite capture the full breadth of what it means to think like a statistician. For that, one needs to recognize that statistics is founded on two very well defined principles of how to use the language of probability to quantify and communicate evidence in the face of uncertainty. It is imperative that in teaching statistics we expose students to historical accounts of how statistical thinking has evolved over the centuries, and that budding statisticians come to know, appreciate, and critically apply in their own work both the conflicting and the complementary aspects of the Bayesian and Frequentist principles. Below is a partial list of courses I have taught recently.
A lot of my work involves scientific computing with Bayesian models. I mostly write code in the R programming language, using compiled C code in the background for speed-ups in iterative computation, especially for complex Markov chain Monte Carlo based computation. I have authored two R packages that are hosted on The Comprehensive R Archive Network (CRAN).
Additional code associated with other papers is available here; however, using it will require some additional effort from the user. Time permitting, I will be happy to offer assistance with implementation or customization.
I am looking for a postdoctoral researcher to work with me and Professor Jennifer Groh on the neuroscience research outlined above, focusing on the statistical theory and methodology. Please find the job posting here and apply ASAP! This is one of two postdoc positions to be funded by our joint NIH award on "Information Preservation in Neural Codes". Please see here for the other position, with a neuroscience lead. Duke University and the Triangle area are great places to live and work!
Note: Students interested in graduate research under my supervision should apply to the MSS or PhD programs offered by Duke Statistical Science. Graduate students are not recruited directly by faculty; instead, they are admitted to these programs through an admissions process approved by the department and the university.