8:00-8:30 Breakfast & Registration, Student Success Center 229
Opening Remarks
Student Success Center 229
Chair: Yat Tin Chow (UCR)
8:30-8:45 Welcome and Opening Remarks
- Peter Atkinson (CNAS Dean, UCR)
- Rodolfo Torres (Vice Chancellor for RED, UCR)
- Amit Roy-Chowdhury & Vassilis Tsotras (RAISE Co-Directors, UCR)
- Mark Alber (ICQMB Director, UCR)
Plenary Session I
Student Success Center 229
Chair: Mark Alber (UCR)
8:45-9:25 Bo Li (UCSD)
Computational Investigation of Spatiotemporal Dynamics of a Growing Bacterial Colony
9:30-10:10 Hrushikesh Mhaskar (CGU)
Revisiting the Theory of Machine Learning (AI Special Session)
8:45-9:25 Bo Li (UCSD)
Computational Investigation of Spatiotemporal Dynamics of a Growing Bacterial Colony
The growth of a bacterial colony on a hard agar surface exhibits striking patterns and robust expansion kinetics, despite complex interactions among millions of cells under varying chemical gradients. To probe the spatiotemporal dynamics of such a growing colony, we develop a theory of cellular mechanical interactions and construct a hybrid three-dimensional simulation model. This model consists of an agent-based description of the growth, division, and movement of individual cells, and a set of reaction-diffusion equations for metabolic dynamics. Our large-scale simulations and analysis predict that the mechanical interactions, metabolic gradients (particularly nutrient depletion), and cell maintenance together control the kinetics of a growing colony, in good agreement with experiment. Our study is a first step toward detailed computational investigation of bacterial biofilms. This is joint work mainly with Mya Warren, Paul Sun, Harish Kannan, Jia-Jia Dong, and Terence Hwa.
9:30-10:10 Hrushikesh Mhaskar (CGU)
Revisiting the Theory of Machine Learning
A central problem of machine learning is the following. Given data of the form $\{(y_i, f (y_i) + \epsilon_i)\}_{i=1}^M$, where the $y_i$'s are drawn randomly from an unknown (marginal) distribution $\mu^*$ and the $\epsilon_i$ are random noise variables from another unknown distribution, find an approximation to the unknown function $f$, and estimate the error in terms of $M$. The approximation is typically accomplished by neural/RBF/kernel networks, where the number of nonlinear units is determined on the basis of an estimate on the degree of approximation, but the actual approximation is computed using an optimization algorithm. Although this paradigm is obviously extremely successful, we point out a number of perceived theoretical shortcomings of this paradigm, a perception reinforced by some recent observations about deep learning. We describe our efforts to overcome these shortcomings and develop a more direct and elegant approach based on the principles of approximation theory and harmonic analysis.
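The data model described above can be sketched numerically. The following is an illustrative example only: the target function, noise level, and sample size are chosen here for demonstration, and a least-squares polynomial stands in for the neural/RBF/kernel networks the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(y):
    """The 'unknown' target function (chosen here only for illustration)."""
    return np.sin(2 * np.pi * y)

# Draw M samples y_i from the marginal distribution (here: uniform on [0,1])
# and observe f(y_i) + eps_i with Gaussian noise eps_i.
M = 200
y = rng.random(M)
obs = f(y) + rng.normal(0.0, 0.1, size=M)

# Fit a degree-7 polynomial by least squares -- a simple stand-in for the
# networks mentioned in the abstract.
coeffs = np.polyfit(y, obs, deg=7)

# Estimate the approximation error in sup norm on a fine grid.
grid = np.linspace(0, 1, 1001)
err = np.max(np.abs(np.polyval(coeffs, grid) - f(grid)))
print(f"sup-norm error with M={M}: {err:.3f}")
```

Increasing M drives the noise-induced part of the error down, which is the kind of error-versus-M estimate the abstract refers to.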
10:15-10:30 Coffee Break, Student Success Center 229
Morning Contributed Sessions
(Contributed Session I)
Track 1
Student Success Center 229
Chair: German Enciso (UCI)
10:30-10:50 German Enciso
Statistical Analysis Supports Size Control Mechanism of Chlamydia Development
10:50-11:10 Tung Nguyen
An Epidemic Model on Hypergraphs With Two Modes of Transmission
11:10-11:30 Wei Zhao
Decoding Intercellular Communication in Health and Disease through Single-Cell and Spatial Omics Data
11:30-11:50 Chase Brown
Boundary Accumulation of Active Rods in 3D Microchannels With Elliptical Cross-section
10:30-10:50 German Enciso
Statistical Analysis Supports Size Control Mechanism of Chlamydia Development
Chlamydia trachomatis is a bacterium that infects mammalian cells and reproduces inside them before invading other cells. In order to leave its mammalian host cell, it must first convert into a spore-like form called the elementary body (EB), which is able to survive outside the host but cannot reproduce. Conversion into this form is a crucial cell fate decision for Chlamydia, and in this work we aim to compare different possible mechanisms that might regulate and drive this decision. Specifically, we use a statistical analysis of the timing of conversion to rule out a broad series of possible mechanisms. We argue that any mechanism that has an external input for conversion will tend to feature a negative correlation between unconverted and converted Chlamydia forms. However, the experimental data indicate a positive correlation between the forms. We conclude that a mechanism internal to each Chlamydia is more likely at work. This supports a so-called size control mechanism, where Chlamydia converts once it reaches a sufficiently small size after multiple divisions. An additional analysis of the noise in the number of EB forms for each inclusion further supports this conclusion.
10:50-11:10 Tung Nguyen
An Epidemic Model on Hypergraphs With Two Modes of Transmission
The spread of infectious diseases can be naturally modeled on a graph whose nodes represent individuals and edges represent pairwise interactions through which the disease can propagate. As human interactions can happen in groups larger than two (higher-order interactions), one can use a hypergraph, an extension of a graph in which a hyperedge can join any number of nodes, to capture both pairwise and higher-order interactions. In this project, we consider the spread of respiratory viruses, such as COVID-19, on hypergraphs with two modes of transmission. They can spread via large droplets in direct contact, which we associate with the pairwise interactions, or via infected aerosols in the environment, which we associate with the higher-order interactions. We derive a mean-field approximation of our model and obtain the threshold condition which characterizes whether the disease vanishes or becomes endemic. We also use numerical simulations to examine the impact of various factors on the disease dynamics, such as the number, size, and heterogeneity (in degree or in size) of the hyperedges.
11:10-11:30 Wei Zhao
Decoding Intercellular Communication in Health and Disease through Single-Cell and Spatial Omics Data
Cell-cell communication is fundamental to tissue organization and function, and its dysregulation underlies a wide spectrum of diseases. In this talk, we explore how single-cell and spatial omics technologies can reveal these signaling networks across different biological systems. We begin with NeuronChat, a specialized tool for mapping neuron-neuron communication using single-cell transcriptomics and a curated database of neural signaling interactions. This method captures context-specific and spatially organized neural networks, including changes in disorders like autism. We then examine liver aging and insulin resistance using a combination of single-cell RNA-seq, ATAC-seq, and spatial transcriptomics. This integrative approach reveals shifts in liver zonation, changes in cell populations, and reduced HGF signaling, insights that are further supported by therapeutic rescue experiments. Finally, we investigate cell-cell signaling in inflammatory and fibrotic diseases, including vulvar lichen sclerosus and inclusion body myositis, identifying disrupted immune-stromal interactions and activation of stress and fibrosis pathways. Together, these studies demonstrate how tailored computational tools and multimodal single-cell datasets can decode key communication signals in both health and disease.
11:30-11:50 Chase Brown
Boundary Accumulation of Active Rods in 3D Microchannels With Elliptical Cross-section
Many motile microorganisms and bio-mimetic micro-particles have been successfully modeled as active rods - elongated bodies capable of self-propulsion. A hallmark of active rod dynamics under confinement is their tendency to accumulate at the walls. Unlike passive particles, which typically sediment and cease their motion at the wall, accumulated active rods continue to move along the wall, reorient, and may even escape from it, resulting in complex and non-trivial distributions. In this talk, we examine the effects of wall curvature on active rod distribution by studying elliptical perturbations of tube-like microchannels (cylindrical confinement with a circular cross-section), a geometry common in both nature and various applications. By developing a three-dimensional computational model for individual active rods and conducting Monte Carlo simulations, we found that active rods tend to concentrate at locations with the highest wall curvature. We then investigated how the distribution of active rod accumulation depends on the background flow, orientation diffusion, elliptical eccentricity, and wall torque. Finally, we used a reduced mathematical model to explain why active rods preferentially accumulate at high-curvature locations. Additionally, we discovered that the wall torque creates a bifurcation in the stability of a rod's angular dynamics at the elliptical wall. This is joint work with S. D. Ryan (Cleveland State U.) and M. Potomkin (UCR).
Track 2
Student Success Center 216
Chair: Mark Huber (CMC)
10:30-10:50 Mark Huber
Using the Randomness Recycler for Faster Perfect Sampling
10:50-11:10 Henry Schellhorn
Free Market on the Freeway: a Nonlinear Stochastic Hyperbolic PDE Approach
11:10-11:30 Bixing Qiao
A New Approach for the Kyle-Back Strategic Insider Equilibrium Problem
11:30-11:50 Yifan Cao
Semicircle Law for High Dimensional Geometric Random Graphs
10:30-10:50 Mark Huber
Using the Randomness Recycler for Faster Perfect Sampling
The Randomness Recycler is an extension of acceptance/rejection sampling that attempts to reuse as much as possible of the generated random variables when rejection occurs. The method will be illustrated with several examples, including drawing uniformly from a circle without using functions more complex than squaring, and generating rolls of a fair die with $ n $ sides using random bits in an optimal fashion.
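The circle example rests on the classical acceptance/rejection step that the Randomness Recycler extends. A minimal sketch of that baseline step (without the recycling itself, which is the subject of the talk), using nothing more complex than squaring:

```python
import random

random.seed(1)

def uniform_in_circle():
    """Draw a point uniformly from the unit disk by plain acceptance/rejection.
    Only multiplication (squaring) is needed to test acceptance.  The
    Randomness Recycler improves on this by reusing randomness from
    rejected draws; that refinement is not shown here."""
    while True:
        x = 2.0 * random.random() - 1.0
        y = 2.0 * random.random() - 1.0
        if x * x + y * y <= 1.0:   # inside the disk: accept
            return x, y
        # reject: in plain A/R, both random numbers are simply discarded

pts = [uniform_in_circle() for _ in range(1000)]
```

On average, 4/pi of the square's draws are needed per accepted point; the recycling idea targets exactly the randomness lost in the discarded draws.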
10:50-11:10 Henry Schellhorn
Free Market on the Freeway: a Nonlinear Stochastic Hyperbolic PDE Approach
This paper studies a trading mechanism allowing autonomous cars to change lanes on a freeway in congested traffic. When no room is available on the adjacent lane, the mechanism enables a car to change lanes if it pays a small fee to the car occupying the space it is moving into, compensating it for slowing down. We model the impact of this mechanism as a market for speed, which is free in the sense that it can be implemented by peer-to-peer technology without the intervention of the freeway operator. The term market is warranted by the simultaneous presence of multiple lane buyers and lane sellers. We use the term freeway to emphasize that we are not modeling toll-lanes. We develop conditions that the price system should satisfy. The presence of uncertainty in traffic density as well as price makes us model them as the solution of a system of stochastic partial differential equations. Extensions to other traffic maneuvers are discussed.
11:10-11:30 Bixing Qiao
A New Approach for the Kyle-Back Strategic Insider Equilibrium Problem
The Kyle-Back (KB) model has been widely studied in the mathematical finance literature. Most works assume a Markovian structure for pricing rules in order to use the PDE approach. We provide a new formulation for the KB model that does not assume a Markovian structure for the pricing function of the market maker. We find an example in which only a non-Markovian pricing function exists for equilibrium. When the insider's signal follows a discrete distribution, the equilibrium of the KB model can be characterized by the solution of forward-backward stochastic differential equations (FBSDEs). We further study the set value of the insider payoff. By applying the so-called duality property, we can characterize the set value of the insider payoff through an optimal control problem and further apply the vast results in optimal control theory. The well-known bridge strategy with linear pricing function is an element of our set value.
11:30-11:50 Yifan Cao
Semicircle Law for High Dimensional Geometric Random Graphs
The random geometric graph $G(n, d, p)$ is formed by sampling $n$ i.i.d. vectors $\{x_i\}_{i=1}^{n}$ uniformly from a $d$-dimensional unit sphere. Edges are added between pairs of vectors if $\langle x_i, x_j \rangle \geq \tau$, where the threshold $\tau$ is chosen so that $p$ is the probability of an edge between two vertices. This random graph model exhibits dependency among edges and is an example of nonlinear random matrices. Let $A$ denote the adjacency matrix of $G$. We show that when $d \gg np\log^{2}(\frac{1}{p})$ and $np\to\infty$, the limiting spectral distribution of $\frac{A}{\sqrt{np(1-p)}}$ is the semicircle law. To prove the semicircle law with the moment method, we develop a novel graph decomposition algorithm to bound the contribution of each subgraph appearing in the normalized trace expansion, which refines the estimates in Liu et al. (2023) and Li and Schramm (2023).
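The model $G(n, d, p)$ can be simulated directly. The sketch below uses small illustrative sizes (far from the asymptotic regime of the theorem), with $\tau = 0$ so that $p = 1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample n i.i.d. vectors uniformly from the unit sphere in R^d
# (normalized Gaussians are uniform on the sphere).
n, d = 300, 2000
x = rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)

# Connect i ~ j when <x_i, x_j> >= tau.  With tau = 0, each edge appears
# with probability p = 1/2.
tau, p = 0.0, 0.5
gram = x @ x.T
upper = np.triu((gram >= tau).astype(float), 1)
A = upper + upper.T           # symmetric adjacency matrix, zero diagonal

# Spectrum of the normalized adjacency matrix: in the regime
# d >> n p log^2(1/p), the bulk fills out the semicircle on [-2, 2]
# (plus a single large Perron eigenvalue from the nonzero mean).
eigs = np.linalg.eigvalsh(A / np.sqrt(n * p * (1 - p)))
```

A histogram of `eigs` (excluding the top outlier) visually matches the semicircle density even at these modest sizes.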
Track 3
Student Success Center 235
Chair: Vincent Martinez (CUNY Hunter)
10:30-10:50 Vincent Martinez
Parameter Reconstruction in Hydrodynamic Equations
10:50-11:10 Jimmie Adriazola
Computer Assisted Discovery of Integrability via SILO: Sparse Identification of Lax Operators
11:10-11:30 Scott Little
Koopman Tori, Knots and Elliptic Curves
11:30-11:50 Hyeonghun Kim
Physically Consistent Predictive Reduced-order Modeling by Enhancing Operator Inference With State Constraints
10:30-10:50 Vincent Martinez
Parameter Reconstruction in Hydrodynamic Equations
This talk will discuss recent developments on the problem of reconstructing unknown parameters in dynamical systems, specifically those arising in the context of hydrodynamics. In particular, we will present an overview of various algorithms that can provably reconstruct unknown parameters in a large class of nonlinear systems. These results will be discussed in the paradigmatic context of the Navier-Stokes equations, where rigorous results have been established by the author and collaborators.
10:50-11:10 Jimmie Adriazola
Computer Assisted Discovery of Integrability via SILO: Sparse Identification of Lax Operators
Integrability in dynamical systems is a mathematically rich topic and often the starting point for analyzing more complex, nonintegrable equations. However, it is difficult even to recognize whether a given system is integrable before investing effort into studying it. We therefore formulate the automated discovery of integrability in dynamical systems as a symbolic regression problem. Loosely speaking, we seek to maximize the compatibility between the known Hamiltonian of the system and a pair of matrix/differential operators known as a Lax pair. Our approach is tested on a variety of systems ranging from nonlinear oscillators to canonical Hamiltonian PDEs. We test the robustness of the framework against nonintegrable perturbations and, in all examples, reliably confirm or deny integrability. Moreover, using a thresholded regularization to promote sparsity, we recover expected and discover new Lax pairs despite wide hypotheses on the operators. We will discuss future directions for adapting our framework toward further automated discoveries in mathematical physics.
11:10-11:30 Scott Little
Koopman Tori, Knots and Elliptic Curves
Koopman Operator Theory (KOT) is an ergodic theory and application of nonlinear dynamics based on the elegant theorem developed by Koopman and von Neumann in the 1930s. More recently, KOT has been incorporated into dynamic and chaotic systems theory by Mezic. The linear Koopman Operator Theory includes a state space of infinite dimensions to control a finite-dimensional nonlinear dynamic system. Stochastic string theory is referred to as "postmodern" string theory. The strings are treated not as discrete objects but as probabilistic spaces to account for quantum uncertainties and nonlinear effects. The first and second papers in this series contained a proof relating the Anti-de Sitter Spacetime Conformal Field Theory Correspondence, or AdS/CFT duality, to a Feynman-Kac stochastic string solution in Mellin transform space. The third paper focused on a proof correlating the stochastic Feynman-Kac AdS/CFT solution to the Boltzmann Machine. The fourth paper is a proof of the KOT KvN integral coupled to the previous Feynman-Kac stochastic string solutions. The fifth paper is a correlation between the Koopman operator and the AdS/CFT and Boltzmann Machine analogy, mapped to KAM-tori-wrapped 2D-branes of the Dirac-Born-Infeld magnetic field string action. In this paper we correlate elliptic curves to Koopman D2-DBI branes, using KAM tori on complex space to define elliptic curves. The correlation between complex tori in BSD elliptic curves and the Koopman DBI D2-brane is the focus of this work. The knot-string nodes on the tori are the meromorphic Weierstrass complex poles (singularities) on the tori. Applications of this theory are quantum gravity, fluid dynamics, cloaking, cosmic strings/gravity waves, gamma rays/pulsars, machine learning/AI, neural networks/Boltzmann Machines, the Feynman-Kac path integral, the Schrödinger equation, Black-Scholes economics/finance, holographic AdS/CFT, chaos/fractals, and Koopman nonlinear chaotic complexity.
11:30-11:50 Hyeonghun Kim
Physically Consistent Predictive Reduced-order Modeling by Enhancing Operator Inference With State Constraints
Numerical simulations of complex multiphysics systems, such as the char combustion considered herein, yield numerous state variables that inherently exhibit physical constraints. This talk presents a new approach to augment Operator Inference, a methodology within scientific machine learning that learns from data a low-dimensional representation of a high-dimensional system governed by nonlinear partial differential equations, by embedding such state constraints in the reduced-order model predictions. In the model learning process, we propose a new way to choose regularization hyperparameters based on a key performance indicator. Since embedding state constraints improves the stability of the Operator Inference reduced-order model, we compare the proposed state-constraints-embedded Operator Inference with the standard Operator Inference and other stability-enhancing approaches. For an application to char combustion, we demonstrate that the proposed approach yields state predictions superior to the other methods regarding stability and accuracy. It extrapolates over 200% past the training regime while being computationally efficient and physically consistent.
Track 4
Student Success Center 308
Chair: Georg Menz (UCLA)
10:30-10:50 Georg Menz
Optimal Re-balancing of Portfolios via Non-conservative Optimal Transport
10:50-11:10 Mohandas Pillai
Global, Non-scattering Solutions to the Critical, Quintic, Focusing Semilinear Wave Equation
11:10-11:30 Linfeng Li
Nodal Set for Gevrey Regular Parabolic Equations
11:30-11:50 Therese Landry
Existence Theorems for PDEs Modeling Erosion and the Optimal Transportation of Sediment
10:30-10:50 Georg Menz
Optimal Re-balancing of Portfolios via Non-conservative Optimal Transport
We solve the problem of optimal re-balancing of a portfolio by introducing the notion of non-conservative optimal transport. As in optimal transport, the aim is to minimize a cost functional; however, the target measure might lose or gain mass during the transport. In the continuous setting we deduce the duality relation and show the existence of an optimal non-conservative transport map under general assumptions.
10:50-11:10 Mohandas Pillai
Global, Non-scattering Solutions to the Critical, Quintic, Focusing Semilinear Wave Equation
We consider the quintic, focusing semilinear wave equation on R^(1+3), in the radially symmetric setting. Using methods inspired by matched asymptotic expansions, we construct infinite time blow-up, relaxation, and intermediate types of solutions. More precisely, we first define an admissible class of time-dependent length scales, which includes a symbol class of functions. Then, we construct solutions which can be decomposed, for all sufficiently large time, into an Aubin-Talenti (soliton) solution, re-scaled by an admissible length scale, plus radiation (which solves the free 3-dimensional wave equation), plus corrections which decay as time approaches infinity. The solutions include infinite time blow-up and relaxation with rates including, but not limited to, positive and negative powers of time, with exponents sufficiently small in absolute value. We also obtain solutions whose soliton component has oscillatory length scales, including ones which converge to zero along one sequence of times approaching infinity, but which diverge to infinity along another such sequence of times.
11:10-11:30 Linfeng Li
Nodal Set for Gevrey Regular Parabolic Equations
We consider the size of the nodal set of the solution of the second order parabolic-type equation with Gevrey regular coefficients. We provide an upper bound as a function of time. The dependence agrees with a sharp upper bound when the coefficients are analytic.
11:30-11:50 Therese Landry
Existence Theorems for PDEs Modeling Erosion and the Optimal Transportation of Sediment
We prove the existence of global weak solutions to equations describing sediment flow in the evolution of fluvial land-surfaces with constant water depth. These equations describe the so-called transport-limited situation, where all the sediment can be transported away given enough water. This is in contrast to the detachment-limited situation, where we must wait for rock to weather (to sediment) before it can be transported away. Earlier work shows that these equations describe the optimal transport of sediment and the evolution of the surfaces in optimal transport theory.
Track 5
Student Success Center 316
Chair: Jiayin Lu (UCLA)
10:30-10:50 Jiayin Lu
Delaunay Triangulation and Voronoi Tessellation, and Their Applications in Geometry Meshing
10:50-11:10 Dohyeon Kim
Uniform-in-time Mean Field Limit of Consensus-Based Optimization
11:10-11:30 Satish Chandran
On Structure-Preserving Discretization for Fokker-Planck Equations: A Discrete Markov Chain Perspective
11:30-11:50 David Beers
Level Sets of Persistent Homology for Point Clouds
10:30-10:50 Jiayin Lu
Delaunay Triangulation and Voronoi Tessellation, and Their Applications in Geometry Meshing
I will discuss some computational geometry work related to Voronoi tessellation and Delaunay triangulation. Voronoi tessellation is a beautiful and simple mathematical concept: given a set of discrete points in space, each location in the space is associated with the closest point in the point set. It has important applications in science and engineering. Material scientists can generate Voronoi diagrams on atomistic systems and analyze the Voronoi cell geometries to study material properties and predict material failure. However, as systems grow in size (e.g., millions of particles), the computational demands increase, necessitating efficient and scalable computational solutions. I will discuss my PhD work on the multithreaded parallel computation of Voronoi diagrams. A closely related geometric concept is the Delaunay triangulation, the dual graph of the Voronoi tessellation, which can be constructed by connecting points sharing Voronoi cell walls. It can be used for geometry meshing, which has applications in computer graphics and numerical simulations using the finite element method. I will discuss my PhD work on multithreaded geometry meshing in 2D. Lastly, I will discuss current ongoing work on using the Voronoi diagram for shape reconstruction from multi-view 2D images for computer graphics applications.
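The defining nearest-generator rule of a Voronoi tessellation fits in a few lines. The generator coordinates below are arbitrary illustrative values:

```python
# Generators of the Voronoi tessellation (arbitrary illustrative points).
generators = [(0.2, 0.3), (0.7, 0.8), (0.9, 0.1)]

def voronoi_cell(q):
    """Index of the generator whose Voronoi cell contains the query point q,
    i.e. the generator closest to q (squared distance suffices)."""
    qx, qy = q
    return min(range(len(generators)),
               key=lambda i: (generators[i][0] - qx) ** 2
                           + (generators[i][1] - qy) ** 2)

# The dual Delaunay triangulation connects pairs of generators whose
# Voronoi cells share a wall, i.e. pairs for which some location is
# equidistant to both and closer to them than to every other generator.
print(voronoi_cell((0.25, 0.25)))  # -> 0: nearest to generator (0.2, 0.3)
```

Production codes (including the multithreaded work described in the talk) replace this brute-force nearest-point search with spatial data structures, but the cell definition is exactly this rule.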
10:50-11:10 Dohyeon Kim
Uniform-in-time Mean Field Limit of Consensus-Based Optimization
Consensus-Based Optimization (CBO) algorithms are a recent family of particle methods for solving complex non-convex optimization problems. In many application settings, the objective function is not available in closed form. Additionally, derivatives may not be available, or very costly to obtain. CBO uses the Laplace principle to circumvent the use of gradients and is well-suited for black-box objectives. Most of the available analysis for this recent family of algorithms studies the corresponding mean-field descriptions of the distribution of particles. Convergence analysis with explicit rates is especially of interest in assessing algorithm performance and has mostly been done on the level of the mean-field PDEs. However, all results currently in the literature connecting the discrete particle system to the mean-field regime are restricted to finite time domains. In this talk, we present recent advances regarding the CBO algorithm and its variants and discuss uniform-in-time mean field limits.
11:10-11:30 Satish Chandran
On Structure-Preserving Discretization for Fokker-Planck Equations: A Discrete Markov Chain Perspective
We present a new approach for deriving structure-preserving numerical discretizations of Fokker-Planck equations by establishing a connection between the Fokker-Planck equation and its semi-discrete master equation at the level of the energy-dissipation law. The main idea is to determine the transition rate in the master equation via the detailed balance condition and the spatial discretization of the continuous energy-dissipation law. This approach ensures that the semi-discrete master equation satisfies the detailed balance condition and converges to the correct equilibrium solution. In addition to recovering existing transition rates proposed in earlier work, our framework uncovers new transition rates that have not been discussed in the current literature. These rates yield positivity-preserving and energy-stable schemes that are second-order accurate in space when the potential function is smooth and first-order accurate when the potential is discontinuous. This work is joint with Dr. Yiwei Wang (UCR).
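The detailed-balance construction described above can be illustrated generically in 1D. The symmetric rate choice below is one standard way to satisfy detailed balance, not necessarily the rate derived in the talk:

```python
import numpy as np

# 1D grid with potential V.  Detailed balance fixes the ratio of forward and
# backward transition rates between neighbors: r_ij / r_ji = exp(V_i - V_j).
# The symmetric choice r_ij = exp(-(V_j - V_i)/2) is one standard way to
# satisfy it (assumed here for illustration).
m = 50
xg = np.linspace(-2, 2, m)
V = xg ** 2

rate = np.zeros((m, m))
for i in range(m - 1):
    rate[i, i + 1] = np.exp(-(V[i + 1] - V[i]) / 2)
    rate[i + 1, i] = np.exp(-(V[i] - V[i + 1]) / 2)

# Generator of the Markov chain: Q[i, j] = rate i -> j, rows sum to zero.
Q = rate.copy()
np.fill_diagonal(Q, -rate.sum(axis=1))

# Evolve the master equation p' = p Q by explicit Euler from a uniform start.
p = np.full(m, 1.0 / m)
dt = 0.05
for _ in range(50000):
    p = p + dt * (p @ Q)

# Detailed balance guarantees the Gibbs distribution ~ exp(-V) is stationary.
gibbs = np.exp(-V)
gibbs /= gibbs.sum()
print(np.max(np.abs(p - gibbs)))
```

Because the rates satisfy detailed balance exactly on the grid, the discrete chain converges to the exact discrete Gibbs equilibrium, which is the structure-preservation property at stake.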
11:30-11:50 David Beers
Level Sets of Persistent Homology for Point Clouds
Persistent homology (PH) is an operation which, loosely speaking, describes the different holes in a point cloud. There are two main variants of PH for point clouds: Cech persistence and Vietoris-Rips persistence. How much information is lost when we apply PH to a point cloud? We investigate this question by studying the subspace of point clouds with the same PH. We find that the question of when the Vietoris-Rips persistence map is identifiable has close ties to rigidity theory. For example, we show that a generic point cloud being locally identifiable under Vietoris-Rips persistence is equivalent to a certain graph being rigid on the same point cloud. If time permits, we will also discuss connections between Cech persistence and rigidity theory, and bounds on the dimension of level sets of PH for both variants.
Track 6
Student Success Center 329
Chair: Yuming Zhang (Auburn)
10:30-10:50 Yuming Zhang
Convergence of Free Boundaries in the Incompressible Limit of Tumor Growth Models
10:50-11:10 Jiyoung Choi
Generalized Nash Equilibrium Problems With Quasi-linear Constraints
11:10-11:30 Mahmoud Abdelgalil
On the Sub-Riemannian Geometry of Optimal Mass Transport
11:30-11:50 George Stepaniants
A Spectral Theory of Volterra Equations and its Applications
10:30-10:50 Yuming Zhang
Convergence of Free Boundaries in the Incompressible Limit of Tumor Growth Models
In tumor growth models, two primary approaches are commonly used. The first, described by Porous Medium type equations, models the tumor cells as a distribution evolving over space. The second, based on Hele-Shaw type flows, focuses on the evolution of the domain occupied by the cells. These two models are connected through the incompressible limit. In this talk, I will discuss the convergence of free boundaries in the incompressible limit. As an outcome, we provide an upper bound on the Hausdorff dimension of free boundaries and show that the limiting free boundary has finite $(d-1)$-dimensional Hausdorff measure.
10:50-11:10 Jiyoung Choi
Generalized Nash Equilibrium Problems With Quasi-linear Constraints
We study generalized Nash equilibrium problems (GNEPs) whose objectives are polynomial functions and in which each player's constraints are linear in their own strategy. For such GNEPs, the KKT sets can be represented as unions of simpler sets by Caratheodory's theorem. We give a convenient representation for KKT sets using partial Lagrange multiplier expressions. This produces a set of branch polynomial optimization problems, which can be efficiently solved by Moment-SOS relaxations. By doing this, we can compute all generalized Nash equilibria or detect their nonexistence.
11:10-11:30 Mahmoud Abdelgalil
On the Sub-Riemannian Geometry of Optimal Mass Transport
We explore a hitherto unstudied sub-Riemannian structure of the Monge-Kantorovich transport, where the relative position of particles along their journey is modeled by the holonomy of the transportation schedule. We focus on controllability issues and characterize the structure group. The talk will conclude with a flashback on related concepts introduced in quantum theory that pertain to holonomy.
11:30-11:50 George Stepaniants
A Spectral Theory of Volterra Equations and its Applications
Volterra integral equations have been the subject of much study, from the perspective of both pure mathematics and applied science. Developments in analysis have yielded far-ranging existence, uniqueness, and spectral results for such equations. In applied science, Volterra equations are natural models for signal convolution or filtering, and they describe the evolution of a variety of partially-observed, time-dependent systems. In particular, the study of strain-stress dynamics in materials science has inspired closed-form solutions to special classes of Volterra equations with exponentially decaying memory, as well as certain Volterra equations involving fractional derivatives and Prony series. Only a limited number of these results have been proven rigorously, however, and their study in materials science has remained largely disjoint from the broader mathematical community. In this talk, we develop a spectral theory for scalar, linear Volterra equations, showing that a variety of disparate results in applied science (including the aforementioned results in viscoelasticity) are special cases of a more general theory. In particular, we derive analytic solutions for large classes of continuous or discrete-time linear Volterra equations, as well as fractional and delay differential equations. We show how our closed-form solutions can be realized numerically with rational approximation, study the analytical properties of these formulas, and test them on a wide array of problems relating to signal deconvolution, triangular matrix inversion, and interconversion of materials.
Track 7
Student Success Center 335
Chair: Yanxiang Zhao (GW)
10:30-10:50 Yanxiang Zhao
Synchronized Optimal Transport
10:50-11:10 Sebastiaan van Schie
Weighted Proper Orthogonal Decomposition for High-dimensional Optimization
11:10-11:30 Anugrah Jo Joshy
Unifying Reduced-space and Full-space Problem Formulations for Faster and More Reliable Optimization
11:30-11:50 Fuqun Han
Regularized Wasserstein Proximal Algorithms for Nonsmooth Sampling Problems
10:30-10:50 Yanxiang Zhao
Synchronized Optimal Transport
Optimal transport has been an essential tool for reconstructing dynamics from complex data. With the increasingly available multifaceted data, a system can often be characterized across multiple spaces. Therefore, it is crucial to maintain coherence in the dynamics across these diverse spaces. To address this challenge, we introduce synchronized optimal transport (SyncOT), a novel approach to jointly model dynamics that represent the same system through multiple spaces. Given the correspondence between the spaces, SyncOT minimizes the aggregated cost of the dynamics induced across all considered spaces. The problem is discretized into a finite-dimensional convex problem using a staggered grid. Primal-dual algorithm-based approaches are then developed to solve the discretized problem. Various numerical experiments demonstrate the capabilities and properties of SyncOT and validate the effectiveness of the proposed algorithms.
10:50-11:10 Sebastiaan van Schie
Weighted Proper Orthogonal Decomposition for high-dimensional optimization
Computing numerical solutions to high-fidelity Partial Differential Equation (PDE) models costs significant computational resources, thereby limiting the tractability of such models for applications such as optimization, where many such solutions may be required. Proper Orthogonal Decomposition (POD) is a widely-used data-driven model order reduction technique which speeds up the solution process by computing approximate, low-dimensional solutions based on known high-fidelity solution data. However, POD does not take into account any underlying parametric model structure and instead attempts to approximate all stored solution data equally. This limits its usefulness for PDE-constrained optimization problems. In this talk we propose a weighted POD approach, where we circumvent this limitation by applying weights to the known solution data. We present an a posteriori POD error bound for parametric models, show how applying weights to the solution data tightens the error bound, and present an algorithm to efficiently carry out weighted POD. Additionally, we use weighted POD to generalize the calculation of reduced basis derivatives to situations with multiple snapshots and multidimensional parameter spaces. We demonstrate the efficacy of these weighted POD methods by applying them to a gradient-based shell thickness optimization problem with over 100 design parameters and a time-dependent PDE. We show that numerical solutions obtained for this problem attain errors that are several orders of magnitude smaller when using weighted POD than those computed with regular POD and Grassmann manifold interpolation, while requiring fewer high-fidelity model snapshots and having competitive wall times. A manuscript detailing this work is currently in preparation.
11:10-11:30 Anugrah Jo Joshy
Unifying Reduced-space and Full-space Problem Formulations for Faster and More Reliable Optimization
Optimization problems with governing-equation constraints (e.g., PDEs/ODEs) are typically formulated as reduced-space (RS) or full-space (FS) problems. While the reduced-space formulation precisely solves the governing equations at every optimization iteration to eliminate the states from the optimization problem, the full-space formulation treats the governing equations as additional constraints and the state variables as additional decision variables in the optimization problem. As a result, the full-space problem is larger, with more constraints and decision variables, but it has a reduced cost of function evaluations since it does not require solving the governing equations at every iteration. In practice, neither formulation consistently outperforms the other when considering convergence rates and robustness; the best choice is often problem-specific. Finding the best approach for a new problem can be challenging as switching formulations necessitates a full reformulation and reimplementation, which is burdensome for complex problems. In this talk, we present SURF (strong unification of the reduced-space and full-space formulations), a novel hybrid framework that interpolates between the reduced-space and full-space within a single formulation. SURF enables optimization to converge with partially converged state solves and can exactly replicate the behavior of both RS and FS by simply adjusting solver tolerances. This unlocks a continuous spectrum of hybrid methods between the two extremes, offering the potential for greater efficiencies. We further discuss an adaptive hybrid selection scheme to minimize computational cost while preserving robustness. This scheme uses adjoint-based error estimation to determine the maximal solver tolerance that ensures a descent direction in the line search. Finally, we present numerical results to showcase the beneficial features of the proposed approach.
11:30-11:50 Fuqun Han
Regularized Wasserstein Proximal Algorithms for Nonsmooth Sampling Problems
In this talk, we introduce regularized Wasserstein proximal algorithms for nonsmooth sampling problems. We propose a splitting-based sampling algorithm for the time-implicit discretization of the probability flow associated with the Fokker-Planck equation. In this approach, the score function, defined as the gradient of the logarithm of the current probability density, is approximated using the regularized Wasserstein proximal. We establish convergence towards the target distribution in terms of Rényi divergences under suitable conditions. Finally, we demonstrate the effectiveness of our method through numerical experiments on high-dimensional nonsmooth sampling problems.
11:50-13:20 Lunch and Poster Session, Rivera (Library) Walkway
Plenary Session II
Student Success Center 229
Chair: Siting Liu (UCR)
13:20-14:00 Mihai Cucuringu (UCLA)
Spectral Methods for Clustering Signed/directed Networks and Heterogeneous Group Synchronization (AI Special Session)
14:05-14:45 Maziar Raissi (UCR)
Data-Efficient Deep Learning using Physics-Informed Neural Networks (AI Special Session)
14:50-15:30 Natalia Komarova (UCSD)
Mathematical Modeling of Spatial Evolution
13:20-14:00 Mihai Cucuringu (UCLA)
Spectral Methods for Clustering Signed/directed Networks and Heterogeneous Group Synchronization
Graph clustering problems typically arise in settings where there exists a discrepancy in the edge density within different parts of the graph. In this work, we consider problem instances where the underlying cluster structure arises as a consequence of a signal present on the edges or nodes, and is not driven by edge density. We first consider the problem of clustering in two families of networks, signed (with edge weights taking positive or negative values) and directed, both solvable by exploiting the spectrum of certain graph Laplacian matrices. We consider a generalized eigenvalue problem involving graph Laplacians, and provide performance guarantees under the setting of a Signed Stochastic Block Model, along with regularized versions to handle very sparse graphs (below the connectivity threshold), a regime where standard spectral methods are known to underperform. We also propose a spectral clustering algorithm for directed graphs based on a complex-valued representation of the adjacency matrix, which is able to capture the underlying cluster structures, for which the information encoded in the direction of the edges is crucial. We evaluate the proposed algorithm in terms of a cut flow imbalance-based objective function, which, for a given pair of clusters, captures the propensity of the edges to flow in a given direction. We analyze its theoretical performance on a Directed Stochastic Block Model. Finally, we discuss an extension of the classical angular synchronization problem that aims to recover unknown angles from a collection of noisy pairwise difference measurements. We consider a generalization to the heterogeneous setting where there exist k unknown groups of angles, and the measurement graph has an unknown edge-disjoint decomposition, where the subgraphs of noisy edge measurements correspond to each group.
We propose a probabilistic generative model for this problem, along with a spectral algorithm that comes with theoretical guarantees in terms of robustness against sampling sparsity and noise.
14:05-14:45 Maziar Raissi (UCR)
Data-Efficient Deep Learning Using Physics-Informed Neural Networks
A grand challenge with great opportunities is to develop a coherent framework that enables blending conservation laws, physical principles, and/or phenomenological behaviours expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computations, this work is pursuing the overall vision to establish promising new directions for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data. To materialize this vision, this work is exploring two complementary directions: (1) designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and non-linear differential equations, to extract patterns from high-dimensional data generated from experiments, and (2) designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.
14:50-15:30 Natalia Komarova (UCSD)
Mathematical Modeling of Spatial Evolution
Evolutionary dynamics permeates life and life-like systems. Mathematical methods can be used to study evolutionary processes, such as selection, mutation, and drift, and to make sense of many phenomena in life sciences. In this talk I will discuss how spatial interactions may change the laws of evolution, giving rise to a system of scaling laws that describe the growth of disadvantageous, neutral, and advantageous mutants in growing populations. Applications of these laws to bacterial growth and carcinogenesis will be discussed.
15:35-15:55 Conference Picture & Coffee Break, long stairs at Student Success Center 229
Afternoon Contributed Sessions (AI Special Session)
(Contributed Sessions II)
Track 1
Student Success Center 229
Chair: Badal Joshi (CSUSM)
15:55-16:15 Badal Joshi
Autonomous Computation Using Chemical Reactions
16:15-16:35 Lauren Conger
Multispecies Wasserstein-2 Gradient Flows: Analysis via Game Theory
16:35-16:55 Mst Shamima Hossain
Principled Mining, Forecasting and Monitoring of Honeybee Time Series with EBV+
16:55-17:15 Ray Zirui Zhang
BiLO: Bilevel Local Operator Learning for PDE inverse problems
15:55-16:15 Badal Joshi
Autonomous Computation Using Chemical Reactions
Recent technological advances, such as DNA-strand displacement, have enabled bioengineers to implement arbitrary chemical reactions in a cell. The dynamics of chemical reactions are governed by polynomial differential equations. This makes chemistry an ideal basis for building an analog computer in a cell. Analog computation is naturally suited for computing continuous processes, e.g. solving differential equations. On the other hand, discrete computations require novel algorithms. Since a chemistry-based computer is essentially different from a silicon-based computer, fundamental aspects of computing -- speed, accuracy, and efficiency -- must be re-imagined. I will discuss the merits of various algorithms for computing arithmetic operations using chemistry. I will describe our novel algorithms that compute a much larger class of functions than previously existed in the literature, while at the same time improving on the speed of computation (https://doi.org/10.1016/j.tcs.2024.114983). A key area of application is the construction of a neural network using chemical reactions (https://royalsocietypublishing.org/doi/10.1098/rsif.2021.0031).
16:15-16:35 Lauren Conger
Multispecies Wasserstein-2 Gradient Flows: Analysis via Game Theory
We utilize ideas from game theory to develop conditions under which we prove convergence results for multispecies Wasserstein-2 gradient flows. In particular, our results apply to settings where each species evolves according to a gradient flow, but the joint system is not a gradient flow. We prove existence of and convergence to a unique steady state, convergence of the velocity fields and second moments, and contraction in the Wasserstein-2 metric. To highlight the practical importance of these results, we provide numerical simulations for applications in classification algorithms, economics, and optimal transport.
16:35-16:55 Mst Shamima Hossain
Principled Mining, Forecasting and Monitoring of Honeybee Time Series with EBV+
Honeybees, as natural crop pollinators, play a significant role in biodiversity and food production for human civilization. Bees actively regulate hive temperature (homeostasis) to maintain a colony's proper functionality. Deviations from usual thermoregulation behavior due to external stressors (e.g., extreme environmental temperature, parasites, pesticide exposure, etc.) indicate an impending colony collapse. Anticipating such threats by forecasting hive temperature and finding changes in temperature patterns would allow beekeepers to take early preventive measures and avoid critical issues. In that case, how can we model bees' thermoregulation behavior for an interpretable and effective hive monitoring system? In this work, we propose the principled EBV+ (Electronic Bee-Veterinarian plus) method based on the thermal diffusion equation and a novel sigmoid feedback-loop (P) controller for analyzing hive health with the following properties: (i) it is effective on multiple, real-world beehive time sequences (recorded and streaming), (ii) it is explainable with only a few parameters (e.g., hive health factor) that beekeepers can easily quantify and trust, (iii) it issues proactive alerts to beekeepers before any potential issue affecting homeostasis becomes detrimental, and (iv) it is scalable with a time complexity of O(t) for reconstructing and O(t x m) for finding m cuts of a sequence with t time-ticks. Experimental results on multiple real-world time sequences showcase the potential and practical feasibility of EBV+. Our method yields accurate forecasting (up to 72% improvement in RMSE) with up to 600 times fewer parameters compared to baselines (ARX, seasonal ARX, Holt-Winters, and DeepAR), as well as detects discontinuities and raises alerts that coincide with domain experts' opinions. Moreover, EBV+ is scalable and fast, taking less than 1 minute on a stock laptop to reconstruct two months of sensor data.
16:55-17:15 Ray Zirui Zhang
BiLO: Bilevel Local Operator Learning for PDE inverse problems
Calibrating partial differential equation (PDE) models to data is crucial for understanding complex biological systems in the life sciences. We propose a new neural-network-based method for solving inverse problems for PDEs by formulating the PDE inverse problem as a bilevel optimization problem. At the upper level, we minimize the data loss with respect to the PDE parameters. At the lower level, we train a neural network to locally approximate the PDE solution operator in the neighborhood of a given set of PDE parameters, which enables an accurate approximation of the descent direction for the upper level optimization problem. We apply gradient descent simultaneously on both the upper and lower level optimization problems, leading to an effective and fast algorithm. The method, which we refer to as BiLO (Bilevel Local Operator learning), is also able to efficiently infer unknown functions in the PDEs through the introduction of an auxiliary variable. Through extensive experiments over multiple PDE systems, we demonstrate that our method enforces strong PDE constraints, is robust to sparse and noisy data, and eliminates the need to balance the residual and the data loss, which is inherent to the soft PDE constraints in many existing methods.
Track 2
Student Success Center 216
Chair: Hengrong Du (UCI)
15:55-16:15 Hengrong Du
Efficient Sampling in Constrained Domains: Skew-Reflected Non-Reversible Langevin Methods
16:15-16:35 Bohan Chen
Learning Enhanced Ensemble Filters
16:35-16:55 Kishan Panaganti
Sequential Learning under Model Ambiguity: Theoretical Foundations and Algorithm Design
16:55-17:15 Qihao Ye
A Robust Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics
15:55-16:15 Hengrong Du
Efficient Sampling in Constrained Domains: Skew-Reflected Non-Reversible Langevin Methods
Sampling efficiently from constrained probability distributions is crucial for many applications in Bayesian inference, statistical physics, and machine learning. We present a novel class of sampling algorithms based on Skew-Reflected Non-Reversible Langevin Dynamics (SRNLD). These dynamics incorporate skew reflection at the boundary to maintain constraint satisfaction and exploit non-reversibility to enhance convergence speed. We rigorously analyze the non-asymptotic convergence behavior of SRNLD in both total variation and 1-Wasserstein distances. Building on this, we propose a practical algorithm, SRNLMC, and prove its convergence to the desired distribution with bounded discretization error. Our methods outperform existing reversible algorithms in both theory and practice, as demonstrated by experiments on synthetic and real data.
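For orientation, the reversible baseline that SRNLD builds on can be sketched in a few lines: unadjusted Langevin dynamics on a half-line with naive reflection at the boundary. This is a hypothetical minimal illustration (the function name and setup are ours), not the speaker's skew-reflected, non-reversible scheme, which replaces the naive reflection and adds a non-reversible drift:

```python
import numpy as np

def reflected_langevin(grad_logp, x0, step, n_steps, rng):
    """Unadjusted Langevin dynamics on [0, inf) with naive reflection.

    grad_logp: gradient of the log-density of the (unconstrained) target;
    reflection keeps every iterate inside the constraint set.
    """
    x = x0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # Euler-Maruyama step of dX = grad_logp(X) dt + sqrt(2) dW
        x = x + step * grad_logp(x) + np.sqrt(2 * step) * rng.standard_normal()
        x = abs(x)  # reflect across the boundary x = 0
        samples[k] = x
    return samples
```

With `grad_logp = lambda x: -x`, the chain targets the standard Gaussian restricted to the half-line (a half-normal distribution), illustrating how boundary reflection enforces constraint satisfaction.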
16:15-16:35 Bohan Chen
Learning Enhanced Ensemble Filters
The filtering distribution in hidden Markov models evolves according to the law of a mean-field model in state-observation space. The ensemble Kalman filter (EnKF) approximates this mean-field model with an ensemble of interacting particles, employing a Gaussian ansatz for the joint distribution of the state and observation at each observation time. These methods are robust, but the Gaussian ansatz limits accuracy. This shortcoming is addressed by approximating the mean-field evolution with novel neural operators that take probability distributions as input: measure neural mappings (MNM). A novel approach to learning the MNM-enhanced ensemble filter is then introduced: the MNMEF method. The set transformer, a neural network architecture based on the attention mechanism, is used for this purpose. Like the true filtering distribution, the set transformer itself can be viewed in terms of a mean-field limit, and the limit can be approximated with empirical measures constructed from ensembles. The resulting filtering methodology thus has a mean-field representation which can be approximated with a finite ensemble and is invariant to ensemble permutation; furthermore, the mean-field formulation allows a single parameterization of the algorithm to be deployed at different ensemble sizes. In practice, fine-tuning of a small number of parameters, for specific ensemble sizes, further enhances the accuracy of the scheme. The promise of the approach is demonstrated by establishing it as state-of-the-art, with respect to root-mean-square error, in filtering the Lorenz '96 and Kuramoto--Sivashinsky models.
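The Gaussian-ansatz analysis step of the standard EnKF, which the MNM approach generalizes, can be sketched as follows. This is a minimal perturbed-observation variant for a linear observation operator; the function name and array shapes are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.

    X: (d, N) forecast ensemble; y: (m,) observation;
    H: (m, d) linear observation operator; R: (m, m) obs-noise covariance.
    """
    d, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance (Gaussian ansatz)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturbed observations, one per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                     # analysis ensemble
```

The sample covariance `P` is exactly where the Gaussian ansatz enters; the abstract's point is that learned measure-to-measure maps can replace this step when the filtering distribution is far from Gaussian.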
16:35-16:55 Kishan Panaganti
Sequential Learning under Model Ambiguity: Theoretical Foundations and Algorithm Design
Sequential decision-making algorithms such as reinforcement learning (RL) have achieved remarkable successes, such as outperforming humans in games like Atari and Go, fast chip design, robotics, and fine-tuning large language models. However, these successes are largely confined to structured or simulated environments. My research adds robustness to standard RL algorithms, allowing them to perform reliably in unstructured and complex environments. Training RL algorithms directly on real-world systems is expensive and potentially dangerous, so they are typically trained on simulators. This leads to a significant issue: the simulation-to-reality gap. Due to modeling approximation errors, changes in real-world parameters over time, and possible adversarial disturbances, there are inevitable mismatches between the training and real-world environments. This gap degrades the real-world performance of RL algorithms. My work addresses this fundamental challenge by developing novel robust RL theories and algorithms. In addition, I have also worked on various distribution-shift issues from an out-of-sampling viewpoint in many other machine learning settings, such as contextual bandits, imitation learning, domain adaptation, multi-agent learning, and, more recently, alignment for language models. The overarching goal of my research is to actively contribute to the AI revolution in real-world engineering systems by developing the fundamental theory and algorithms for sequential decision-making that will enable it.
16:55-17:15 Qihao Ye
A Robust Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics
Reinforcement learning (RL) is an active branch of machine learning focused on learning optimal policies to maximize cumulative rewards through interaction with the environment. While traditional RL research primarily deals with Markov decision processes in discrete time and space, we explore RL in a continuous-time framework, essential for high-frequency interactions such as stock trading and autonomous driving. Our research introduces a PDE-based framework for policy evaluation in continuous-time environments, where dynamics are modeled by Lévy processes. We also formulate the Hamilton-Jacobi-Bellman (HJB) equation for the corresponding stochastic optimal control problems governed by Lévy dynamics. Our approach includes two primary components: 1) estimating parameters of Lévy processes from observed data, and 2) evaluating policies by solving the associated integro-PDEs. In the first step, we use a fast solver for the fractional Fokker-Planck equation to accurately approximate transition probabilities. We demonstrate that combining this method with importance sampling techniques is vital for parameter recovery in heavy-tailed data distributions. In the second step, we offer a theoretical guarantee on the accuracy of policy evaluation considering modeling error. Our work establishes a foundation for continuous-time RL in environments characterized by complex, heavy-tailed dynamics.
Track 3
Student Success Center 235
Chair: Samuel Shen (SDSU)
15:55-16:15 Samuel Shen
Democracy of AI Numerical Weather Models: an Example of Global Forecasting With FourCastNetv2 Made by a University Research Lab Using GPU
16:15-16:35 Willem Diepeveen
Latent Diffeomorphic Dynamic Mode Decomposition
16:35-16:55 Angxiu Ni
Differentiating Unstable Diffusions
16:55-17:15 Sarah Marzen
A Dive Into Reservoir Computing: How to View Nonautonomous Dynamical Systems
15:55-16:15 Samuel Shen
Democracy of AI Numerical Weather Models: an Example of Global Forecasting With FourCastNetv2 Made by a University Research Lab Using GPU
This presentation describes the feasibility of democratizing AI global weather forecasting models among university research groups using GPUs. FourCastNetv2 is NVIDIA's advanced neural network for weather prediction and is trained on a 73-channel subset of the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) dataset at single levels and different pressure levels. Although the training specifications for FourCastNetv2 are not released to the public, the training documentation of the model's first generation, FourCastNet, is available to the public. The training used 64 A100 GPUs and took 16 hours to complete. While NVIDIA's model reduces time and cost for weather predictions compared to traditional numerical weather prediction (NWP), there are still challenges for small research groups to reproduce the same forecasting results with limited GPUs. We demonstrate both (i) leveraging FourCastNetv2 to create predictions through the designated application programming interface (API) and (ii) utilizing NVIDIA hardware to train the original FourCastNet model. Further, this paper demonstrates the capabilities and limitations of NVIDIA A100s for resource-limited research groups in universities. We also explore data management, training efficiency, and model validation, highlighting the advantages and challenges of using limited high-performance computing resources. Consequently, this paper and its corresponding GitHub materials may serve as an initial guide for other university research groups and courses related to machine learning, climate science, and data science to develop research and education programs on AI weather forecasting, and hence help democratize AI NWP in the digital economy.
16:15-16:35 Willem Diepeveen
Latent Diffeomorphic Dynamic Mode Decomposition
We present Latent Diffeomorphic Dynamic Mode Decomposition (LDDMD), a new approach for the analysis of non-linear systems that combines the interpretability of Dynamic Mode Decomposition (DMD) with the predictive power of Recurrent Neural Networks (RNNs). Notably, LDDMD maintains simplicity, which enhances interpretability, while effectively modeling and learning complex non-linear systems with memory, enabling accurate predictions. This is exemplified by its successful application in stream flow prediction.
16:35-16:55 Angxiu Ni
Differentiating Unstable Diffusions
We derive the path-kernel formula for the linear response, the parameter derivative of averaged observables, of SDEs. Here the parameter controls initial conditions, drift coefficients, and diffusion coefficients. Our formula tempers the instability by gradually moving the path perturbation to hit the probability kernel. It does not assume hyperbolicity but requires (either multiplicative or additive) noise. It extends the path-perturbation formula (or stochastic gradient method), the kernel-differentiation formula (or likelihood ratio method, or Cameron-Martin formula in Malliavin calculus), and the Bismut-Elworthy-Li formula. Then we derive a pathwise sampling algorithm and demonstrate it on the 40-dimensional Lorenz 96 system with noise.
16:55-17:15 Sarah Marzen
A Dive Into Reservoir Computing: How to View Nonautonomous Dynamical Systems
Reservoir computers promise an easy way to remember and predict an input signal -- but how can we optimize how well they work? We start by showing that, surprisingly, optimizing memory is not the same thing as optimizing predictive capabilities, ruling out a popular proposal for examining predictive capacity in reservoirs. Given that, we turn to simple reservoir computing architectures, analyze them, and discuss their limitations at prediction. Finally, we turn to a new way of thinking about the computation performed by reservoir computers by relating attractors, in the dynamical-systems sense, to the computation being performed.
Track 4
Student Success Center 308
Chair: Justin Baker (UCLA)
15:55-16:15 Justin Baker
Explainable Neural Operators in Digital Twin Modeling for Ultrafast Optics
16:15-16:35 Ricardo Baptista
Memorization and Regularization in Generative Diffusion Models
16:35-16:55 Ryan O'Dowd
Local Transfer Learning From One Data Space to Another
16:55-17:15 Adrien Weihs
Higher-Order Semi-Supervised Learning on Point Clouds Using Hypergraphs
15:55-16:15 Justin Baker
Explainable Neural Operators in Digital Twin Modeling for Ultrafast Optics
Neural operators are a promising tool for constructing digital twins, yet their latent state dynamics often remain a black box. To address this, we adapt the neural operator architecture to incorporate latent waves, a modification that explicitly mirrors the photon transport dynamics defined by the Frantz-Nodvik equations. In doing so, we demonstrate how the latent evolution of the model can be directly linked to, and validated against, the well-understood differential equations governing photon behavior. This connection not only exposes the intrinsic structure of the latent dynamics but also enables us to model multiple passes of a laser through a gain medium, capturing the evolving laser profile with precision. Ultimately, this approach marks a significant step forward in leveraging neural operators for the inverse design of laser gain media.
16:15-16:35 Ricardo Baptista
Memorization and Regularization in Generative Diffusion Models
Diffusion models have emerged as a powerful framework for generative modeling that relies on score matching to learn gradients of the data distribution's log-density. A key element for the success of diffusion models is that the optimal score function is not identified when solving the denoising score matching problem. In fact, the optimal score in both unconditioned and conditioned settings leads to a diffusion model that returns to the training samples and effectively memorizes the data distribution. In this presentation, we study the dynamical system associated with the optimal score and describe its long-term behavior relative to the training samples. Lastly, we show the effect of two forms of score function regularization on avoiding memorization: restricting the score's approximation space and early stopping of the training process. These results are numerically validated using distributions with and without densities including image-based problems.
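The memorization phenomenon described in this abstract can be seen directly from the closed-form minimizer of denoising score matching on a finite training set: the score of the Gaussian-smoothed empirical measure. The sketch below is an illustration of that standard formula (names are ours, not the speaker's code):

```python
import numpy as np

def optimal_score(x, data, sigma):
    """Score of the Gaussian-smoothed empirical measure
    p_sigma = (1/N) sum_i N(x_i, sigma^2 I): the exact minimizer of
    denoising score matching on a finite training set."""
    d2 = ((x[None, :] - data) ** 2).sum(axis=1)    # squared distances to data
    w = np.exp(-(d2 - d2.min()) / (2 * sigma**2))  # numerically stable softmax
    w /= w.sum()
    # score = (softmax-weighted mean of data - x) / sigma^2
    return (w[:, None] * (data - x[None, :])).sum(axis=0) / sigma**2
```

A single step `x + sigma**2 * optimal_score(x, data, sigma)` maps `x` exactly to a softmax-weighted average of the training points, so for small `sigma` the dynamics collapse onto the nearest training sample -- the memorization behavior that the talk's regularization strategies (restricting the approximation space, early stopping) are designed to avoid.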
16:35-16:55 Ryan O'Dowd
Local Transfer Learning From One Data Space to Another
A fundamental problem in manifold learning is to approximate a functional relationship in data chosen randomly from a probability distribution supported on a low-dimensional sub-manifold of a high-dimensional ambient Euclidean space. The manifold is essentially defined by the data set itself and, typically, designed so that the data is dense on the manifold in some sense. The notion of a data space is an abstraction of a manifold encapsulating the essential properties that allow for function approximation. The problem of transfer learning (meta-learning) is to use the learning of a function on one data set to learn a similar function on a new data set. In terms of function approximation, this means lifting a function on one data space (the base data space) to another (the target data space). This viewpoint enables us to connect some inverse problems in applied mathematics (such as the inverse Radon transform) with transfer learning. In this paper we examine the question of such lifting when the data is assumed to be known only on a part of the base data space. We are interested in determining subsets of the target data space on which the lifting can be defined, and how the local smoothness of the function and its lifting are related.
16:55-17:15 Adrien Weihs
Higher-Order Semi-Supervised Learning on Point Clouds Using Hypergraphs
We propose a higher-order hypergraph method for semi-supervised learning on point clouds. This is motivated by the fact that the classical hypergraph learning algorithm is asymptotically equivalent to p-Laplace Learning on graphs. Our new framework includes additional hypergraph geometric information by penalizing higher-order derivatives on hyperedges. We also preserve the quadratic form structure of Laplace Learning which greatly simplifies numerical implementations and we can reduce computational complexity through spectral truncation. In addition, this allows us to formulate the learning problem in the Bayesian setting. We present numerical results demonstrating the effectiveness of our methodology compared to other graph-based semi-supervised learning methods.
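As context, the classical (graph) Laplace learning baseline referenced above minimizes the quadratic form u^T L u subject to the given labels, which reduces to a single linear solve for the unlabeled nodes. A minimal sketch under the assumption of a small dense weight matrix (illustrative only; the speaker's method extends this to hypergraphs and higher-order penalties):

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels):
    """Classical Laplace learning: minimize u^T L u subject to u = labels
    on labeled nodes. The harmonic extension solves L_uu u_u = -L_ul u_l."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    u = np.zeros(n)
    u[labeled_idx] = labels
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    A = L[np.ix_(unlabeled, unlabeled)]
    b = -L[np.ix_(unlabeled, labeled_idx)] @ labels
    u[unlabeled] = np.linalg.solve(A, b)           # harmonic extension
    return u
```

The quadratic-form structure is exactly what the abstract says the hypergraph framework preserves, which is why the same one-solve implementation strategy (and spectral truncation) carries over.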
Track 5
Student Success Center 316
Chair: Shu Liu (UCLA)
15:55-16:15 Shu Liu
An Adversarial Deep Learning Approach Using Natural Gradients for Solving Partial Differential Equations
16:15-16:35 Chunyang Liao
Cauchy Random Features for Operator Learning in Sobolev Space
16:35-16:55 Xianjin Yang
Gaussian Process Policy Iteration with Additive Schwarz Acceleration for Forward and Inverse HJB and MFG Problems
16:55-17:15 Zihan Shao
Solving Nonlinear PDEs with Sparse Radial Basis Function Networks
15:55-16:15 Shu Liu
An Adversarial Deep Learning Approach Using Natural Gradients for Solving Partial Differential Equations
We propose a scalable, preconditioned primal-dual algorithm for solving partial differential equations (PDEs) using neural networks. By multiplying the PDE with a test function, we reformulate it as an inf-sup problem, yielding a loss function that involves lower-order differential operators. We employ the Primal-Dual Hybrid Gradient (PDHG) algorithm: by incorporating suitable preconditioning operators into the metric terms of the PDHG proximal steps, we derive a natural gradient ascent-descent optimization scheme for updating the primal and adversarial neural network parameters. These natural gradients are efficiently computed using Krylov subspace iterations. Numerical examples on linear and nonlinear equations up to 50 dimensions demonstrate improved accuracy, efficiency, and stability compared to conventional deep PDE solvers.
16:15-16:35 Chunyang Liao
Cauchy Random Features for Operator Learning in Sobolev Space
Operator learning is the approximation of operators between infinite-dimensional Banach spaces using machine learning approaches. While most progress in this area has been driven by variants of deep neural networks such as the Deep Operator Network and Fourier Neural Operator, the theoretical guarantees are often in the form of a universal approximation property. However, such existence theorems do not guarantee that an accurate operator network is obtainable in practice. Motivated by the recent kernel-based operator learning framework, we propose a random feature operator learning method with theoretical guarantees and error bounds. The random feature method can be viewed as a randomized approximation of a kernel method, which significantly reduces the computational requirements for training. We provide a generalization error analysis for our proposed random feature operator learning method along with comprehensive numerical results. Compared to kernel-based and neural network methods, the proposed method obtains similar or better test errors across benchmark examples with significantly reduced training times. An additional advantage is that our implementation is simple and does not require costly computational resources such as GPUs.
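For intuition, the random feature idea can be sketched as follows. This is a generic scalar-regression illustration, not the speaker's operator-learning code; it assumes frequencies drawn from a Cauchy distribution, which by Bochner's theorem corresponds to the Laplace kernel, and shows how training collapses to one linear solve:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_random_features(X, n_feat, gamma=1.0):
    """Random Fourier features with Cauchy-distributed frequencies; by
    Bochner's theorem these approximate the Laplace kernel
    exp(-gamma * |x - y|_1).  (Generic sketch, not the talk's method.)"""
    d = X.shape[1]
    W = gamma * rng.standard_cauchy((d, n_feat))
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

def fit_ridge(Phi, y, lam=1e-6):
    # Training reduces to a single ridge-regression linear solve,
    # in contrast to iterative neural network optimization.
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

# Recover f(x) = sin(3x) from noisy samples
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.01 * rng.standard_normal(200)
Phi = cauchy_random_features(X, n_feat=300, gamma=3.0)
c = fit_ridge(Phi, y)
train_err = np.abs(Phi @ c - y).max()
```

The closed-form solve is where the claimed reduction in training cost comes from: no gradient descent, no GPU.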
16:35-16:55 Xianjin Yang
Gaussian Process Policy Iteration with Additive Schwarz Acceleration for Forward and Inverse HJB and MFG Problems
We propose a Gaussian Process (GP)-based policy iteration framework for addressing both forward and inverse problems in Hamilton-Jacobi-Bellman (HJB) equations and Mean Field Games (MFGs). Policy iteration is formulated as an alternating procedure between solving for the value function under a fixed control policy and updating the policy based on the resulting value function. By exploiting the linear structure of GPs for function approximation, each policy evaluation step admits an explicit, closed-form solution, eliminating the need for numerical optimization. To improve convergence, we incorporate Additive Schwarz acceleration as a preconditioning step following each policy update. Numerical experiments demonstrate the effectiveness of Schwarz acceleration in improving computational efficiency.
16:55-17:15 Zihan Shao
Solving Nonlinear PDEs with Sparse Radial Basis Function Networks
We propose a novel framework for solving nonlinear PDEs using sparse radial basis function (RBF) networks. Sparsity-promoting regularization is employed to prevent over-parameterization and reduce redundant features. This work is motivated by longstanding challenges in traditional RBF collocation methods, along with the limitations of physics-informed neural networks (PINNs) and Gaussian process (GP) approaches, and aims to blend their respective strengths in a unified framework. The theoretical foundation of our approach lies in the function space of Reproducing Kernel Banach Spaces (RKBS) induced by one-hidden-layer neural networks of possibly infinite width. We prove a representer theorem showing that the sparse optimization problem in the RKBS admits a finite-width solution, and we establish error bounds that offer a foundation for generalizing classical numerical analysis. The algorithmic framework is based on a three-phase algorithm that maintains computational efficiency through adaptive feature selection, second-order optimization, and pruning of inactive neurons. Numerical experiments demonstrate the effectiveness of our method and highlight cases where it offers notable advantages over GP approaches. This work opens new directions for adaptive PDE solvers grounded in rigorous analysis with efficient, learning-inspired implementation.
Track 6
Student Success Center 329
Chair: Yinglun Zhu (UCR)
15:55-16:15 Yinglun Zhu
Strategic Scaling of Test-Time Compute
16:15-16:35 Ying Jiang
Physics-Driven Generative Models for Content Creation
16:35-16:55 Xianghao Kong
Advancing Diffusion Models with Enhanced Interpretability and Alignment
16:55-17:15 Laixi Shi
Robust Decision Making Without Compromising Learning Efficiency
15:55-16:15 Yinglun Zhu
Strategic Scaling of Test-Time Compute
Scaling test-time compute has emerged as a promising approach for improving the performance of large language models (LLMs). However, existing approaches typically allocate compute uniformly across all queries, neglecting the inherent variability in query difficulty. To address this inefficiency, we view test-time compute allocation as a bandit pure exploration problem, and develop algorithms that adaptively estimate query difficulty and strategically allocate compute accordingly. Compared to static uniform allocation, our algorithms effectively allocate more compute resources to challenging queries while maintaining accuracy on simpler ones. We conduct extensive experiments to validate the effectiveness of our approaches, demonstrating accuracy improvements of up to $11\%$ on the MATH-500 dataset and up to $8\%$ on LiveCodeBench.
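The difficulty-adaptive allocation idea can be illustrated with a hypothetical margin-based stopping rule. This is a toy stand-in for the bandit pure-exploration algorithms of the talk, with made-up per-sample accuracies: queries where repeated samples quickly agree consume little compute, while hard queries receive more.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_votes(p_correct, budget, margin=3, cap=15):
    """Hypothetical margin-based allocation: sample answers per query
    until one side leads by `margin` votes (easy queries stop early),
    capped per query and by a global compute budget.  Illustrative
    stand-in for the bandit pure-exploration algorithms of the talk."""
    used, correct = 0, 0
    for p in p_correct:
        lead, n = 0, 0
        while abs(lead) < margin and n < cap and used < budget:
            lead += 1 if rng.random() < p else -1
            n += 1
            used += 1
        correct += int(lead > 0)
    return correct / len(p_correct), used

# 30 easy queries (90% per-sample accuracy) and 30 hard ones (60%)
acc, used = adaptive_votes([0.9] * 30 + [0.6] * 30, budget=5000)
```

In this simulation, easy queries typically terminate after three samples, so most of the budget flows to the hard half; a uniform scheme would spend the same amount everywhere.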
16:15-16:35 Ying Jiang
Physics-Driven Generative Models for Content Creation
This talk explores advances in content creation through the development of efficient and realistic generation methods. As consumer Virtual Reality (VR) and Mixed Reality (MR) technologies continue to gain momentum, there is an increasing demand for more intuitive, accessible, and high-quality content creation workflows. However, traditional techniques for creating, editing, and interacting with virtual content remain highly complex, requiring significant engineering expertise and manual effort. To address these challenges, this research introduces a novel paradigm for multi-dimensional content creation, leveraging the latest advancements in generative models and physics-based simulation. Specifically, it explores anime-style animation synthesis and motion synthesis. By integrating powerful diffusion models with physics-based rules, this approach enables the generation of high-fidelity, physically plausible dynamics.
16:35-16:55 Xianghao Kong
Advancing Diffusion Models with Enhanced Interpretability and Alignment
Generative AI has attracted unprecedented attention in both academia and industry for its capacity to generate high-quality content across diverse modalities, from text to images. Diffusion models - a foundational technique in this field - have achieved impressive results yet still face pressing challenges: improving data distribution estimation for accurate synthesis, enhancing interpretability for greater insight and control, and refining cross-modality alignment for coherent outputs. To address these issues, I integrate concepts from information theory, focusing on three core objectives: 1. Develop a unified theoretical framework to strengthen the model's density estimation capability; 2. Enhance the model's interpretability for better insight into its internal mechanisms; 3. Improve cross-modality alignment for more coherent and controllable generation.
16:55-17:15 Laixi Shi
Robust Decision Making Without Compromising Learning Efficiency
Decision-making artificial intelligence (AI) has revolutionized human life, ranging from manufacturing and healthcare to scientific discovery. However, current AI systems often lack reliability and are highly vulnerable to small changes in complex, interactive, and dynamic environments. My research focuses on achieving both reliability and learning efficiency simultaneously when building AI solutions. These two goals seem conflicting, as enhancing robustness against variability often leads to more complex problems that require more data and computational resources, at the cost of learning efficiency. But does it have to? In this talk, I overview my work on building reliable decision-making AI without sacrificing learning efficiency, offering insights into effective optimization problem design for reliable AI. To begin, I will focus on reinforcement learning (RL), a key framework for sequential decision-making, and demonstrate how distributional robustness can be achieved provably without additional training data cost compared to non-robust counterparts. Next, shifting to decision-making in strategic multi-agent systems, I will demonstrate that incorporating realistic risk preferences, a key feature of human decision-making, enables computational tractability, a benefit not present in traditional models. Finally, I will present a vision for building reliable, learning-efficient AI solutions for human-centered applications.
Track 7
Student Success Center 335
Chair: Xue Feng (UCLA)
15:55-16:15 Xue Feng
Convergence Analysis of the Alternating Anderson-Picard Method for Nonlinear Fixed-Point Problems
16:15-16:35 William Sharpless
Linear Supervision for Nonlinear, High-Dimensional Neural Control and Differential Games
16:35-16:55 Chris Camano
Randomized Tensor Networks For Product Structured Data
16:55-17:15 Natanael Alpay
Special Functions as Solutions to Least Squares Problems
15:55-16:15 Xue Feng
Convergence Analysis of the Alternating Anderson-Picard Method for Nonlinear Fixed-point Problems
Anderson Acceleration (AA) has been widely used to solve nonlinear fixed-point problems due to its rapid convergence. This work focuses on a variant of AA in which multiple Picard iterations are performed between each AA step, referred to as the Alternating Anderson-Picard (AAP) method. Despite introducing more "slow" Picard iterations, this method has been shown to be efficient and even more robust in both linear and nonlinear cases. However, there is a lack of theoretical analysis of AAP in the nonlinear case. In this paper, we address this gap by establishing the equivalence between AAP and a multisecant-GMRES method that uses GMRES to solve a multisecant linear system at each iteration. From this perspective, we show that AAP "converges" to the Newton-GMRES method. Specifically, as the residual approaches zero, the multisecant matrix, the approximate Jacobian inverse, the search direction, and the optimization gain of AAP converge to their counterparts in the Newton-GMRES method. These connections provide insights for analyzing the asymptotic convergence properties of AAP. Consequently, we show that AAP is locally $q$-linearly convergent and provide an upper bound for its convergence factor. To validate the theoretical results, numerical examples are provided.
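The alternating structure described above can be sketched in a few lines. This is an editorial illustration, not the authors' implementation: the Anderson step here takes a regularized least-squares affine combination of stored iterates, which is one standard way to realize AA.

```python
import numpy as np

def aap(g, x0, m_picard=3, depth=3, tol=1e-10, max_outer=50):
    """Alternating Anderson-Picard sketch: several plain Picard sweeps
    x <- g(x), then one Anderson step built from the stored iterate and
    residual history.  (Illustrative only, regularized for safety.)"""
    x = np.asarray(x0, dtype=float)
    X, R = [], []                            # iterate / residual history
    for _ in range(max_outer):
        for _ in range(m_picard):            # the "slow" Picard sweeps
            x = g(x)
        r = g(x) - x
        X.append(x.copy()); R.append(r.copy())
        X, R = X[-depth:], R[-depth:]
        if np.linalg.norm(r) < tol:
            break
        # Anderson step: affine combination of g(x_i) = x_i + r_i whose
        # weights gamma minimize ||sum_i gamma_i r_i|| with sum gamma = 1
        Rm = np.column_stack(R)
        G = Rm.T @ Rm + 1e-12 * np.eye(Rm.shape[1])
        w = np.linalg.solve(G, np.ones(Rm.shape[1]))
        gamma = w / w.sum()
        x = sum(gi * (xi + ri) for gi, xi, ri in zip(gamma, X, R))
    return x

# Fixed point of g(x) = cos(x): the Dottie number 0.739085...
x_star = aap(np.cos, [1.0])
```

Even on this scalar toy problem the history-based step noticeably outpaces plain Picard iteration, which is the behavior the paper's multisecant-GMRES analysis explains.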
16:15-16:35 William Sharpless
Linear Supervision for Nonlinear, High-Dimensional Neural Control and Differential Games
As the dimension of a system increases, traditional methods for control and differential games rapidly become intractable, making the design of safe autonomous agents challenging in complex or team settings. Deep-learning approaches avoid discretization and have yielded numerous successes in robotics and autonomy, but in high-dimensional regimes their accuracy falls as sampling becomes less efficient. We propose using rapidly generated linear solutions of the partial differential equation (PDE) arising in the problem to accelerate and improve learned value functions for guidance in high-dimensional, nonlinear problems. We define two programs that combine supervision of the linear solution with a standard PDE loss. We demonstrate that these programs offer improvements in speed and accuracy in both a 50-D differential game problem and a 10-D quadrotor control problem.
16:35-16:55 Chris Camano
Randomized Tensor Networks For Product Structured Data
In recent years, tensor networks have emerged as a powerful low-rank approximation framework for addressing exponentially large data science problems without requiring exponential computational resources. In this talk, we demonstrate how tensor networks, when combined with accelerations from randomized numerical linear algebra (rNLA), can enable the efficient representation and manipulation of large-scale, complex datasets originating from quantum physics, high-dimensional function approximation, and neural network compression. We will start by describing how to construct a tensor network directly from input data. Building on this foundation, we then describe a new randomized algorithm called Successive Randomized Compression (SRC) that asymptotically accelerates the tensor network analog of matrix-vector multiplication using the randomized singular value decomposition. As a demonstration, we present examples showing how tensor-network-based simulations of quantum dynamics in $2^{100}$ dimensions can be performed on a personal laptop.
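As background, the randomized SVD used as a building block above can be sketched in a few lines; this is a generic Halko-Martinsson-Tropp-style range finder, not the SRC code itself:

```python
import numpy as np

def randomized_svd(A, rank, oversample=5, seed=0):
    """Generic randomized SVD (random range finder + small dense SVD),
    the kind of rNLA primitive that accelerates low-rank compression."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # approximate range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-3 test matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
U, s, Vt = randomized_svd(A, rank=3)
err = np.linalg.norm(A - (U * s) @ Vt)
```

The key cost is the product A @ Omega; replacing a full SVD with this sketch is what makes the tensor-network analog of matrix-vector multiplication cheap enough for very high-dimensional problems.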
16:55-17:15 Natanael Alpay
Special Functions as Solutions to Least Squares Problems
We first introduce the complex counterpart of the representer theorem in machine learning. We then discuss some learning and minimization problems in RKHS, where we look for the correct input and output data sets to retrieve special functions as solutions to the least squares regression minimization problem. In particular, we retrieve the superoscillations in the Fock space, the RBF kernel in the RBF space, and the Blaschke product in the Hardy space. Finally, we give an extension to other minimization problems via different loss functions.
Afternoon Contributed Sessions
(Contributed Session III)
Track 1
Student Success Center 229
Chair: Hector Banos (CSUSB)
17:20-17:40 Hector Banos
Anomalous Phylogenetic Networks under the Coalescent model
17:40-18:00 John Palacios
A Stochastic Lifecycle Model of Intracellular Chlamydia Trachomatis informed by Quantitative 3D Electron Micrograph Analysis
18:00-18:20 Victoria Chebotaeva
Erlang-Distributed SEIR Epidemic Model with Symptomatic and Asymptomatic Individuals
17:20-17:40 Hector Banos
Anomalous Phylogenetic Networks under the Coalescent model
Hybridization plays a vital role in the evolution of certain species, making traditional phylogenetic trees insufficient for capturing species-level relationships. Instead, phylogenetic networks provide a more comprehensive framework for representing evolutionary histories that involve such interactions. The Network Multispecies Coalescent Model (NMSC) describes how gene trees arise within a phylogenetic network under incomplete lineage sorting. An anomalous phylogenetic network is one in which an unrooted tree topology displayed in the network appears in gene trees less frequently (under the NMSC) than a topology not present in the network. Understanding these anomalies is essential, as they can pose challenges for inference methods. In this talk, we present results on anomalous networks within the NMSC framework, focusing on the topological properties that make a 4-taxon network anomalous.
17:40-18:00 John Palacios
A Stochastic Lifecycle Model of Intracellular Chlamydia Trachomatis informed by Quantitative 3D Electron Micrograph Analysis
Chlamydia trachomatis is an obligate intracellular bacterium that causes the most common sexually transmitted bacterial infections and the preventable blindness known as trachoma in underdeveloped countries. C. trachomatis undergoes a biphasic life cycle, alternating between the replicative Reticulate Body (RB) phase and the infectious Elementary Body (EB) phase. Understanding the trigger for RB-to-EB conversion is critical. This study utilizes 3D electron micrographs of 14,452 individual Chlamydia observations in 19 inclusions, sampled at 16 (n=2), 24 (n=7), 28 (n=7), 32 (n=2), and 40 (n=1) hours post-infection (h.p.i.) of the host cell. Using PyRadiomics, we extracted 35 radiomic features for each bacterium. We used UMAP dimensionality reduction to visualize the data in 2D, showing that these features can distinguish the four developmental phases of C. trachomatis. Partitioning the data by h.p.i. suggests a developmental trajectory. Focusing on the onset of conversion around 24 to 28 h.p.i., we propose a pseudotemporal ordering based on cell count, grouping samples into developmental stages. This approach suggests a developmental trajectory graduated by a reduction in RB size preceding conversion. Notably, a few large replicative forms were present in each developmental stage with intermediate-volume observations, suggesting they are not quiescent and contribute to population volume heterogeneity. Initial analysis supports the size control hypothesis for conversion, but the origin of large replicative forms in each developmental stage is unclear. We propose a Branching Brownian Motion model for the time evolution of C. trachomatis to explain how such nuanced phenomena may arise from stochasticity in morphological phase change timing and growth rate. Augmenting our dataset with literature observations, we derive relevant model parameters, such as generation time, bacterium growth rate, and conversion size window.
18:00-18:20 Victoria Chebotaeva
Erlang-Distributed SEIR Epidemic Model with Symptomatic and Asymptomatic Individuals
We examine the effects of different dynamics in epidemiological models. Our model focuses on the division of exposed and infectious individuals into symptomatic and asymptomatic subclasses. Our findings emphasize the importance of adaptive control measures, such as targeted testing, contact tracing, and isolation, in effectively containing disease spread while minimizing societal and economic impacts. The model highlights the distinct roles of symptomatic and asymptomatic individuals, demonstrating how tailored public health strategies can improve resource management and mitigate the socio-economic effects of outbreaks.
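For readers unfamiliar with Erlang-distributed stage models: the standard "linear chain trick" replaces the exponential latent period with k sequential substages. The sketch below is an editorial toy with made-up parameters, using a plain SEIR without the symptomatic/asymptomatic split of the talk and simple forward-Euler time stepping:

```python
import numpy as np

def erlang_seir(beta, sigma, gamma, k, y0, T, dt=0.01):
    """Erlang-distributed SEIR via the linear chain trick: the exposed
    class is split into k substages, each left at rate k*sigma, so the
    latent period is Erlang(k, k*sigma) rather than exponential.
    Toy forward-Euler integration; the talk's symptomatic/asymptomatic
    split is omitted here for brevity."""
    S, E, I, R = y0[0], list(y0[1]), y0[2], y0[3]
    for _ in range(int(round(T / dt))):
        inflow = beta * S * I
        dE = [inflow - k * sigma * E[0]] + \
             [k * sigma * (E[j - 1] - E[j]) for j in range(1, k)]
        dI = k * sigma * E[-1] - gamma * I
        dR = gamma * I
        S += -dt * inflow
        E = [e + dt * de for e, de in zip(E, dE)]
        I += dt * dI
        R += dt * dR
    return S, sum(E), I, R

# Hypothetical parameters: 3 latent substages, mean latent period 1/sigma
S_f, E_f, I_f, R_f = erlang_seir(beta=0.5, sigma=0.2, gamma=0.1, k=3,
                                 y0=(0.99, [0.01, 0.0, 0.0], 0.0, 0.0),
                                 T=50.0)
```

Because each substage outflow feeds the next, total population is conserved exactly by the scheme; adding symptomatic and asymptomatic branches amounts to splitting the outflow of the last latent substage.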
Track 2
Student Success Center 216
Chair: Ben Fitzpatrick (LMU)
17:20-17:40 Ben Fitzpatrick
Evolutionary Modeling of Scientific Research and P-Hacking
17:40-18:00 Stephanie Reed
A Biased Dollar Exchange Model Involving Bank and Debt With Discontinuous Equilibrium
18:00-18:20 Hubeyb Gurdogan
The Quadratic Optimization Bias Of Large Covariance Matrices
17:20-17:40 Ben Fitzpatrick
Evolutionary Modeling of Scientific Research and P-Hacking
In recent years, concern has grown about the inappropriate application and interpretation of P values, especially the use of P<0.05 to denote "statistical significance" and the practice of P-hacking to produce results below this threshold and selectively reporting these in publications. Such behavior is said to be a major contributor to the large number of false and nonreproducible discoveries found in academic journals. In response, it has been proposed that the threshold for statistical significance be changed from 0.05 to 0.005. The aim of the current study was to use an evolutionary agent-based model composed of researchers who test hypotheses and strive to increase their publication rates in order to explore the impact of a 0.005 P value threshold on P-hacking and published false positive rates. The results supported the view that a more stringent P value threshold can serve to reduce the rate of published false positive results. Researchers still engaged in P-hacking with the new threshold, but the effort they expended increased substantially and their overall productivity was reduced, resulting in a decline in the published false positive rate.
17:40-18:00 Stephanie Reed
A Biased Dollar Exchange Model Involving Bank and Debt With Discontinuous Equilibrium
In this work, we investigate a biased dollar exchange model with a collective debt limit, in which agents, picked at random with a rate depending on the number of dollars they hold, give at random times a dollar to another agent picked uniformly at random, provided they have at least one dollar in their pockets or can borrow a dollar from a central bank if the bank is not empty. The dynamics enjoys a mean-field type interaction and partially extends recent work on a related model. We perform a formal mean-field analysis as the number of agents grows to infinity and, as a by-product, discover a two-phase (ODE) dynamics behind the underlying stochastic N-agent dynamics. Numerical experiments on the two-phase (ODE) dynamics are also conducted, where we observe convergence towards its unique equilibrium in the large-time limit.
18:00-18:20 Hubeyb Gurdogan
The Quadratic Optimization Bias Of Large Covariance Matrices
We describe a puzzle involving the interactions between an optimization of a multivariate quadratic function and a "plug-in" estimator of a spiked covariance matrix. When the largest eigenvalues (i.e., the spikes) diverge with the dimension, the gap between the true and the out-of-sample optima typically also diverges. We show how to "fine-tune" the plug-in estimator in a precise way to avoid this outcome. Central to our description is a "quadratic optimization bias" function, the roots of which determine this fine-tuning property. We derive an estimator of this root from a finite number of observations of a high dimensional vector. This leads to a new covariance estimator designed specifically for applications involving quadratic optimization. Our theoretical results have further implications for improving low dimensional representations of data, and principal component analysis in particular.
Track 3
Student Success Center 235
Chair: Justin Marks (Biola)
17:20-17:40 Justin Marks
Improved Hill Climbing for the Stable Marriage Problem
17:40-18:00 Brian Ryals
The Schwarzian Derivative of Rational Functions
18:00-18:20 Sally Li
Modeling Millicharged Particles: From Dark Matter to Detection
17:20-17:40 Justin Marks
Improved Hill Climbing for the Stable Marriage Problem
In the stable marriage problem, whose study was recognized with a Nobel Prize, n men and n women rank participants of the opposite gender in order of preference. The goal of our work is to find preference profiles that maximize the number of stable matchings, a long-standing research problem. Each participant has n! ways of ranking the opposite gender, yielding n!^(2n) possible preference profiles. Using hill climbing and systematic seed selection, we improve on five records that have been in place for decades, for orders n=7, 9, 11, 13, 15. This research lies at the intersection of combinatorics and computational algorithm design, and extends the work of beloved Biola University mathematics professor emeritus Ed Thurber.
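To make the search objective concrete: counting the stable matchings of a given preference profile reduces to a blocking-pair check over matchings. The brute-force sketch below is illustrative only, feasible just for tiny n, whereas the records concern n up to 15 and require hill climbing over profiles.

```python
from itertools import permutations

def is_stable(match, men_pref, women_pref):
    """match[m] = woman matched to man m.  Stable iff no pair (m, w)
    prefer each other to their assigned partners (no blocking pair)."""
    n = len(match)
    husband = {w: m for m, w in enumerate(match)}
    rank_m = [{w: i for i, w in enumerate(p)} for p in men_pref]
    rank_w = [{m: i for i, m in enumerate(p)} for p in women_pref]
    for m in range(n):
        for w in range(n):
            if w != match[m] \
               and rank_m[m][w] < rank_m[m][match[m]] \
               and rank_w[w][m] < rank_w[w][husband[w]]:
                return False            # blocking pair found
    return True

def count_stable(men_pref, women_pref):
    """Brute force over all n! matchings -- feasible only for tiny n;
    the record search itself relies on hill climbing instead."""
    n = len(men_pref)
    return sum(is_stable(p, men_pref, women_pref)
               for p in permutations(range(n)))

# Classic cyclic profile for n = 3, known to admit 3 stable matchings
men = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
women = [[1, 2, 0], [2, 0, 1], [0, 1, 2]]
n_stable = count_stable(men, women)
```

This count is the objective that hill climbing maximizes as the preference profile is perturbed.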
17:40-18:00 Brian Ryals
The Schwarzian Derivative of Rational Functions
The Schwarzian derivative appears in many seemingly unrelated topics in mathematics, including projective geometry, cocycles, Sturm-Liouville equations, orbits of difference equations, and univalent functions. The talk will start with a gentle introduction to the Schwarzian derivative with a few accessible applications from different branches of mathematics. Then some new theorems will be presented that relate the sign of the Schwarzian derivative to interlacing properties of the zeros and poles of rational functions.
18:00-18:20 Sally Li
Modeling Millicharged Particles: From Dark Matter to Detection
Dark matter is supported by strong observational evidence, but its particle nature remains unknown. In this talk, I introduce millicharged particles (mCPs) as a minimal extension of the Standard Model. I focus on a concrete production mechanism, proton bremsstrahlung, and outline how such processes can be modeled. The talk concludes with brief remarks on the challenges involved in detecting these weakly interacting particles.
Track 4
Student Success Center 308
Chair: Ryan Aschoff (UCR)
17:20-17:40 Ryan Aschoff
Smooth Non-decaying Solutions To The 2D Dissipative Quasi-geostrophic Equations
17:40-18:00 Mustafa Sencer Aydin
Vanishing Viscosity Limit for the Navier-Stokes Equations With Navier Boundary Conditions
18:00-18:20 Evan Davis
Power Law Scaling in a Model for Mass Shedding Dynamics in Thin-film Particle-laden Flows
17:20-17:40 Ryan Aschoff
Smooth Non-decaying Solutions To The 2D Dissipative Quasi-geostrophic Equations
We consider the surface quasi-geostrophic equation in two spatial dimensions, with fractional diffusion of order $2\alpha$ for $\alpha \in (1/2,1]$. We establish existence of solutions without assuming either decay at spatial infinity or spatial periodicity. In this setting, the usual constitutive law recovering the transport velocity cannot be used. We therefore replace it with a generalized constitutive law of Serfati type. We prove that for data bounded in $L^\infty$, there exists a unique solution on a finite time interval. Further, we show that for data in $C^{k}$ with $k \geq 2$, the unique solution can be extended to arbitrary times.
17:40-18:00 Mustafa Sencer Aydin
Vanishing Viscosity Limit for the Navier-Stokes Equations With Navier Boundary Conditions
The vanishing viscosity limit concerns the behavior of solutions to the incompressible Navier-Stokes equations as viscosity tends to zero. A fundamental question in fluid dynamics is whether the limit yields a solution of the Euler equations, the governing equations for ideal fluid motion. In the presence of a physical boundary, this question is closely tied to the formation and behavior of boundary layers. In this talk, we review key developments in the field and present our recent results on establishing the vanishing viscosity limit under minimal assumptions. This is joint work with Igor Kukavica.
18:00-18:20 Evan Davis
Power Law Scaling in a Model for Mass Shedding Dynamics in Thin-film Particle-laden Flows
We consider gravity-driven particle-laden flow on a slope in the high volume fraction limit. It is well-known that particles have a tendency to accumulate at the fluid front, and this system can be described by a singular shock with mass accumulation in the shock. Experimental observations show that this system exhibits mass shedding from the shock layer, in which clumps of particles break off and quickly slide down the substrate. Here we show that there is a natural extension of the singular shock theory that describes these dynamics in terms of mass shedding events. Basic assumptions, namely that (a) the speed of a clump is determined by simple lubrication theory, (b) colliding clumps of mass coalesce, and (c) the initial amounts of mass shed vary statistically, lead to a model system that exhibits power law scaling of the mass distribution with respect to distance along the slope. Moreover, this scaling is statistically stationary with respect to time. Such behavior is well-documented in classical models for avalanches and sandpiles but has not been observed or studied in slurry flow.
Track 5
Student Success Center 316
Chair: Jinghao Cao (Caltech)
17:20-17:40 Jinghao Cao
Fast Singular-kernel Convolution on General Non-smooth Domains via Truncated Fourier Filtering
17:40-18:00 Lingyun Ding
Semi-implicit-explicit Runge-Kutta Method for Nonlinear Differential Equations
18:00-18:20 Krishna Yamanappa Poojara
"Truncated Fourier Filtering" Algorithm for Fast and High-order Numerical Evaluation of Convolution Operators in General Domains
17:20-17:40 Jinghao Cao
Fast Singular-kernel Convolution on General Non-smooth Domains via Truncated Fourier Filtering
The rapid and accurate evaluation of convolutions with singular kernels plays a crucial role in a wide range of scientific and engineering applications. These include convolutions with $ 1/r^{\alpha} $-kernels in fractional diffusion, as well as the $ \log(r) $- and $ 1/r $-type singular kernels encountered in potential theory, acoustics, electromagnetic scattering, and quantum mechanics. Building upon the recently introduced Truncated Fourier Filtering method for smooth kernels, this presentation introduces a fast and high-order numerical convolution methodology that extends to singular kernels and non-smooth domains. Based on the use of truncated Fourier expansions of certain orders $F$ of the characteristic function of the integration domain, as well as corresponding expansions of products of characteristic functions and singular functions, together with specifically selected $F$-dependent numbers of trapezoidal-rule integration nodes, the proposed algorithm provides high-order accuracy despite the grossly inaccurate approximations inherent in the truncated Fourier expansions of the discontinuous characteristic functions considered. A full theoretical analysis will be presented that explains the surprising and beneficial properties of the proposed approach. Additionally, several numerical examples will be provided to illustrate the method's performance and effectiveness.
17:40-18:00 Lingyun Ding
Semi-implicit-explicit Runge-Kutta Method for Nonlinear Differential Equations
A semi-implicit-explicit (semi-IMEX) Runge-Kutta (RK) method is proposed for the numerical integration of ordinary differential equations (ODEs) of the form $\mathbf{u}' = \mathbf{f}(t,\mathbf{u}) + G(t,\mathbf{u}) \mathbf{u}$, where $\mathbf{f}$ is a non-stiff term and $G\mathbf{u}$ represents the stiff terms. Such systems frequently arise from spatial discretizations of time-dependent nonlinear partial differential equations (PDEs). For instance, $G$ could involve higher-order derivative terms with nonlinear coefficients. Traditional IMEX-RK methods, which treat $\mathbf{f}$ explicitly and $G\mathbf{u}$ implicitly, require solving nonlinear systems at each time step when $G$ depends on $\mathbf{u}$, leading to increased computational cost and complexity. In contrast, the proposed semi-IMEX scheme treats $G$ explicitly while keeping $\mathbf{u}$ implicit, reducing the problem to solving only linear systems. This approach eliminates the need to compute Jacobians while preserving the stability advantages of implicit methods. A family of semi-IMEX RK schemes with varying orders of accuracy is introduced. Numerical simulations for various nonlinear equations, including nonlinear diffusion models, the Navier-Stokes equations, and the Cahn-Hilliard equation, confirm the expected convergence rates and demonstrate that the proposed method allows for larger time step sizes without triggering stability issues.
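The core idea can be illustrated with a first-order analogue, an editorial sketch rather than the Runge-Kutta schemes of the talk: freezing G at the current state while keeping u implicit turns each step into a single linear solve.

```python
import numpy as np

def semi_imex_euler(f, G, u0, t0, t1, steps):
    """First-order semi-IMEX sketch: G is evaluated at the current state
    (explicit) but u stays implicit, so each step solves the LINEAR system
    (I - dt G(t_n, u_n)) u_{n+1} = u_n + dt f(t_n, u_n).
    No Jacobians and no nonlinear solves are needed."""
    u = np.asarray(u0, dtype=float)
    t, dt = t0, (t1 - t0) / steps
    I = np.eye(len(u))
    for _ in range(steps):
        u = np.linalg.solve(I - dt * G(t, u), u + dt * f(t, u))
        t += dt
    return u

# Stiff linear test: u' = G u with G = diag(-1000, -1) on [0, 1].
# Explicit Euler would need dt < 2e-3 for stability; dt = 0.05 works here.
G = lambda t, u: np.diag([-1000.0, -1.0])
f = lambda t, u: np.zeros(2)
u = semi_imex_euler(f, G, [1.0, 1.0], 0.0, 1.0, steps=20)
```

When G depends on u, the matrix in the linear solve simply changes each step; that is exactly what distinguishes this approach from a traditional IMEX step, which would require a nonlinear solve.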
18:00-18:20 Krishna Yamanappa Poojara
"Truncated Fourier Filtering" Algorithm for Fast and High-order Numerical Evaluation of Convolution Operators in General Domains
In this talk, we present a novel "Truncated Fourier Filtering" (TFF) algorithm for evaluating convolution integrals in general domains. The method can readily be applied to the heat equation, wave propagation, and other important problems. The TFF approach, which proceeds by replacing the characteristic function of the integration domain with a suitable Fourier series expansion, is extremely simple, delivers high-order accuracy in $O(N \log(N))$ operations, and can easily be applied to general complex domains. Moreover, even for piecewise smooth densities, the proposed strategy yields high-order convergence for smooth kernels. As applications, we illustrate the numerical solution of the heat equation and of an acoustic wave scattering problem in two dimensions.
Track 6
Student Success Center 329
Chair: Mykhailo Potomkin (UCR)
17:20-17:40 Mykhailo Potomkin
Multiscale Analysis of a Kinetic Model for Confined Active Matter
17:40-18:00 Poorva Shukla
On the Existence and Decay Rates of Localized Eigenvectors for Certain Localized Matrices
18:00-18:20 Pedro Abdalla Teixeira
Global Synchronization, Burer-Monteiro and Expanders
17:20-17:40 Mykhailo Potomkin
Multiscale Analysis of a Kinetic Model for Confined Active Matter
The concept of active matter refers to systems composed of many interacting constituents capable of persistent autonomous motion. At the microscale, active matter is exemplified by biological systems, such as suspensions of motile bacteria that consume nutrients and convert them into swimming motion. In addition to exhibiting various types of self-organization, active micro-swimmers display non-trivial individual behaviors in imposed flows with confinement, such as wall accumulation and a tendency to swim upstream. Active matter is typically modeled using agent-based models, which require numerous simulations to reliably determine the distribution of active matter in confined spaces. The focus of this talk will be on the development of a kinetic approach to this model, which enables the direct computation of the probability distribution function for the location and orientation of active agents. A key feature of this approach is its ability to account for accumulation near walls and to include two distinct probability distribution functions: one for agents in the bulk and another for accumulated agents. Specifically, I will present results on the well-posedness and rigorous derivation of the kinetic equations through the singular limit of small spatial diffusion.
17:40-18:00 Poorva Shukla
On the Existence and Decay Rates of Localized Eigenvectors for Certain Localized Matrices
The Laplacians of certain heterogeneous graphs possess localized eigenvectors, which lead to low fragility in the dynamics of certain networked systems on the graph. Motivated by this problem, we prove sufficient conditions under which an eigenvector $v$ of a matrix $A$ is localized. The matrices we consider have decay away from a sparse region and are large or infinite. We use techniques from spectral theory, approximation theory, and matrix algebras to prove that when an eigenvalue $\lambda$ is in the discrete spectrum, the corresponding eigenvector $v$ is localized. Moreover, the decay rate of the localized eigenvector is the same as the decay rate of the entries of $A^{-1}$. A few examples and applications are discussed.
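The phenomenon can be seen in a standard toy example (my illustration, not from the talk): a 1D discrete Laplacian with a single diagonal defect. The defect pushes one eigenvalue out of the essential band into the discrete spectrum, and the corresponding eigenvector decays exponentially away from the defect site.

```python
import numpy as np

# Tridiagonal discrete Laplacian on n sites (essential band [0, 4])
# with a sparse (single-entry) diagonal defect at the center.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[n // 2, n // 2] += 5.0  # localized defect

vals, vecs = np.linalg.eigh(A)
v = np.abs(vecs[:, -1])   # eigenvector of the split-off (largest) eigenvalue

# The split-off eigenvalue lies outside the band [0, 4] ...
assert vals[-1] > 4.0
# ... and its eigenvector is localized: 30 sites away from the defect,
# the amplitude is negligible relative to the peak.
assert v[n // 2 + 30] < 1e-6 * v[n // 2]
```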
18:00-18:20 Pedro Abdalla Teixeira
Global Synchronization, Burer-Monteiro and Expanders
The famous Kuramoto model in physics has become a prototypical model for studying synchronization in networks. The Burer-Monteiro factorization is a widely used technique for the low-rank factorization of positive semidefinite programs arising in synchronization problems in data science. Perhaps surprisingly, these two problems are intimately connected. In this talk, I will describe the connection between them through the lens of non-convex optimization, together with novel approaches via spectral graph theory (expander graphs) and the probabilistic method.
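For concreteness, a minimal simulation of the homogeneous Kuramoto model (a generic sketch, not taken from the talk): on a well-connected graph, $\dot\theta_i = \sum_{j \sim i} \sin(\theta_j - \theta_i)$ drives random initial phases toward the globally synchronized state.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.ones((n, n)) - np.eye(n)    # complete graph (an expander)
theta = rng.uniform(-1.5, 1.5, n)  # random initial phases in a half-circle

# Forward Euler integration of the homogeneous Kuramoto dynamics.
dt = 0.02
for _ in range(1500):
    coupling = np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * coupling

# Order parameter r = |mean(exp(i*theta))| equals 1 at full synchrony.
r = np.abs(np.mean(np.exp(1j * theta)))
assert r > 0.999
```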
Track 7
Student Success Center 335
Chair: James Alcala (USC)
17:20-17:40 James Alcala
Accelerated Popov's Scheme with a Moving Anchor
17:40-18:00 Minxin Zhang
Tensor Randomized Kaczmarz Methods for Linear Feasibility Problems
18:00-18:20 Zeyi Xu
Adaptive Accelerated Gradient Descent Methods for Convex Optimization
17:20-17:40 James Alcala
Accelerated Popov's Scheme with a Moving Anchor
Recent advancements in algorithms for minimax problems have brought accelerated convergence rates to a variety of problem settings. Many of these algorithms take advantage of the well-known extragradient framework, in which an extrapolation step is introduced in the algorithm that, among other things, aids in convergence for many classes of minimax problems. One variant of these is known as Popov's scheme, which has the advantage over extragradient of requiring only one gradient evaluation per iteration. We introduce a moving anchor to the previously developed anchored Popov's scheme as a mechanism for further acceleration.
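A generic sketch of the classical Popov scheme (not the talk's moving-anchor variant): for the bilinear saddle-point problem $\min_x \max_y xy$, the operator is $F(x, y) = (y, -x)$. Plain gradient descent-ascent spirals away from the saddle point $(0, 0)$, while Popov's scheme converges while reusing the stored operator value, so only one new evaluation is needed per iteration.

```python
import numpy as np

def F(z):
    # Saddle-point operator of min_x max_y x*y.
    x, y = z
    return np.array([y, -x])

gamma = 0.3
z = np.array([1.0, 1.0])
F_lead_prev = F(z)                    # stored from the previous step
for _ in range(500):
    z_lead = z - gamma * F_lead_prev  # extrapolated (leading) point
    F_lead = F(z_lead)                # the single new operator evaluation
    z = z - gamma * F_lead            # main update
    F_lead_prev = F_lead

# The iterate converges to the saddle point (0, 0).
assert np.linalg.norm(z) < 1e-6
```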
17:40-18:00 Minxin Zhang
Tensor Randomized Kaczmarz Methods for Linear Feasibility Problems
The randomized Kaczmarz methods are a popular and effective family of iterative methods for solving large-scale linear systems of equations, which have also been applied to linear feasibility problems. In this work, we extend these methods to solve tensor linear feasibility problems defined under the tensor t-product. A tensor randomized Kaczmarz (TRK) method, TRK-L, is proposed for solving linear feasibility problems that involve mixed equality and inequality constraints. Additionally, we introduce another TRK method, TRK-LB, specifically tailored for cases where the feasible region is defined by general linear constraints coupled with bound constraints on the variables. The effectiveness of our methods is demonstrated through numerical experiments on Gaussian random data and through applications in image deblurring.
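As background, a sketch of the classical (matrix) randomized Kaczmarz projection method for a linear feasibility problem $Ax \le b$, in the style of Leventhal and Lewis; this is the starting point the tensor methods generalize, not the t-product algorithm itself. Each step samples a row and, if that inequality is violated, projects the iterate onto its halfspace.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 10
A = rng.standard_normal((m, n))
x_feas = rng.standard_normal(n)
b = A @ x_feas + 0.2        # x_feas is strictly feasible by construction

x = np.zeros(n)
row_norms = np.sum(A**2, axis=1)
for _ in range(20000):
    i = rng.integers(m)
    residual = A[i] @ x - b[i]
    if residual > 0:        # row i violated: project onto {y : A[i] @ y <= b[i]}
        x -= (residual / row_norms[i]) * A[i]

# The iterate converges to the feasible region.
assert np.all(A @ x <= b + 1e-8)
```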
18:00-18:20 Zeyi Xu
Adaptive Accelerated Gradient Descent Methods for Convex Optimization
We propose a novel adaptive accelerated gradient descent (A$^2$GD) method for convex optimization. Built upon an accumulated Lyapunov framework, the method triggers line search only when the accumulated error exceeds a predefined threshold. By combining adaptivity with acceleration, A$^2$GD outperforms existing adaptive and accelerated first-order methods. We prove an accelerated linear convergence rate via a carefully constructed Lyapunov function. Importantly, A$^2$GD requires no manual hyperparameter tuning, making it broadly applicable in practice.
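As background, a sketch of the classical Nesterov acceleration that such methods build upon (not the A$^2$GD method itself): on an ill-conditioned strongly convex quadratic, the accelerated iteration with constant momentum converges far faster than plain gradient descent with the same step size.

```python
import numpy as np

Q = np.diag([1.0, 100.0])    # condition number kappa = 100
grad = lambda x: Q @ x
L, mu = 100.0, 1.0           # smoothness and strong convexity constants
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))

def run(accelerated, iters=300):
    # Minimize f(x) = x^T Q x / 2 from x0 = (1, 1); return final distance to 0.
    x = np.array([1.0, 1.0])
    x_prev = x.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev) if accelerated else x  # momentum step
        x_prev, x = x, y - grad(y) / L                     # gradient step
    return np.linalg.norm(x)

# Acceleration reaches far higher accuracy in the same number of steps.
assert run(True) < 1e-8 < run(False)
```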
Poster Awards and Closing Remarks
Student Success Center 229
Chair: Yat Tin Chow (UCR)
18:20-18:30 Poster Awards and Closing Remarks