16 September

Frequency Bias in Deep Learning

2:00 PM
Online via Zoom

Prof. David Jacobs, Department of Computer Science and UMIACS, University of Maryland

 

Recent results have shown that highly overparameterized deep neural networks act as linear systems. Fully connected networks are equivalent to kernel methods with a neural tangent kernel (NTK). This talk will describe our work on better understanding the properties of this kernel. We study the eigenvalues and eigenvectors of the NTK and quantify a frequency bias in neural networks that causes them to learn low-frequency functions more quickly than high-frequency ones. In fact, these eigenvectors and eigenvalues are the same as those of the well-known Laplace kernel, implying that the two kernels interpolate functions with the same smoothness properties. On a large number of datasets, we show that kernel-based classification with the NTK and with the Laplace kernel performs quite similarly.
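The frequency bias described in the abstract can be illustrated numerically. The sketch below is my own toy example, not the speaker's code: in the kernel (linearized) regime, gradient descent shrinks the residual along each kernel eigendirection at a rate proportional to its eigenvalue, and for the Laplace kernel the low-frequency eigendirections have the largest eigenvalues, so a low-frequency target is fit much faster than a high-frequency one.

```python
import numpy as np

# Toy illustration of frequency bias in kernel gradient descent
# (my own sketch; kernel width and learning rate are arbitrary choices).
n = 128
x = np.linspace(0, 1, n)
# Laplace kernel Gram matrix K(x, x') = exp(-|x - x'| / sigma)
K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

def residual_after(y, steps=200, lr=0.5):
    """Relative error left after `steps` of gradient descent on ||K a - y||."""
    # In the kernel regime the function-space residual evolves as
    # r <- (I - lr * K / n) r; iterate that linear map directly.
    r = y.copy()
    M = np.eye(n) - lr * K / n  # divide by n so the step is stable
    for _ in range(steps):
        r = M @ r
    return np.linalg.norm(r) / np.linalg.norm(y)

low = residual_after(np.sin(2 * np.pi * x))        # 1 oscillation
high = residual_after(np.sin(2 * np.pi * 16 * x))  # 16 oscillations
# After the same number of steps, the low-frequency target has a much
# smaller remaining residual than the high-frequency one.
```

The gap between `low` and `high` is exactly the frequency bias the talk quantifies: the high-frequency sinusoid lies along eigendirections with small eigenvalues and is therefore learned far more slowly.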

30 September

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics

2:00 PM
Online via Zoom

Prof. Furong Huang, Department of Computer Science, University of Maryland

 

Poisoning attacks, although studied extensively in supervised learning, are not well understood in reinforcement learning (RL), especially deep RL. Prior works on poisoning RL usually either assume the attacker knows the underlying Markov Decision Process (MDP) or directly apply poisoning methods from supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous types and victims of poisoning attacks in RL, accounting for challenges unique to RL, such as data no longer being i.i.d. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL agents and uses a novel metric, the stability radius in RL, that measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy with a limited attacking budget. Our experimental results demonstrate the varying vulnerabilities of different deep RL agents in multiple environments, benefiting the understanding and application of deep RL under security threats.
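To make the setting concrete, here is a generic sketch of budgeted online reward poisoning. This is my own illustration of the attack model, not the VA2C-P algorithm: the attacker observes each transition in the online stream, perturbs the reward only on steps it judges vulnerable, and spends from a fixed budget. The class, its parameters, and the vulnerability test are all hypothetical names.

```python
import numpy as np

# Generic budgeted reward-poisoning sketch (NOT VA2C-P; names are mine).
class RewardPoisoner:
    def __init__(self, budget, strength=1.0):
        self.budget = budget      # max number of transitions to poison
        self.strength = strength  # magnitude of the reward perturbation

    def poison(self, reward, vulnerable):
        """Perturb the reward only when this step is judged vulnerable
        and budget remains; otherwise pass it through unchanged."""
        if vulnerable and self.budget > 0:
            self.budget -= 1
            return reward - self.strength * np.sign(reward)
        return reward

rng = np.random.default_rng(0)
attacker = RewardPoisoner(budget=10)
rewards = rng.uniform(-1, 1, size=50)  # stand-in for an online stream
# Hypothetical vulnerability rule: attack only high-magnitude rewards.
poisoned = [attacker.poison(r, vulnerable=abs(r) > 0.8) for r in rewards]
```

The point of a vulnerability-aware attack is exactly the `vulnerable` flag: rather than poisoning uniformly, the attacker concentrates its limited budget on the transitions where the learner is most sensitive.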

14 October

Solutions to two conjectures in branched transport: stability and regularity of optimal paths

2:00 PM
Online via Zoom

Prof. Antonio De Rosa, Department of Mathematics, University of Maryland

 

Transport models involving branched structures are employed to describe several biological, natural and supply-demand systems. The transportation cost in these models is proportional to a concave power of the intensity of the flow.
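The cost alluded to here is, in the standard Gilbert–Steiner formulation of branched transport (notation mine, not taken from the talk): for a flow concentrated on a 1-rectifiable set $\Sigma$ carrying multiplicity (intensity) $\theta$, the energy is

```latex
E_\alpha(T) \;=\; \int_{\Sigma} \theta(x)^{\alpha} \, d\mathcal{H}^{1}(x),
\qquad 0 < \alpha < 1 .
```

Because $\theta \mapsto \theta^{\alpha}$ is concave and subadditive, moving mass along a shared branch is cheaper than along separate routes, which is what produces the tree-like optimal structures.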

In this talk, we focus on the stability of optimal transports with respect to variations of the source and target measures. Stability was previously known to hold only in special regimes (supercritical concave powers degenerating with the dimension). We prove that stability holds for every lower semicontinuous cost functional that is continuous at 0, and we provide counterexamples when these assumptions are not satisfied. This completely resolves a conjecture of Bernot, Caselles, and Morel.

To conclude, we prove stability for the mailing problem as well. This question was completely open in the literature, and its resolution allows us to obtain the regularity of optimal networks.

18 November

TBA

Online via Zoom

Prof. Amir Sagiv, Applied Mathematics, Columbia University

02 December

TBA

2:00 PM
Online via Zoom

Prof. Behtash Babadi, Department of Electrical and Computer Engineering, University of Maryland