Tensor networks (TNs) are prominent model classes for the approximation of high-dimensional functions in computational and data science. Tree-based TNs, also known as tree-based tensor formats, can be seen as particular feed-forward neural networks. After an introduction to approximation tools based on tree tensor networks, we introduce their approximation classes and present some recent results on their properties. In particular, we show that classical smoothness (Besov) spaces are continuously embedded in TN approximation classes. For such spaces, TNs achieve (near-)optimal rates, the rates usually achieved by classical approximation tools, but without requiring the tool to be adapted to the regularity of the function. The use of deep networks with unconstrained depth is shown to be essential for obtaining this property. It is also shown that exploiting the sparsity of tensors allows one to obtain the optimal rates achieved by classical nonlinear approximation tools, or to better exploit structured smoothness (anisotropic or mixed) for multivariate approximation. We further show that the approximation classes of tensor networks are not contained in any Besov space, unless one restricts the depth of the tensor network. This again reveals the importance of depth and the potential of tensor networks to achieve approximation or learning tasks for functions beyond standard regularity classes.
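To make the notion of a tree tensor network concrete, the following is a minimal sketch of evaluating a function represented in the tensor-train format (a tree-based tensor format with a linear tree). All names, dimensions, and the polynomial feature map are illustrative assumptions, not taken from the references above.

```python
import numpy as np

# A function f(x_1, ..., x_d) in tensor-train format is parameterised by
# d "cores"; its evaluation is a sequence of small matrix products,
# which is why the format can be viewed as a feed-forward network.

d, n, r = 4, 3, 2  # number of variables, feature-map size, TT-rank (assumed)
rng = np.random.default_rng(0)

# Cores: first of shape (1, n, r), middle of shape (r, n, r), last (r, n, 1).
cores = [rng.standard_normal((1, n, r))] \
      + [rng.standard_normal((r, n, r)) for _ in range(d - 2)] \
      + [rng.standard_normal((r, n, 1))]

def phi(x):
    # Univariate feature map (here a polynomial basis 1, x, x^2).
    return np.array([1.0, x, x ** 2])

def tt_eval(cores, x):
    # Contract the cores from left to right; v stays a small matrix.
    v = np.ones((1, 1))
    for core, xk in zip(cores, x):
        v = v @ np.einsum('inj,n->ij', core, phi(xk))
    return float(v[0, 0])

value = tt_eval(cores, [0.1, 0.2, 0.3, 0.4])
```

The cost of one evaluation is O(d n r^2), linear in the number of variables, which is the key point of tree-based formats for high-dimensional approximation.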
 M. Ali and A. Nouy. Approximation with Tensor Networks. Part I: Approximation Spaces. arXiv:2007.00118.
 M. Ali and A. Nouy. Approximation with Tensor Networks. Part II: Approximation Rates for Smoothness Classes. arXiv:2007.00128.
 M. Ali and A. Nouy. Approximation with Tensor Networks. Part III: Multivariate Approximation.
 B. Michel and A. Nouy. Learning with tree tensor networks: complexity estimates and model selection. arXiv:2007.01165.
Organiser: David Cohen (email@example.com). Please contact me if you need the Zoom password.
Room MVL:14 and Zoom