Tag: Spivak

  • Learners and Poly

Brendan Fong, David Spivak and Remy Tuyeras cooked up a vast generalisation of neural networks in their paper Backprop as Functor: A compositional perspective on supervised learning. Here’s a nice introduction to neural networks for category theorists by Bruno Gavranovic. At the 1.49m mark he tries to explain supervised learning with neural networks in a single slide. Learners […]

  • Poly

Following up on the ‘Deep learning and toposes’ post, I was planning to do something on the logic of neural networks. While prepping for this, I came across David Spivak’s paper Learner’s Languages doing exactly that, but in the more general setting of ‘learners’ (see also the deep learning post). And then … I fell under the spell…

  • Deep learning and toposes

Judging from this and that paper, deep learning is the string theory of the 2020s for geometers and representation theorists. “String theory is the 90s answer to the tears of algebraic geometers worldwide trying to write the ‘Applications’ part of their grant proposals” (https://t.co/AboZ5WkPtc — algebraic geometer BLM, @BarbaraFantechi, December 17, 2021). If you want…

  • Children have always loved colimits

    If Chad Orzel is able to teach quantum theory to his dog, surely it must be possible to explain schemes, stacks, toposes and motives to hipsters? Perhaps an idea for a series of posts? It’s early days yet. So far, I’ve only added the tag sga4hipsters (pun intended) and googled around for ‘real-life’ applications of…
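Several of the posts above revolve around the Fong–Spivak–Tuyeras notion of a ‘learner’: a parameter space P together with an implement map I : P × A → B, an update map U : P × A × B → P, and a request map r : P × A × B → A (the request map is what lets learners compose, by passing errors backwards). Here is a minimal sketch of that structure, instantiated — purely as an illustration, not as anything from the papers themselves — by a one-parameter linear model trained with gradient descent; the learning rate and training data are hypothetical choices:

```python
# A learner in the sense of "Backprop as Functor": a parameter space P
# with three maps (implement, update, request). Sketch only: we take
# P = A = B = the real numbers, and the model b = p * a.

LR = 0.1  # learning rate (hypothetical choice)

def implement(p, a):
    """I : P x A -> B, the learner's prediction on input a."""
    return p * a

def update(p, a, b):
    """U : P x A x B -> P, nudge p down the gradient of 0.5*(p*a - b)^2."""
    grad_p = (implement(p, a) - b) * a
    return p - LR * grad_p

def request(p, a, b):
    """r : P x A x B -> A, the input the learner 'wishes' it had seen.
    Composing learners feeds this back as the target for the previous one."""
    grad_a = (implement(p, a) - b) * p
    return a - LR * grad_a

# Train on pairs (a, b) sampled from the relation b = 2a:
p = 0.0
for a, b in [(1, 2), (2, 4), (3, 6)] * 20:
    p = update(p, a, b)
# p has converged (numerically) to the slope 2.
```

The point of the categorical formulation is that `(implement, update, request)` triples compose like functions do, so a whole network is just a composite of such learners.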