Anders Hansen: On the foundations of computational mathematics, Smale’s 18th problem and the potential limits of AI

Deep learning yields universally unstable neural networks, even though one can prove (mathematically) the existence of stable neural networks. Why do modern algorithms fail to capture the good, stable networks?

There is profound optimism about the impact of deep learning (DL) and AI in the sciences, with Geoffrey Hinton concluding that 'They should stop training radiologists now'. However, DL has an Achilles heel: it is universally unstable, so that small changes in the initial data can lead to large errors in the final result. This has been documented in a wide variety of applications. Paradoxically, the existence of stable neural networks for these applications is guaranteed by the celebrated Universal Approximation Theorem; however, the stable neural networks are not computed by the current training approaches. We will address this problem and the potential limitations of AI from a foundations point of view. Indeed, the current situation in AI is comparable to the situation in mathematics in the early 20th century, when David Hilbert's optimism (typically reflected in his 10th problem) suggested no limitations on what mathematics could prove and no restrictions on what computers could compute. Hilbert's optimism was turned upside down by Gödel and Turing, who established limitations on what mathematics can prove and which problems computers can solve (without, however, limiting the impact of mathematics and computer science).
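To make the notion of instability concrete, here is a minimal toy sketch (my own illustration, not the construction from the talk): for a linear classifier f(x) = sign(w·x), the smallest input perturbation that flips the decision has norm |w·x|/‖w‖, which for generic high-dimensional inputs is only a small fraction of ‖x‖.

```python
import numpy as np

# Toy illustration of instability (a sketch, not the talk's construction):
# for a linear classifier f(x) = sign(w @ x), the minimal-norm perturbation
# that flips the decision is proportional to w, and its size |w @ x| / ||w||
# is a small fraction of ||x|| for generic high-dimensional inputs.
rng = np.random.default_rng(0)
d = 10_000
w = 10.0 * rng.normal(size=d)      # hypothetical "trained" weights
x = rng.normal(size=d)             # a generic input

score = w @ x
# Minimal-norm perturbation pushing the score just past zero (margin 1):
delta = -(score + np.sign(score)) * w / (w @ w)
x_adv = x + delta

print("clean decision:    ", np.sign(score))
print("perturbed decision:", np.sign(w @ x_adv))   # sign is flipped
print("relative size of perturbation:",
      np.linalg.norm(delta) / np.linalg.norm(x))
```

The same mechanism underlies adversarial examples for deep networks, where the flipping perturbation is found by gradient methods rather than in closed form.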

   We predict a similar outcome for modern AI and DL, where the limitations of AI (the main topic of Smale's 18th problem) will be established through the foundations of computational mathematics. We sketch the beginning of such a program by demonstrating that there exist neural networks approximating classical mappings in scientific computing for which no algorithm (even a randomised one) can compute such a network to even 1-digit accuracy with probability better than 1/2. We will also show that instability is inherent in the methodology of DL, demonstrating that there is no easy remedy given the current methodology.

Reading assignment

You probably won't get through everything, but these four are the most important: 

https://spectrum.ieee.org/ai-failures

https://spectrum.ieee.org/deep-neural-network

https://spectrum.ieee.org/artificial-intelligence-2021

https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/

Then, if you like, you can read further from this list: 

 
2. Non-robustness in AI becomes politics (very important for the discussion on regulating AI):
 
3. A happy mix of newspaper/magazine articles and journal papers:
 
4. And a little treat for hardcore maths enthusiasts:
The first three pages are the important ones. This is some of the deepest and most important material in the foundations of mathematics: the link between what one can prove and what one can compute on a computer.  
 
Published Oct. 17, 2022 1:00 PM - Last modified Jan. 9, 2023 2:44 PM