I like to train deep neural nets on large datasets
Slovak-Canadian computer scientist and AI researcher specializing in deep learning and computer vision. Founding member of OpenAI, former Senior Director of AI at Tesla, and founder of Eureka Labs. Known for making AI education accessible through YouTube lectures and the influential Stanford CS231n course.
Founded Eureka Labs, an education company focused on teaching AI and large language models. Produces technical and general-audience content on YouTube.
Co-founded OpenAI, contributed to advances in deep learning and computer vision. Returned in 2023 to lead a small team improving GPT-4 for ChatGPT.
Led the computer-vision team behind Tesla Autopilot and Full Self-Driving (FSD). Oversaw data-labeling pipelines, neural-network training, and deployment to millions of cars.
Focused on convolutional and recurrent neural networks applied to vision and language, advised by Fei-Fei Li. Designed and taught CS231n.
Worked on controllers for physically-simulated figures with Michiel van de Panne.
Double major in Computer Science and Physics, minor in Mathematics. Introduced to deep learning in Geoff Hinton's lab.
A tiny scalar-valued autograd engine with a PyTorch-like API.
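The idea behind such an engine can be sketched in a few dozen lines: each scalar remembers the operation that produced it, and gradients are backpropagated through the resulting graph with the chain rule. The names and structure below are illustrative, in the spirit of micrograd rather than its actual source:

```python
# Minimal sketch of a scalar-valued autograd engine (micrograd-style).
# The Value class and its methods here are illustrative assumptions,
# not the real micrograd API.

class Value:
    """A scalar that records the operations producing it, so gradients
    can be backpropagated with the chain rule."""

    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # accumulates grads into parents

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# PyTorch-like usage: build an expression, call .backward(), read .grad.
a = Value(2.0)
b = Value(-3.0)
c = a * b + a      # c = ab + a
c.backward()
print(a.grad)      # dc/da = b + 1 = -2.0
print(b.grad)      # dc/db = a = 2.0
```

The two pieces that make this work are operator overloading (so arbitrary expressions build a graph implicitly) and the reverse topological traversal in `backward()`, which guarantees each node's gradient is complete before it is pushed to its parents.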
Early character-level language model built with LSTMs/GRUs in Torch.
One of the first image-captioning systems, later extended to dense captioning.
Deep-learning library written entirely in JavaScript for browser demos.
Stanford's first deep-learning class on CNNs for visual recognition.
Helps researchers discover relevant ArXiv papers and receive recommendations.
Creator of educational AI material on YouTube, including the "Neural Networks: Zero to Hero" series and lectures on large language models. Designed and taught Stanford's CS231n, followed worldwide by tens of thousands of learners.