
Comments


furrypony

hopelessly sad filly
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of a single large MLP. In our setting, each individual MLP only needs to represent parts of the scene; thus, smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
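
The divide-and-conquer idea is easy to sketch: split the scene's bounding volume into a uniform grid and route each sample point to the tiny MLP that owns its cell. Below is a minimal PyTorch sketch of that routing, assuming points normalized to the unit cube; the grid resolution, layer widths, and class names (TinyMLP, KiloNeRFSketch) are illustrative assumptions, not the paper's actual configuration, and the per-cell Python loop stands in for the fused batched evaluation a real implementation would use.

```python
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    """A small per-cell network; hypothetical sizes, not the paper's config."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, x):
        return self.net(x)

class KiloNeRFSketch(nn.Module):
    """Partition the unit cube into res^3 cells, one tiny MLP per cell."""
    def __init__(self, res=4):
        super().__init__()
        self.res = res
        self.mlps = nn.ModuleList(TinyMLP() for _ in range(res ** 3))

    def forward(self, points):  # points: (N, 3) in [0, 1)^3
        # Map each point to the flat index of the grid cell containing it.
        cell = (points * self.res).long().clamp(0, self.res - 1)
        idx = (cell[:, 0] * self.res + cell[:, 1]) * self.res + cell[:, 2]
        out = torch.empty(points.shape[0], 4, device=points.device)
        # A real implementation evaluates all cells in one fused kernel;
        # this loop just makes the routing explicit.
        for i in idx.unique().tolist():
            mask = idx == i
            out[mask] = self.mlps[i](points[mask])
        return out

# Toy usage: query 1024 random points in the unit cube.
rgb_sigma = KiloNeRFSketch(res=4)(torch.rand(1024, 3))  # (1024, 4)
```

Because each tiny MLP only ever sees points from its own cell, it can be far smaller than one network covering the whole scene, which is where the speed-up comes from.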
furrypony

hopelessly sad filly
Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them is necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well-established CNNs and Transformers.
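
The two mixing steps are simple enough to write down directly: the token-mixing MLP runs across the patch axis (after a transpose) and the channel-mixing MLP runs per patch, each preceded by LayerNorm and wrapped in a skip connection. Here is a minimal PyTorch sketch of one Mixer layer; the hidden sizes and class names are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    """Two-layer MLP with GELU, applied over the last dimension."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(nn.functional.gelu(self.fc1(x)))

class MixerBlock(nn.Module):
    """One Mixer layer: token mixing across patches, then channel mixing per patch."""
    def __init__(self, num_patches, channels, token_hidden=64, channel_hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mix = MlpBlock(num_patches, token_hidden)
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mix = MlpBlock(channels, channel_hidden)

    def forward(self, x):  # x: (batch, num_patches, channels)
        # Token mixing: transpose so the MLP mixes across the patch axis.
        y = self.norm1(x).transpose(1, 2)          # (batch, channels, patches)
        x = x + self.token_mix(y).transpose(1, 2)  # skip connection
        # Channel mixing: an ordinary MLP applied to each patch independently.
        return x + self.channel_mix(self.norm2(x))

# Toy usage: 196 patches (a 14x14 grid) with 512 channels, hypothetical sizes.
x = torch.randn(8, 196, 512)
y = MixerBlock(num_patches=196, channels=512)(x)  # same shape as x
```

The transpose is the whole trick: it lets a plain fully-connected layer exchange information between spatial locations, the role convolution or self-attention plays in other architectures.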