Key Takeaways
- DALL-E 1 was trained on 250 million image-text pairs scraped from the internet
- DALL-E 2 is a diffusion model that conditions generation on CLIP embeddings (the "unCLIP" architecture), trained on a filtered internet-scale image-text dataset
- DALL-E 3 was trained on highly descriptive synthetic captions generated by a custom image captioner for improved prompt adherence
- DALL-E 1 model has 12 billion parameters in total
- DALL-E 2's diffusion decoder, based on GLIDE, has 3.5 billion parameters
- DALL-E 3 uses a 128x128 to 1024x1024 upscaling decoder with 1 billion parameters
- DALL-E 3 achieves 92% prompt adherence on Evals benchmark
- DALL-E 2 scores 2.0 on 0-4 human preference scale vs DALL-E 1's 1.7
- DALL-E 1 achieves 72.3% nearest neighbor accuracy on retrieval tasks
- DALL-E 3 integrated in ChatGPT Plus with 50 generations/week limit
- DALL-E 2 generated over 2 million images daily at peak in 2022
- Over 1.5 million users accessed DALL-E via ChatGPT by Q1 2024
- DALL-E 3 safety filters block 86% of violent prompts
- C2PA metadata embedded in 100% of DALL-E 3 outputs
- DALL-E 2 rejected 1.5% of generation attempts for policy violations
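The two-stage "prior + decoder" design behind the DALL-E 2 bullets above can be sketched in code. This is a toy illustration, not the real system: `clip_text_embed`, `prior`, and `diffusion_decoder` are hypothetical stubs standing in for the actual CLIP text encoder, the diffusion prior, and the 3.5-billion-parameter decoder, and the embedding width and output size are illustrative choices.

```python
import numpy as np

EMBED_DIM = 512  # illustrative CLIP-style embedding width (assumption)

def clip_text_embed(prompt: str) -> np.ndarray:
    """Stub for the frozen CLIP text encoder: prompt -> unit vector."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

def prior(text_embed: np.ndarray) -> np.ndarray:
    """Stub for the diffusion prior: text embedding -> image embedding."""
    v = np.roll(text_embed, 1)  # placeholder transformation, not a real prior
    return v / np.linalg.norm(v)

def diffusion_decoder(image_embed: np.ndarray, size: int = 64) -> np.ndarray:
    """Stub for the diffusion decoder: image embedding -> RGB pixel array."""
    rng = np.random.default_rng(int(abs(image_embed[0]) * 1e6))
    return rng.random((size, size, 3))

def generate(prompt: str) -> np.ndarray:
    """Full two-stage pipeline: encode text, run the prior, then decode."""
    return diffusion_decoder(prior(clip_text_embed(prompt)))

img = generate("an armchair in the shape of an avocado")
print(img.shape)  # (64, 64, 3)
```

The point of the sketch is the data flow: the prior and decoder are trained separately, and the decoder never sees the raw text, only the predicted CLIP image embedding.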
This report compiles DALL-E statistics across model architecture, performance, safety and moderation, training data, and user engagement.
Contents
- Model Parameters and Architecture
- Performance Metrics
- Safety and Moderation
- Training and Data
- User Engagement and Usage
How We Rate Confidence
Every statistic is queried across four AI models (ChatGPT, Claude, Gemini, Perplexity). The confidence rating reflects how many models return a consistent figure for that data point.
Low confidence (AI consensus: 1 of 4 models agree): Only one AI model returns this statistic from its training data. The figure comes from a single primary source and has not been corroborated by independent systems. Use with caution and cross-reference before citing.
Medium confidence (AI consensus: 2–3 of 4 models broadly agree): Multiple AI models cite this figure, or figures pointing in the same direction, with minor variance. The trend and magnitude are reliable, though the precise decimal may differ by source. Suitable for directional analysis.
High confidence (AI consensus: 4 of 4 models fully agree): All four AI models independently return the same statistic, unprompted. This level of cross-model agreement indicates the figure is robustly established in published literature and suitable for citation.
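The rating procedure above can be sketched as a small consensus check. The 5% relative tolerance and the `agrees` helper are assumptions for illustration; the actual matching rules used to decide that two models "return a consistent figure" are not specified in this report.

```python
def consensus_tier(figures, tolerance=0.05):
    """Classify a statistic by how many of the four model answers agree.

    figures: one numeric answer per AI model, or None if that model
    returned nothing. Two answers "agree" if their relative difference
    is within `tolerance` (an illustrative threshold).
    """
    values = [f for f in figures if f is not None]
    if not values:
        return "No data"

    def agrees(a, b):
        if a == b:
            return True
        denom = max(abs(a), abs(b))
        return denom > 0 and abs(a - b) / denom <= tolerance

    # size of the largest cluster of mutually consistent answers
    largest = max(sum(agrees(ref, v) for v in values) for ref in values)
    if largest >= 4:
        return "High confidence (4 of 4 models fully agree)"
    if largest >= 2:
        return "Medium confidence (2-3 of 4 models broadly agree)"
    return "Low confidence (1 of 4 models agree)"
```

For example, four answers of 12 billion would rate High, answers of 3.5B and 3.4B alongside one outlier would rate Medium, and a figure returned by a single model would rate Low.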
Cite This Report
This report is designed to be cited. We maintain stable URLs and versioned verification dates. Copy the format appropriate for your publication below.
APA: Park, M. (2026, February 24). DALL-E Statistics. Gitnux. https://gitnux.org/dall-e-statistics
MLA: Park, Min-ji. "DALL-E Statistics." Gitnux, 24 Feb. 2026, https://gitnux.org/dall-e-statistics.
Chicago: Park, Min-ji. 2026. "DALL-E Statistics." Gitnux. https://gitnux.org/dall-e-statistics.
Sources & References
1. arxiv.org
2. openai.com
3. laion.ai
4. theverge.com
5. platform.openai.com
6. techcrunch.com
7. businessinsider.com
8. blogs.bing.com
9. designer.microsoft.com
10. help.openai.com
11. socialmediatoday.com