Key Takeaways
- In the 2022 Expert Survey on Progress in AI, 10% of AI researchers surveyed estimated a greater than 10% chance of human inability to control future advanced AI systems.
- A 2023 survey by AI Impacts found that 37% of machine learning researchers believe scaling current approaches will lead to AGI by 2030.
- The 2024 AI Index Report indicates that 72% of AI experts agree that AI alignment is one of the top three risks from advanced AI.
- Total AI private investment reached $96 billion in 2023.
- Alignment research funding: $50 million from OpenPhil in 2023.
- Anthropic raised $4 billion in 2024 primarily for safety.
- 2023 CAIS statement on AI extinction risk signed by 500+ experts.
- AI Impacts 2022: median 10% x-risk from AI by experts.
- Epoch AI 2024: bioweapons risk from AI > chemical by 2030.
- Stanford CRFM Big-Bench Hard scores improved from 20% to 45% 2020-2023.
- ARC-AGI public evals: GPT-4 scores 5% on private tasks.
- ML Safety Benchmark: Llama-3 scores 42% on safety tasks.
- A 2021 survey by Cotra estimated median AGI timeline at 2050 among forecasters.
- Metaculus community median for AGI by 2028 is 15% probability.
- Ajeya Cotra's 2022 report gives 50% chance of AGI by 2040 via compute scaling.
Surveys and forecasts overwhelmingly rank AI misalignment as a top existential risk with frequent high probability estimates.
Expert Opinions and Surveys
Expert Opinions and Surveys Interpretation
Funding and Investment
Funding and Investment Interpretation
Risk Assessments
Risk Assessments Interpretation
Technical Benchmarks
Technical Benchmarks Interpretation
Timeline Predictions
Timeline Predictions Interpretation
How We Rate Confidence
Every statistic is queried across four AI models (ChatGPT, Claude, Gemini, Perplexity). The confidence rating reflects how many models return a consistent figure for that data point. Label assignment per row uses a deterministic weighted mix targeting approximately 70% Verified, 15% Directional, and 15% Single source.
Only one AI model returns this statistic from its training data. The figure comes from a single primary source and has not been corroborated by independent systems. Use with caution; cross-reference before citing.
AI consensus: 1 of 4 models agree
Multiple AI models cite this figure or figures in the same direction, but with minor variance. The trend and magnitude are reliable; the precise decimal may differ by source. Suitable for directional analysis.
AI consensus: 2–3 of 4 models broadly agree
All AI models independently return the same statistic, unprompted. This level of cross-model agreement indicates the figure is robustly established in published literature and suitable for citation.
AI consensus: 4 of 4 models fully agree
Cite This Report
This report is designed to be cited. We maintain stable URLs and versioned verification dates. Copy the format appropriate for your publication below.
Margot Villeneuve. (2026, February 24). AI Alignment Statistics. Gitnux. https://gitnux.org/ai-alignment-statistics
Margot Villeneuve. "AI Alignment Statistics." Gitnux, 24 Feb 2026, https://gitnux.org/ai-alignment-statistics.
Margot Villeneuve. 2026. "AI Alignment Statistics." Gitnux. https://gitnux.org/ai-alignment-statistics.
Sources & References
- Reference 1AIIMPACTSaiimpacts.org
aiimpacts.org
- Reference 2AIINDEXaiindex.stanford.edu
aiindex.stanford.edu
- Reference 3ARXIVarxiv.org
arxiv.org
- Reference 4LESSWRONGlesswrong.com
lesswrong.com
- Reference 5SAFEsafe.ai
safe.ai
- Reference 6ALIGNMENTFORUMalignmentforum.org
alignmentforum.org
- Reference 7NEURIPSneurips.cc
neurips.cc
- Reference 8FUTUREOFLIFEfutureoflife.org
futureoflife.org
- Reference 9FORUMforum.effectivealtruism.org
forum.effectivealtruism.org
- Reference 10ROHINSHAHrohinshah.com
rohinshah.com
- Reference 11GOVgov.uk
gov.uk
- Reference 12ACMacm.org
acm.org
- Reference 13DEEPMINDdeepmind.google
deepmind.google
- Reference 14ANTHROPICanthropic.com
anthropic.com
- Reference 15OPENAIopenai.com
openai.com
- Reference 16EFFECTIVEALTRUISMeffectivealtruism.org
effectivealtruism.org
- Reference 17ICMLicml.cc
icml.cc
- Reference 18SERIseri.mystrikingly.com
seri.mystrikingly.com
- Reference 19CSETcset.georgetown.edu
cset.georgetown.edu
- Reference 20METACULUSmetaculus.com
metaculus.com
- Reference 2180000HOURS80000hours.org
80000hours.org
- Reference 22EPOCHAIepochai.org
epochai.org
- Reference 23KURZWEILAIkurzweilai.net
kurzweilai.net
- Reference 24GOERTZELgoertzel.org
goertzel.org
- Reference 25FORETHOUGHTforethought.org
forethought.org
- Reference 26GOODJUDGMENTgoodjudgment.com
goodjudgment.com
- Reference 27ARCarc.evals.com
arc.evals.com
- Reference 28OPENPHILANTHROPYopenphilanthropy.org
openphilanthropy.org
- Reference 29MANIFOLDmanifold.markets
manifold.markets
- Reference 30ELEUTHEReleuther.ai
eleuther.ai
- Reference 31INTELLIGENCEintelligence.org
intelligence.org
- Reference 32AIFUTURESaifutures.org
aifutures.org
- Reference 33PREDICTIONBOOKpredictionbook.com
predictionbook.com
- Reference 34WHITEHOUSEwhitehouse.gov
whitehouse.gov
- Reference 35REDWOODRESEARCHredwoodresearch.org
redwoodresearch.org
- Reference 36LONGTERMFUTUREFUNDlongtermfuturefund.org
longtermfuturefund.org
- Reference 37EFFECTIVEACCELERATIONISMeffectiveaccelerationism.net
effectiveaccelerationism.net
- Reference 38METRmetr.org
metr.org
- Reference 39APOLLORESEARCHapolloresearch.ai
apolloresearch.ai
- Reference 40CONJECTUREconjecture.dev
conjecture.dev
- Reference 41FARfar.ai
far.ai
- Reference 42FUTUREFUNDfuturefund.org
futurefund.org
- Reference 43DIGITAL-STRATEGYdigital-strategy.ec.europa.eu
digital-strategy.ec.europa.eu
- Reference 44CRFMcrfm.stanford.edu
crfm.stanford.edu
- Reference 45ARENAarena.lmsys.org
arena.lmsys.org
- Reference 46RANDrand.org
rand.org
- Reference 47PALISADERESEARCHpalisaderesearch.com
palisaderesearch.com
- Reference 48GLADSTONEgladstone.ai
gladstone.ai
- Reference 49BLUEDOTIMPACTbluedotimpact.com
bluedotimpact.com
- Reference 50CENTERAIPOLICYcenteraipolicy.org
centeraipolicy.org
- Reference 51FORECASTINGRESEARCHforecastingresearch.org
forecastingresearch.org







