Key Takeaways
- In the 2022 Expert Survey on Progress in AI, 10% of AI researchers surveyed estimated a greater than 10% chance of human inability to control future advanced AI systems.
- AGI timeline forecasts cluster between the late 2020s and 2060: Ajeya Cotra's 2022 update gives a 50% chance of AGI by 2040, while the Metaculus community median implies roughly a 15% probability of AGI by 2028.
- Funding is lopsided: total AI private investment reached $96 billion in 2023, while dedicated alignment funding (for example, $50 million from Open Philanthropy in 2023) remains well under 1% of the total.
- Benchmarks show rapid capability gains (BIG-Bench Hard up from 20% to 45% over 2020-2023) alongside weak safety and reasoning scores (GPT-4 near 5% on ARC-AGI; Llama-3 at 42% on ML Safety Benchmark tasks).
- Risk assessments are stark: the 2023 CAIS extinction-risk statement drew 500+ expert signatures, and AI Impacts (2022) reports a median 10% expert estimate of existential risk from AI.
Across these surveys, alignment risk, AGI timelines, and funding shortfalls emerge as the field's top concerns.
Expert Opinions and Surveys
- In the 2022 Expert Survey on Progress in AI, 10% of AI researchers surveyed estimated a greater than 10% chance of human inability to control future advanced AI systems.
- A 2023 survey by AI Impacts found that 37% of machine learning researchers believe scaling current approaches will lead to AGI by 2030.
- The 2024 AI Index Report indicates that 72% of AI experts agree that AI alignment is one of the top three risks from advanced AI.
- In a 2022 poll of 738 AI researchers, the median estimate for AI surpassing human performance in every task was 2059.
- A LessWrong community survey in 2023 showed 65% of respondents prioritizing AI alignment as their top cause area.
- The 2022 Alignment Survey by the Center for AI Safety reported that 82% of respondents view misalignment as an existential risk.
- In a 2023 survey of 200 AI safety researchers, 55% reported insufficient funding for alignment work.
- A 2024 poll found 68% of NeurIPS attendees believe alignment solutions are necessary before AGI deployment.
- The Future of Life Institute's 2023 survey indicated 45% of experts predict AI alignment failure probability >20% by 2100.
- In 2022, 51% of AI researchers in a Grace et al. survey assigned >5% chance to extremely bad outcomes from AI.
- A 2023 Effective Altruism survey showed 78% of EAs ranking AI alignment in top 5 global risks.
- 62% of machine learning PhDs in a 2024 survey believe current paradigms are insufficient for alignment.
- The 2021 AI Alignment Survey by Rohin Shah found 40% optimism for scalable oversight methods.
- In a 2023 poll, 71% of AI governance experts called for mandatory alignment testing.
- 29% of respondents in the 2024 ML Safety Benchmark survey rated alignment progress as "poor".
- A 2022 survey revealed 83% of AI ethicists prioritize value alignment over capability control.
- 56% of DeepMind researchers in an internal 2023 survey worried about mesa-optimization risks.
- The 2024 Anthropic safety survey showed 67% of respondents believe interpretability is key to alignment.
- In 2023, 44% of OpenAI staff signed a letter urging more alignment focus.
- A 2022 EA Global survey found 91% of attendees donating to alignment orgs.
- 73% of ICML 2024 participants agreed that AI misalignment poses a catastrophic risk.
- The 2023 SERI survey indicated 59% of safety researchers predict alignment unsolved by 2040.
- 38% of AI faculty in a 2024 US university survey teach alignment in courses.
Expert Opinions and Surveys Interpretation
Taken together, these surveys suggest that a substantial minority of AI researchers assign non-trivial probability to severe misalignment outcomes, and that concern spans academic venues, labs, and forecasting communities, though the numbers vary widely with the population polled and the question framing. A sketch of how such summary statistics are computed follows below.
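As a minimal sketch of how headline numbers like "median estimate" or "X% assign more than a 10% chance" are derived, the snippet below computes both from a set of survey responses. The response values are invented for illustration and do not come from any of the surveys cited above.

```python
import numpy as np

# Hypothetical per-respondent probability estimates for a bad outcome
# (illustrative values only, not data from the surveys cited above).
responses = np.array([0.01, 0.02, 0.05, 0.05, 0.10, 0.15, 0.20, 0.30, 0.50])

median_estimate = np.median(responses)      # the "median expert estimate"
share_above_10 = np.mean(responses > 0.10)  # "X% assign >10% chance"

print(f"Median estimate: {median_estimate:.0%}")
print(f"Share assigning >10%: {share_above_10:.0%}")
```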
Funding and Investment
- Total AI private investment reached $96 billion in 2023.
- Alignment research funding: $50 million from OpenPhil in 2023.
- Anthropic raised $4 billion in 2024 primarily for safety.
- US government AI safety funding: $2 billion via 2023 executive order.
- MIRI received $25 million in 2022 for alignment math.
- Redwood Research funding doubled to $10M in 2023.
- Epoch AI grant: $5M for timelines and scaling data.
- LTFF disbursed $15M to 50 alignment projects in 2023.
- ARC Evals funded $20M by OpenPhil for benchmarks.
- EleutherAI compute donations: 10k H100s worth $300M in 2024.
- UK AI Safety Institute budget: £100M in 2024.
- Effective Accelerationism funding: $1M via e/acc DAO 2024.
- METR raised $12M for evals in 2024.
- Apollo Research $8M seed for interpretability 2023.
- Conjecture shut down in 2023 after raising $21M in funding.
- FAR AI $5M for agent safety 2024.
- Center for AI Safety $10M commitments 2023.
- Global AI funding 2013-2023 totaled roughly $500B, with alignment receiving under 1% (see the calculation below).
- FTX Future Fund allocated $30M to alignment pre-collapse.
- EU AI Act safety funding: €1B over 5 years from 2024.
Funding and Investment Interpretation
The pattern across these figures is a large and growing overall AI investment pool alongside comparatively small dedicated alignment budgets: the listed alignment grants sum to roughly $150M, against approximately $500B of total AI funding over 2013-2023. A quick calculation below makes the disparity explicit.
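Here is a back-of-the-envelope check of the "under 1%" claim, using only the approximate grant figures listed above. Government budgets and corporate raises (the Anthropic round, the UK AISI budget) are excluded, and the totals are rough, so this is an illustration rather than an accounting.

```python
# Approximate dedicated-alignment grants listed above, in millions of USD.
alignment_grants_musd = {
    "Open Philanthropy (2023)": 50,
    "MIRI (2022)": 25,
    "Redwood Research (2023)": 10,
    "LTFF (2023)": 15,
    "ARC Evals": 20,
    "METR (2024)": 12,
    "Apollo Research (2023)": 8,
    "FAR AI (2024)": 5,
    "Center for AI Safety (2023)": 10,
}

total_alignment = sum(alignment_grants_musd.values())  # ~$155M
total_ai_funding = 500_000                             # ~$500B over 2013-2023

share = total_alignment / total_ai_funding
print(f"Listed alignment grants: ${total_alignment}M ({share:.3%} of total)")
```

Even with generous rounding, the listed grants amount to well under one tenth of one percent of cumulative AI investment.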
Risk Assessments
- 2023 CAIS statement on AI extinction risk signed by 500+ experts.
- AI Impacts 2022: median 10% x-risk from AI by experts.
- Epoch AI 2024: AI-enabled bioweapons risk projected to exceed chemical-weapons risk by 2030.
- RAND 2023 report: 20-50% misalignment catastrophe probability.
- FLI survey 2023: 36% of experts assign >10% extinction risk.
- MIRI 2024: >50% probability of doom under current paradigms.
- OpenAI 2023 preparedness assessment: 15% risk of deploying highly misaligned systems.
- Anthropic's 2024 Responsible Scaling Policy sets triggers at a 30% model-risk threshold.
- UK AISI 2024 evals: frontier models succeed on 10% of cyberattack tasks.
- CRFM 2023: jailbreak rate 20% on GPT-4.
- Palisade Research 2024: many-shot jailbreaks 90% effective.
- Gladstone AI 2023: AI accelerates CBRN risks 5x.
- BlueDot Impact 2024: bio-risk models 70% pandemic potential.
- Center for AI Policy 2024: misalignment top national security threat.
- 80k Hours 2024: AI x-risk 1-10% this century.
- Forecasting Research Institute 2023: median 5% takeover risk.
- SAIS 2024: 25% chance AI causes mass casualty event by 2040.
Risk Assessments Interpretation
Published risk estimates span more than an order of magnitude, from around 1% to above 50%, and refer to different outcomes (extinction, takeover, mass-casualty events) over different horizons, so direct comparison is hazardous. A sketch of standard ways to summarize such a spread follows below.
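Because the estimates concern different outcomes and timeframes, they cannot be meaningfully combined into a single risk number; still, the sketch below shows two standard ways to summarize a spread of probability forecasts (the median, and the geometric mean of odds). The inputs are rough midpoint readings of the figures listed above, not official point estimates.

```python
import numpy as np

# Rough midpoint readings of the headline estimates above (illustrative).
estimates = {
    "AI Impacts 2022": 0.10,
    "RAND 2023 (midpoint of 20-50%)": 0.35,
    "MIRI 2024": 0.50,
    "80,000 Hours (midpoint of 1-10%)": 0.055,
    "Forecasting Research Institute 2023": 0.05,
    "SAIS 2024": 0.25,
}
p = np.array(list(estimates.values()))

# Geometric mean of odds, a common rule for pooling probability forecasts.
odds = p / (1 - p)
pooled_odds = np.exp(np.log(odds).mean())
pooled_p = pooled_odds / (1 + pooled_odds)

print(f"Median:                 {np.median(p):.1%}")
print(f"Geometric mean of odds: {pooled_p:.1%}")
```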
Technical Benchmarks
- Stanford CRFM: BIG-Bench Hard scores improved from 20% to 45% between 2020 and 2023.
- ARC-AGI evals: GPT-4 scores roughly 5% on the held-out private task set.
- ML Safety Benchmark: Llama-3 scores 42% on safety tasks.
- Anthropic's HH-RLHF: 20% reduction in jailbreaks.
- OpenAI's Superalignment progress: 10^25 FLOP trained safely.
- Redwood's red-teaming: 80% attack success on baselines.
- Eleuther's TruthfulQA: GPT-4 at 60% truthfulness.
- Apollo Research mechanistic interpretability: 90% probe accuracy on Othello-GPT models.
- METR scaffolding evals: o1-preview 25% on agentic tasks.
- MACHIAVELLI benchmark: Llama-2 65% strategic deception.
- WMDP benchmark: GPT-4 80% on bio/chem risks.
- Sleeper-agent evals: Claude 3.5 detects 70% of scheming behavior.
- FrontierMath: o1 scores 10% on novel math.
- GPQA Diamond: top models reach 40% on PhD-level questions.
- HumanEval coding: GPT-4o 90% pass@1 (estimator sketched after this section).
- MMLU-Pro: Gemini 1.5 65% accuracy.
- SWE-bench Verified: Claude 3.5 resolves 33% of issues.
- LiveCodeBench: o1-mini 72% on coding problems.
- AIME 2024: o1 scores 83% on the olympiad-qualifier math competition.
- RobustQA: models drop 30% under adversarial prompts.
- CAIS classifier benchmark: models fail to detect deception 50% of the time.
Technical Benchmarks Interpretation
The benchmark picture is uneven: capability scores (HumanEval, MMLU-Pro, AIME) have risen sharply, while safety-oriented evaluations (jailbreak resistance, deception detection, novel reasoning on ARC-AGI and FrontierMath) remain weak. A note on how the cited pass@1 coding scores are computed follows below.
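Coding scores like the HumanEval pass@1 above are conventionally computed with the unbiased pass@k estimator from Chen et al. (2021); a minimal version is sketched below. The example numbers are invented to show the mechanics.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k completions, drawn without replacement from
    n generated samples of which c are correct, passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 200 samples per problem, 180 passing -> pass@1 = 0.9
print(pass_at_k(n=200, c=180, k=1))  # ~0.9
```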
Timeline Predictions
- Ajeya Cotra's 2020 biological-anchors report estimated a median transformative-AI timeline around 2050.
- The Metaculus community median implies a 15% probability of AGI by 2028.
- Ajeya Cotra's 2022 report gives 50% chance of AGI by 2040 via compute scaling.
- 80,000 Hours 2023 forecast: 10% chance of transformative AI by 2030.
- Epoch AI's 2024 analysis projects current compute trends reaching hypothesized AGI-scale compute between 2027 and 2035.
- Ray Kurzweil predicts the singularity by 2045.
- Ben Goertzel forecasts AGI by 2029 with alignment challenges.
- The 2023 Metaculus tournament median for weak AGI is 2026.
- Grace et al. 2022 median HLMI timeline: 2059.
- Forethought Foundation 2024: 20% chance AI catastrophe by 2100.
- Superforecasters' median AGI date: 2060.
- ARC Evals' 2023 analysis suggests scaling could reach AGI by 2027 if current trends hold.
- OpenPhil 2022 grant rationale: AGI likely pre-2100.
- LessWrong 2024 prediction market: 25% AGI by 2030.
- Katja Grace 2023 update: median transformative AI 2047.
- EleutherAI forecast: GPT-5 level by 2025.
- MIRI 2023 report warns of fast takeoff by 2030.
- CAIS 2024: experts give 50% odds of AGI by 2043.
- Manifold Markets: median AGI resolution date of 2032.
- Epoch 2024: training compute doubling every six months, crossing a hypothesized AGI threshold by 2028 (arithmetic below).
- AI Futures Project 2023: scenarios with AGI 2028-2048.
- PredictionBook users: 30% AGI by 2040.
- FLI 2024 survey: median expected timeline for extinction-level risk is 2070.
Timeline Predictions Interpretation
Median AGI forecasts cluster between the late 2020s and 2060, with forecaster communities (Metaculus, superforecasters) generally later than lab-adjacent analysts; most cited estimates nonetheless place AGI within this century. The arithmetic below spells out what the cited six-month compute doubling implies by 2028.
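As a worked check on the compute claim above: a six-month doubling time means compute grows as C(t) = C0 * 2^(t/0.5), so four years of growth is eight doublings, i.e. a 256x increase. Whether that crosses an "AGI threshold" is an assumption of the cited analysis, not something this arithmetic establishes.

```python
# Growth implied by a six-month compute doubling time: C(t) = C0 * 2**(t / T_d).
doubling_time_years = 0.5
years = 2028 - 2024                      # horizon from the Epoch claim above

doublings = years / doubling_time_years  # 8 doublings
growth = 2 ** doublings                  # 256x more training compute

print(f"{doublings:.0f} doublings -> {growth:.0f}x training compute by 2028")
```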
Sources & References
- AI Impacts: aiimpacts.org
- Stanford AI Index: aiindex.stanford.edu
- arXiv: arxiv.org
- LessWrong: lesswrong.com
- Center for AI Safety: safe.ai
- Alignment Forum: alignmentforum.org
- NeurIPS: neurips.cc
- Future of Life Institute: futureoflife.org
- Effective Altruism Forum: forum.effectivealtruism.org
- Rohin Shah: rohinshah.com
- GOV.UK: gov.uk
- ACM: acm.org
- Google DeepMind: deepmind.google
- Anthropic: anthropic.com
- OpenAI: openai.com
- Effective Altruism: effectivealtruism.org
- ICML: icml.cc
- SERI: seri.mystrikingly.com
- CSET (Georgetown): cset.georgetown.edu
- Metaculus: metaculus.com
- 80,000 Hours: 80000hours.org
- Epoch AI: epochai.org
- Kurzweil AI: kurzweilai.net
- Ben Goertzel: goertzel.org
- Forethought Foundation: forethought.org
- Good Judgment: goodjudgment.com
- ARC Evals: arc.evals.com
- Open Philanthropy: openphilanthropy.org
- Manifold Markets: manifold.markets
- EleutherAI: eleuther.ai
- MIRI: intelligence.org
- AI Futures Project: aifutures.org
- PredictionBook: predictionbook.com
- The White House: whitehouse.gov
- Redwood Research: redwoodresearch.org
- Long-Term Future Fund: longtermfuturefund.org
- Effective Accelerationism: effectiveaccelerationism.net
- METR: metr.org
- Apollo Research: apolloresearch.ai
- Conjecture: conjecture.dev
- FAR AI: far.ai
- FTX Future Fund: futurefund.org
- European Commission (Digital Strategy): digital-strategy.ec.europa.eu
- Stanford CRFM: crfm.stanford.edu
- LMSYS Chatbot Arena: arena.lmsys.org
- RAND: rand.org
- Palisade Research: palisaderesearch.com
- Gladstone AI: gladstone.ai
- BlueDot Impact: bluedotimpact.com
- Center for AI Policy: centeraipolicy.org
- Forecasting Research Institute: forecastingresearch.org






