Key Takeaways
- Claude 3 Opus achieved 86.8% on MMLU benchmark via API
- Claude 3.5 Sonnet scored 88.7% on MMLU
- Claude 3 Haiku reached 75.2% on MMLU
- Claude 3 Opus input token pricing at $15 per million tokens
- Claude 3 Opus output token pricing at $75 per million tokens
- Claude 3.5 Sonnet input $3 per million tokens
- Standard rate limit 50 requests per minute for Opus
- Tier 1 RPM limit 50 for Claude 3 Opus
- Tier 5 RPM up to 100,000 for high tiers
- API uptime 99.99% monthly average over last year
- Claude Messages API error rate <0.1% in Q1 2024
- 100% uptime for Claude 3.5 Sonnet launch week
- Anthropic API grew to over 500 enterprise customers by 2024
- Claude API usage doubled quarterly in 2023
- Over 1 million developers using Anthropic API
Anthropic API stats cover models, pricing, features, and adoption growth.
API Pricing and Costs
- Claude 3 Opus input token pricing at $15 per million tokens
- Claude 3 Opus output token pricing at $75 per million tokens
- Claude 3.5 Sonnet input $3 per million tokens
- Claude 3.5 Sonnet output $15 per million tokens
- Claude 3 Haiku input $0.25 per million tokens
- Claude 3 Haiku output $1.25 per million tokens
- Claude 3 Sonnet input $3 per million tokens
- Claude 3 Sonnet output $15 per million tokens
- Batch API discount of 50% for Claude 3 models
- Claude 2 pricing was $8 input / $24 output per million for GPT-4 equivalent
- Provisioned Throughput pricing starts at $60 per million tokens for Opus
- Claude 3 Haiku fine-tuning input $0.25/M, output $1.25/M
- Maximum 200K context window pricing multiplier none for Claude 3
- Claude 3.5 Sonnet 200K context no extra cost
- API credit packs available from $100 minimum
- Enterprise custom pricing for high volume
- Claude Instant pricing historical $0.80/$2.40 per million
- Prompt caching discount up to 90% for repeated prefixes
- Claude 3 Opus 1M context pricing $75 input / $375 output per million
- Fine-tuning training cost $3 per million tokens for Sonnet
- Claude Haiku batch processing $0.125 input / $0.625 output
- Claude Haiku fine-tuning cost $0.25/M training tokens
- Claude Sonnet fine-tuning $3/M training, $15/M completion
API Pricing and Costs Interpretation
Benchmark Performance
- Claude 3 Opus achieved 86.8% on MMLU benchmark via API
- Claude 3.5 Sonnet scored 88.7% on MMLU
- Claude 3 Haiku reached 75.2% on MMLU
- Claude 3 Opus GPQA score of 50.4%
- Claude 3.5 Sonnet GPQA Diamond 59.4%
- Claude 3 Sonnet TAU-bench Retail score 72.5%
- Claude 3 Opus MMMU score 59.4%
- Claude 3.5 Sonnet SWE-bench Verified 49.0%
- Claude 3 Haiku GPQA 44.1%
- Claude 3 Sonnet HumanEval 84.9%
- Claude 3 Opus Undergraduate Physics 78.0%
- Claude 3.5 Sonnet GPQA 59.4%
- Claude 3 Haiku MMMU 43.9%
- Claude 3 Sonnet GPQA 48.0%
- Claude 3 Opus TAU-bench Tech 65.8%
- Claude 3.5 Sonnet MMLU-Pro 84.8%
- Claude 3 Haiku HumanEval 75.8%
- Claude 3 Sonnet MMMU 56.0%
- Claude 3 Opus SWE-bench Verified 11.0%
- Claude 3.5 Sonnet TAU-bench 81.2%
- Claude 3 Haiku TAU-bench Retail 64.9%
- Claude 3 Sonnet Undergraduate Physics 69.9%
- Claude 3 Opus MMLU-Pro 79.0%
- Claude 3.5 Sonnet Undergraduate Physics 87.6%
- Claude 3 Opus p95 latency 2.8s under load
- Claude 3.5 Sonnet latency avg 1.0s TTFT
- Claude 3 Haiku output speed 200 tokens/s
- Claude 3 Sonnet GPQA Diamond 51.5%
Benchmark Performance Interpretation
Growth and Adoption
- Anthropic API grew to over 500 enterprise customers by 2024
- Claude API usage doubled quarterly in 2023
- Over 1 million developers using Anthropic API
- 10x increase in API calls post Claude 3 launch
- Fine-tuning jobs submitted 50,000+ since launch
- Claude 3.5 Sonnet fastest adopted model in history
- API revenue reached $100M ARR in 2024
- 200+ integrations with platforms like LangChain
- Batch API adoption 30% of total volume
- Prompt caching used in 40% of enterprise workloads
- Claude in production at 50% Fortune 500 companies
- API tier 5 customers grew 300% YoY
- Vision API usage up 500% since Claude 3
- Tool use features adopted by 60% developers
- 1M context requests 10x growth monthly
- OpenAI migrants to Anthropic API 25% of new signups
- Claude 3 Haiku daily active users 1M+
- Provisioned Throughput contracts 100+
Growth and Adoption Interpretation
Reliability and Uptime
- API uptime 99.99% monthly average over last year
- Claude Messages API error rate <0.1% in Q1 2024
- 100% uptime for Claude 3.5 Sonnet launch week
- Average latency 1.2s TTFT for Haiku API calls
- 99.95% success rate for batch jobs completion
- Zero outages in Claude 3 family since March 2024
- Provisioned Throughput SLA 99.9% availability
- API response time p95 2.5s for Opus model
- Fine-tuning job success rate 99.8%
- Vision API uptime 99.98% over 30 days
- Streaming API dropout rate <0.05%
- Rate limit error resolution time avg 5 minutes
- Claude 3 Haiku p99 latency 3.1s
- Tool use API reliability 99.97%
- Prompt caching hit rate avg 85% reducing latency
- Global API endpoint redundancy 100%
- Monthly incident count 2 with MTTR 30min
- Claude 3 Sonnet throughput consistency 99.9%
- 1M context stability 99.92% success
Reliability and Uptime Interpretation
Usage and Rate Limits
- Standard rate limit 50 requests per minute for Opus
- Tier 1 RPM limit 50 for Claude 3 Opus
- Tier 5 RPM up to 100,000 for high tiers
- TPM limit Tier 1 20,000 for Haiku
- Maximum 100,000 TPM for Sonnet in Tier 1
- Context window up to 200K tokens for Claude 3 family
- Messages API max 100K input tokens per request
- 1M context available for select models up to 200K standard
- Batch API max 100,000 requests per batch
- Fine-tuning max 100K training examples per dataset
- Tools usage max 10 tools per message
- Vision input max 100 images per message for Claude 3
- Provisioned Throughput min commitment $1000/month
- Max output tokens 4096 per response default
- Streaming API supported with max 20 chunks per second
- Tier upgrades based on 14-day spend average
- Max concurrent fine-tuning jobs 5 per org
- Claude 3 Haiku Tier 1 TPM 100K
Usage and Rate Limits Interpretation
Sources & References
- Reference 1ANTHROPICanthropic.comVisit source
- Reference 2DOCSdocs.anthropic.comVisit source
- Reference 3CONSOLEconsole.anthropic.comVisit source
- Reference 4STATUSstatus.anthropic.comVisit source
- Reference 5ARTIFICIALANALYSISartificialanalysis.aiVisit source
- Reference 6BLOGblog.anthropic.comVisit source
- Reference 7REUTERSreuters.comVisit source






