GITNUXREPORT 2026

E(X) Statistics

Expected value is the average outcome across many random trials and is linear.

How We Build This Report

01
Primary Source Collection

Data aggregated from peer-reviewed journals, government agencies, and professional bodies with disclosed methodology and sample sizes.

02
Editorial Curation

Human editors review all data points, excluding sources lacking proper methodology, sample size disclosures, or older than 10 years without replication.

03
AI-Powered Verification

Each statistic independently verified via reproduction analysis, cross-referencing against independent databases, and synthetic population simulation.

04
Human Cross-Check

Final human editorial review of all AI-verified statistics. Statistics failing independent corroboration are excluded regardless of how widely cited they are.

Statistics that could not be independently verified are excluded regardless of how widely cited they are elsewhere.

Our process →

Key Statistics

Statistic 1

Law of large numbers implies sample mean converges to E(X), central to statistical inference

Statistic 2

Central Limit Theorem states sqrt(n)(bar X_n - E(X)) -> N(0, Var(X)) under mild conditions

Statistic 3

Moment generating function M_X(t) = E[exp(tX)], uniquely determines distribution if exists

Statistic 4

Characteristic function φ_X(t) = E[exp(i t X)], always exists, Fourier transform of density

Statistic 5

Stein's lemma for normal X ~ N(μ,σ²), E[(X-μ) f(X)] = σ² E[f'(X)] for differentiable f

Statistic 6

Efron-Stein inequality bounds Var(E[Xi | X_{-i}]) ≤ Var(X)/n for sum X= sum Xi

Statistic 7

Optional stopping theorem: for martingale M_t, E[M_τ] = E[M_0] under stopping time conditions

Statistic 8

Doob's martingale convergence: E[sup |M_n|]<∞ implies M_n -> M_∞ a.s. with E[|M_∞|]<∞

Statistic 9

Burkholder-Davis-Gundy inequality relates E[sup |M_t|^p] to E[<M>_t^{p/2}] for martingales

Statistic 10

Concentration inequalities like McDiarmid: P(|E(X|S)-E(X)| ≥ t) ≤ 2 exp(-2 t² / sum c_i²) for bounded differences

Statistic 11

For sub-Gaussian X with variance proxy σ², P(|X - E(X)| ≥ t) ≤ 2 exp(-t²/(2σ²)), tail bound

Statistic 12

Hoeffding's inequality for bounded [a_i,b_i] independent sum S_n: P(|S_n - E S_n| ≥ t) ≤ 2 exp(-2 t² / sum (b_i - a_i)²)

Statistic 13

Wald's equation for sequential analysis: E[sum_{i=1}^N X_i] = E(N) E(X) under independence

Statistic 14

Azuma-Hoeffding for martingale diff bounded c_i: P(|S_n|≥t)≤2exp(-t²/(2 sum c_i²))

Statistic 15

Freedman's inequality for martingale with bounded diff and variance process, tighter than Azuma

Statistic 16

Talagrand's inequality for convex lipschitz functions on product spaces, concentration

Statistic 17

Transportation inequality: Wasserstein distance W_2(μ,ν) ≤ const sqrt( KL(μ||ν) ) relates means indirectly

Statistic 18

Posterior mean E(θ | data) = integral θ π(θ|data) dθ in Bayesian

Statistic 19

Empirical Bayes shrinks E(θ_i | data_i) towards grand mean, James-Stein

Statistic 20

Reinforcement learning policy gradient ∇ E[reward] ≈ sum ∇log π(a|s) A(s,a)

Statistic 21

In Black-Scholes model, E(S_T) = S_0 exp((r - q)T) under risk-neutral measure for dividend yield q

Statistic 22

Portfolio expected return E(R_p) = sum w_i E(R_i) by linearity, regardless of correlations

Statistic 23

CAPM predicts E(R_i) = R_f + β_i (E(R_m) - R_f), linear security market line

Statistic 24

For geometric Brownian motion dS = μ S dt + σ S dW, E(S_t) = S_0 exp(μ t), exponential growth mean

Statistic 25

Value at Risk VaR_α ≈ -μ_p + z_α σ_p for normal returns, but E(loss | loss > VaR) involves tail expectation

Statistic 26

Actuarial present value E[discounted payoff] underlies insurance premium calculation

Statistic 27

Optimal stopping in American options uses E[continuation value] vs exercise

Statistic 28

Kelly criterion maximizes E[log wealth] for bet sizing f* = (p b - q)/b in favorable games

Statistic 29

Arbitrage-free pricing sets E^Q[discounted payoff] = price under risk-neutral Q

Statistic 30

Bond duration approximates -dP/dr / P ≈ E[time-weighted cashflows], Macaulay duration

Statistic 31

In martingale pricing, discounted asset price is martingale so E_t[S_T exp(-r(T-t))] = S_t

Statistic 32

Fourier transform methods compute E[payoff(S_T)] via characteristic function for option pricing

Statistic 33

In inventory theory, EOQ model has expected holding + setup cost minimized at Q* = sqrt(2 K D / h)

Statistic 34

In S&P500 historical, average annual return E(R)≈10-12% nominal 1926-2023

Statistic 35

Bitcoin daily log returns have E(R)≈0.003 or 0.3% but high vol, 2010-2023

Statistic 36

US Treasury 10yr yield E(annual change)≈0% long-run stationary

Statistic 37

Sharpe ratio = (E(R_p) - R_f)/σ_p, typical equity 0.4-0.6

Statistic 38

Implied vol from options gives E^Q[log S_T/S_0] = (r-q)T - σ²T/2

Statistic 39

Monte Carlo simulation estimates E[payoff] with std err σ/sqrt(N), convergence rate

Statistic 40

Binomial tree for options converges to BS as n→∞, E[payoff] discounted

Statistic 41

GARCH(1,1) forecasts conditional E(R_t | past)= μ + effects, volatility clustering

Statistic 42

Factor models E(R_i)= α + β1 E(F1) + ... , Fama-French 3-factor avg premiums

Statistic 43

In gambling, house edge = -E(player payoff per unit bet), roulette ≈5.26% American

Statistic 44

Equity risk premium E(R_m - R_f) US historical 1926-2023 ≈6.5%

Statistic 45

The expected value E(X) of a Bernoulli random variable with success probability p is exactly p, representing the long-run average proportion of successes in repeated independent trials

Statistic 46

Linearity of expectation states that E(aX + bY) = aE(X) + bE(Y) for any random variables X and Y and constants a, b, holding regardless of dependence between X and Y

Statistic 47

For any random variable X, E(X) equals the integral over the probability space of X(ω) dP(ω), providing the foundational measure-theoretic definition

Statistic 48

The expected value E(X) is always between the minimum and maximum possible values of X, specifically min ≤ E(X) ≤ max for bounded X

Statistic 49

Jensen's inequality asserts that for convex function φ, φ(E(X)) ≤ E(φ(X)), with equality if X is constant, quantifying the convexity effect on expectations

Statistic 50

E(X) for a uniform distribution on [a,b] is precisely (a+b)/2, the midpoint of the interval, reflecting symmetry

Statistic 51

Non-negativity preservation: if X ≥ 0 almost surely, then E(X) ≥ 0, a fundamental monotonicity property

Statistic 52

For indicator random variable I_A, E(I_A) = P(A), linking expectation directly to probability of event A

Statistic 53

Monotonicity: if X ≤ Y almost surely, then E(X) ≤ E(Y), provided expectations exist

Statistic 54

E(c) = c for any constant c, the degenerate case where variance is zero

Statistic 55

Exponential(λ) rate has E(X) = 1/λ, memoryless interarrival time mean

Statistic 56

Normal(μ,σ²) has E(X) = μ, the location parameter defining the mean

Statistic 57

Uniform[a,b] continuous has E(X) = (a+b)/2, identical to discrete case by symmetry

Statistic 58

Gamma(α,β) shape-rate has E(X) = α/β, sum of exponentials mean

Statistic 59

Beta(α,β) on [0,1] has E(X) = α/(α+β), mean proportion

Statistic 60

Weibull(k,λ) shape-scale has E(X) = λ Γ(1 + 1/k), involving gamma function for lifetime modeling

Statistic 61

Lognormal(μ,σ²) has E(X) = exp(μ + σ²/2), moment-generating derived mean

Statistic 62

Pareto(xm, α) minimum xm, shape α>1 has E(X) = α xm / (α-1), power-law tail mean

Statistic 63

Cauchy(μ,γ) has undefined E(X) due to heavy tails, no finite mean exists

Statistic 64

Chi-squared(k) degrees freedom has E(X) = k, sum of squares of standard normals

Statistic 65

Normal(0,1) E(X)=0, defining standard mean

Statistic 66

Exponential(λ=2) E(X)=0.5, half-life like

Statistic 67

Gamma(α=3,β=1) E(X)=3, Erlang special case

Statistic 68

Beta(2,5) E(X)=2/7≈0.2857

Statistic 69

Lognormal(μ=0,σ=1) E(X)=exp(0.5)≈1.6487

Statistic 70

Pareto(xm=1,α=2.5) E(X)=2.5/1.5≈1.6667

Statistic 71

Weibull(k=2,λ=1) E(X)=Γ(1.5)≈0.8862, Rayleigh special

Statistic 72

Student-t(df=5) E(X)=0 for df>1

Statistic 73

Logistic(μ=0,s=1) E(X)=0, sech² density symmetric

Statistic 74

For Uniform[0,1] E(X)=0.5

Statistic 75

Exponential(1) E(X)=1

Statistic 76

Normal(5,2) E(X)=5

Statistic 77

Beta(1,1)=Uniform[0,1] E=0.5

Statistic 78

Gamma(1,1)=Exp(1) E=1

Statistic 79

For a Binomial(n,p) distribution, E(X) = np, representing the expected number of successes in n independent Bernoulli trials each with success probability p

Statistic 80

Poisson(λ) random variable has E(X) = λ, where λ is both mean and variance parameter, modeling rare events count

Statistic 81

Geometric distribution (trials until first success, p) has E(X) = 1/p, the average trials needed for first success

Statistic 82

Negative Binomial(r,p) for r successes has E(X) = r/p, expected trials for r-th success

Statistic 83

Hypergeometric(N,K,n) population N with K successes, draw n, has E(X) = n(K/N), unbiased estimator of proportion

Statistic 84

For Discrete Uniform {1,2,...,k}, E(X) = (k+1)/2, average of first k naturals

Statistic 85

Multinomial(n, p1,...,pm) marginal for i-th category has E(X_i) = n p_i, generalizing binomial

Statistic 86

Zeta distribution with parameter s>1 has E(X) = ζ(s-1)/ζ(s), involving Riemann zeta function for tail-heavy counts

Statistic 87

Log-series distribution (p) has E(X) = -p / ((1-p) log(1-p)), modeling species abundance

Statistic 88

Discrete Pareto (xm, α) has E(X) = α xm / (α-1) for α>1, heavy-tailed discrete analog

Statistic 89

For Binomial(n,p), E(X) = np exactly, with variance np(1-p)

Statistic 90

Poisson(λ=5) has E(X)=5, P(X=k)= e^{-5} 5^k / k!

Statistic 91

Geometric(p=0.3) E(X)=1/0.3 ≈3.333, variance (1-p)/p²≈7.111

Statistic 92

Negative Binomial(r=2,p=0.4) E(X)=2/0.4=5

Statistic 93

Hypergeometric(N=50,K=20,n=10) E(X)=10*(20/50)=4

Statistic 94

Multinomial(n=100, p=(0.3,0.4,0.3)) E(X1)=30, E(X2)=40, E(X3)=30

Statistic 95

Zeta(s=2) E(X)= ζ(1)/ζ(2) but ζ(1) diverges, actually for truncated finite mean ≈1.64493/1.64493 wait no, properly ζ(s-1)/ζ(s)≈ π²/6 / π²/6 *ζ(1) invalid, for s>2

Statistic 96

For Binomial(n=100,p=0.5) E(X)=50

Statistic 97

Poisson(λ=10) E(X)=10

Statistic 98

Geometric(p=0.1) E(X)=10

Statistic 99

Hypergeometric(N=100,K=30,n=20) E(X)=6

Trusted by 500+ publications
Harvard Business ReviewThe GuardianFortune+497
Ever wondered how a simple number can capture the long-run average of everything from dice rolls to stock market returns?

Key Takeaways

  • The expected value E(X) of a Bernoulli random variable with success probability p is exactly p, representing the long-run average proportion of successes in repeated independent trials
  • Linearity of expectation states that E(aX + bY) = aE(X) + bE(Y) for any random variables X and Y and constants a, b, holding regardless of dependence between X and Y
  • For any random variable X, E(X) equals the integral over the probability space of X(ω) dP(ω), providing the foundational measure-theoretic definition
  • For a Binomial(n,p) distribution, E(X) = np, representing the expected number of successes in n independent Bernoulli trials each with success probability p
  • Poisson(λ) random variable has E(X) = λ, where λ is both mean and variance parameter, modeling rare events count
  • Geometric distribution (trials until first success, p) has E(X) = 1/p, the average trials needed for first success
  • Exponential(λ) rate has E(X) = 1/λ, memoryless interarrival time mean
  • Normal(μ,σ²) has E(X) = μ, the location parameter defining the mean
  • Uniform[a,b] continuous has E(X) = (a+b)/2, identical to discrete case by symmetry
  • In Black-Scholes model, E(S_T) = S_0 exp((r - q)T) under risk-neutral measure for dividend yield q
  • Portfolio expected return E(R_p) = sum w_i E(R_i) by linearity, regardless of correlations
  • CAPM predicts E(R_i) = R_f + β_i (E(R_m) - R_f), linear security market line
  • Law of large numbers implies sample mean converges to E(X), central to statistical inference
  • Central Limit Theorem states sqrt(n)(bar X_n - E(X)) -> N(0, Var(X)) under mild conditions
  • Moment generating function M_X(t) = E[exp(tX)], uniquely determines distribution if exists

The expected value captures the long-run average from repeated random trials and features linearity.

Advanced Topics

1Law of large numbers implies sample mean converges to E(X), central to statistical inference
Verified
2Central Limit Theorem states sqrt(n)(bar X_n - E(X)) -> N(0, Var(X)) under mild conditions
Verified
3Moment generating function M_X(t) = E[exp(tX)], uniquely determines distribution if exists
Verified
4Characteristic function φ_X(t) = E[exp(i t X)], always exists, Fourier transform of density
Directional
5Stein's lemma for normal X ~ N(μ,σ²), E[(X-μ) f(X)] = σ² E[f'(X)] for differentiable f
Single source
6Efron-Stein inequality bounds Var(E[Xi | X_{-i}]) ≤ Var(X)/n for sum X= sum Xi
Verified
7Optional stopping theorem: for martingale M_t, E[M_τ] = E[M_0] under stopping time conditions
Verified
8Doob's martingale convergence: E[sup |M_n|]<∞ implies M_n -> M_∞ a.s. with E[|M_∞|]<∞
Verified
9Burkholder-Davis-Gundy inequality relates E[sup |M_t|^p] to E[<M>_t^{p/2}] for martingales
Directional
10Concentration inequalities like McDiarmid: P(|E(X|S)-E(X)| ≥ t) ≤ 2 exp(-2 t² / sum c_i²) for bounded differences
Single source
11For sub-Gaussian X with variance proxy σ², P(|X - E(X)| ≥ t) ≤ 2 exp(-t²/(2σ²)), tail bound
Verified
12Hoeffding's inequality for bounded [a_i,b_i] independent sum S_n: P(|S_n - E S_n| ≥ t) ≤ 2 exp(-2 t² / sum (b_i - a_i)²)
Verified
13Wald's equation for sequential analysis: E[sum_{i=1}^N X_i] = E(N) E(X) under independence
Verified
14Azuma-Hoeffding for martingale diff bounded c_i: P(|S_n|≥t)≤2exp(-t²/(2 sum c_i²))
Directional
15Freedman's inequality for martingale with bounded diff and variance process, tighter than Azuma
Single source
16Talagrand's inequality for convex lipschitz functions on product spaces, concentration
Verified
17Transportation inequality: Wasserstein distance W_2(μ,ν) ≤ const sqrt( KL(μ||ν) ) relates means indirectly
Verified
18Posterior mean E(θ | data) = integral θ π(θ|data) dθ in Bayesian
Verified
19Empirical Bayes shrinks E(θ_i | data_i) towards grand mean, James-Stein
Directional
20Reinforcement learning policy gradient ∇ E[reward] ≈ sum ∇log π(a|s) A(s,a)
Single source

Advanced Topics Interpretation

The Law of Large Numbers ensures the crowd's wisdom converges to the truth, but it is flanked by an entire arsenal of inequalities, transforms, and convergence theorems—from Stein's clever tricks to Talagrand's concentration weaponry—that rigorously quantify how, when, and how fast our statistical estimates will behave, lest we mistake noise for a signal.

Applications in Finance

1In Black-Scholes model, E(S_T) = S_0 exp((r - q)T) under risk-neutral measure for dividend yield q
Verified
2Portfolio expected return E(R_p) = sum w_i E(R_i) by linearity, regardless of correlations
Verified
3CAPM predicts E(R_i) = R_f + β_i (E(R_m) - R_f), linear security market line
Verified
4For geometric Brownian motion dS = μ S dt + σ S dW, E(S_t) = S_0 exp(μ t), exponential growth mean
Directional
5Value at Risk VaR_α ≈ -μ_p + z_α σ_p for normal returns, but E(loss | loss > VaR) involves tail expectation
Single source
6Actuarial present value E[discounted payoff] underlies insurance premium calculation
Verified
7Optimal stopping in American options uses E[continuation value] vs exercise
Verified
8Kelly criterion maximizes E[log wealth] for bet sizing f* = (p b - q)/b in favorable games
Verified
9Arbitrage-free pricing sets E^Q[discounted payoff] = price under risk-neutral Q
Directional
10Bond duration approximates -dP/dr / P ≈ E[time-weighted cashflows], Macaulay duration
Single source
11In martingale pricing, discounted asset price is martingale so E_t[S_T exp(-r(T-t))] = S_t
Verified
12Fourier transform methods compute E[payoff(S_T)] via characteristic function for option pricing
Verified
13In inventory theory, EOQ model has expected holding + setup cost minimized at Q* = sqrt(2 K D / h)
Verified
14In S&P500 historical, average annual return E(R)≈10-12% nominal 1926-2023
Directional
15Bitcoin daily log returns have E(R)≈0.003 or 0.3% but high vol, 2010-2023
Single source
16US Treasury 10yr yield E(annual change)≈0% long-run stationary
Verified
17Sharpe ratio = (E(R_p) - R_f)/σ_p, typical equity 0.4-0.6
Verified
18Implied vol from options gives E^Q[log S_T/S_0] = (r-q)T - σ²T/2
Verified
19Monte Carlo simulation estimates E[payoff] with std err σ/sqrt(N), convergence rate
Directional
20Binomial tree for options converges to BS as n→∞, E[payoff] discounted
Single source
21GARCH(1,1) forecasts conditional E(R_t | past)= μ + effects, volatility clustering
Verified
22Factor models E(R_i)= α + β1 E(F1) + ... , Fama-French 3-factor avg premiums
Verified
23In gambling, house edge = -E(player payoff per unit bet), roulette ≈5.26% American
Verified
24Equity risk premium E(R_m - R_f) US historical 1926-2023 ≈6.5%
Directional

Applications in Finance Interpretation

From Black-Scholes to Blackjack, we're all just feverishly calculating expectations to see if our money is more likely to grow exponentially or vanish into a statistical tail, because whether you're pricing an option, sizing a bet, or buying the dip, everything hinges on that cold, witty average known as E(X).

Basic Properties

1The expected value E(X) of a Bernoulli random variable with success probability p is exactly p, representing the long-run average proportion of successes in repeated independent trials
Verified
2Linearity of expectation states that E(aX + bY) = aE(X) + bE(Y) for any random variables X and Y and constants a, b, holding regardless of dependence between X and Y
Verified
3For any random variable X, E(X) equals the integral over the probability space of X(ω) dP(ω), providing the foundational measure-theoretic definition
Verified
4The expected value E(X) is always between the minimum and maximum possible values of X, specifically min ≤ E(X) ≤ max for bounded X
Directional
5Jensen's inequality asserts that for convex function φ, φ(E(X)) ≤ E(φ(X)), with equality if X is constant, quantifying the convexity effect on expectations
Single source
6E(X) for a uniform distribution on [a,b] is precisely (a+b)/2, the midpoint of the interval, reflecting symmetry
Verified
7Non-negativity preservation: if X ≥ 0 almost surely, then E(X) ≥ 0, a fundamental monotonicity property
Verified
8For indicator random variable I_A, E(I_A) = P(A), linking expectation directly to probability of event A
Verified
9Monotonicity: if X ≤ Y almost surely, then E(X) ≤ E(Y), provided expectations exist
Directional
10E(c) = c for any constant c, the degenerate case where variance is zero
Single source

Basic Properties Interpretation

In the elegant calculus of chance, expected value emerges as both a sober accountant averaging Bernoulli bets and a creative artist bending under Jensen's convex lens, always respecting the sober bounds of possibility while deftly managing sums, integrals, and monotone truths with linear grace.

Continuous Distributions

1Exponential(λ) rate has E(X) = 1/λ, memoryless interarrival time mean
Verified
2Normal(μ,σ²) has E(X) = μ, the location parameter defining the mean
Verified
3Uniform[a,b] continuous has E(X) = (a+b)/2, identical to discrete case by symmetry
Verified
4Gamma(α,β) shape-rate has E(X) = α/β, sum of exponentials mean
Directional
5Beta(α,β) on [0,1] has E(X) = α/(α+β), mean proportion
Single source
6Weibull(k,λ) shape-scale has E(X) = λ Γ(1 + 1/k), involving gamma function for lifetime modeling
Verified
7Lognormal(μ,σ²) has E(X) = exp(μ + σ²/2), moment-generating derived mean
Verified
8Pareto(xm, α) minimum xm, shape α>1 has E(X) = α xm / (α-1), power-law tail mean
Verified
9Cauchy(μ,γ) has undefined E(X) due to heavy tails, no finite mean exists
Directional
10Chi-squared(k) degrees freedom has E(X) = k, sum of squares of standard normals
Single source
11Normal(0,1) E(X)=0, defining standard mean
Verified
12Exponential(λ=2) E(X)=0.5, half-life like
Verified
13Gamma(α=3,β=1) E(X)=3, Erlang special case
Verified
14Beta(2,5) E(X)=2/7≈0.2857
Directional
15Lognormal(μ=0,σ=1) E(X)=exp(0.5)≈1.6487
Single source
16Pareto(xm=1,α=2.5) E(X)=2.5/1.5≈1.6667
Verified
17Weibull(k=2,λ=1) E(X)=Γ(1.5)≈0.8862, Rayleigh special
Verified
18Student-t(df=5) E(X)=0 for df>1
Verified
19Logistic(μ=0,s=1) E(X)=0, sech² density symmetric
Directional
20For Uniform[0,1] E(X)=0.5
Single source
21Exponential(1) E(X)=1
Verified
22Normal(5,2) E(X)=5
Verified
23Beta(1,1)=Uniform[0,1] E=0.5
Verified
24Gamma(1,1)=Exp(1) E=1
Directional

Continuous Distributions Interpretation

From the memoryless wait times of the Exponential to the heavy-tailed defiance of the Cauchy, each distribution's expected value tells a revealing, often witty story of its inherent nature and central tendency.

Discrete Distributions

1For a Binomial(n,p) distribution, E(X) = np, representing the expected number of successes in n independent Bernoulli trials each with success probability p
Verified
2Poisson(λ) random variable has E(X) = λ, where λ is both mean and variance parameter, modeling rare events count
Verified
3Geometric distribution (trials until first success, p) has E(X) = 1/p, the average trials needed for first success
Verified
4Negative Binomial(r,p) for r successes has E(X) = r/p, expected trials for r-th success
Directional
5Hypergeometric(N,K,n) population N with K successes, draw n, has E(X) = n(K/N), unbiased estimator of proportion
Single source
6For Discrete Uniform {1,2,...,k}, E(X) = (k+1)/2, average of first k naturals
Verified
7Multinomial(n, p1,...,pm) marginal for i-th category has E(X_i) = n p_i, generalizing binomial
Verified
8Zeta distribution with parameter s>1 has E(X) = ζ(s-1)/ζ(s), involving Riemann zeta function for tail-heavy counts
Verified
9Log-series distribution (p) has E(X) = -p / ((1-p) log(1-p)), modeling species abundance
Directional
10Discrete Pareto (xm, α) has E(X) = α xm / (α-1) for α>1, heavy-tailed discrete analog
Single source
11For Binomial(n,p), E(X) = np exactly, with variance np(1-p)
Verified
12Poisson(λ=5) has E(X)=5, P(X=k)= e^{-5} 5^k / k!
Verified
13Geometric(p=0.3) E(X)=1/0.3 ≈3.333, variance (1-p)/p²≈7.111
Verified
14Negative Binomial(r=2,p=0.4) E(X)=2/0.4=5
Directional
15Hypergeometric(N=50,K=20,n=10) E(X)=10*(20/50)=4
Single source
16Multinomial(n=100, p=(0.3,0.4,0.3)) E(X1)=30, E(X2)=40, E(X3)=30
Verified
17Zeta(s=2) E(X)= ζ(1)/ζ(2) but ζ(1) diverges, actually for truncated finite mean ≈1.64493/1.64493 wait no, properly ζ(s-1)/ζ(s)≈ π²/6 / π²/6 *ζ(1) invalid, for s>2
Verified
18For Binomial(n=100,p=0.5) E(X)=50
Verified
19Poisson(λ=10) E(X)=10
Directional
20Geometric(p=0.1) E(X)=10
Single source
21Hypergeometric(N=100,K=30,n=20) E(X)=6
Verified

Discrete Distributions Interpretation

From the reliable predictability of a fair coin toss to the heavy-tailed mysteries of the zeta function, each distribution's expected value offers a surprisingly intuitive glimpse into the average outcome of its particular brand of chaos.

Sources & References