Building the Machine
How do you pick 286 trials out of 17,955 possible choices?
The Problem
The design calls for 300 trials. But which 300?
Investments range from 1 to 10 tokens, probabilities from 5% to 95%, and outcomes from 0 to 39 tokens. The full factorial space contains 17,955 possible trials.
The challenge:
Not all trials are equally informative. Some tell us a lot about Prospect Theory parameters. Others are redundant or uninformative. We need a systematic method to select the best ones.
Step 1: Generate Trial Space
First, create all possible trials. For each investment level (1–10 tokens), generate all combinations of outcomes and probabilities.
Full Factorial Design
Investments: 1–10 tokens
Outcomes: Net losses & gains
Probabilities: 5% to 95%
Total Generated
17,955
possible trials
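The generation step can be sketched as a Cartesian product over the three dimensions. The exact grid spacing is not given in the text, so the steps below (1-token investments, 1-token outcomes, 5% probability increments) are illustrative assumptions, and the resulting count will not match the reported 17,955.

```python
from itertools import product

# Assumed grid steps -- the source gives only the ranges, not the spacing,
# so this count differs from the study's 17,955.
investments = range(1, 11)                           # 1-10 tokens
outcomes = range(0, 40)                              # 0-39 tokens
probabilities = [p / 100 for p in range(5, 100, 5)]  # 5% ... 95%

trials = [
    {"invest": i, "outcome": o, "prob": p}
    for i, o, p in product(investments, outcomes, probabilities)
]
print(len(trials))  # 10 * 40 * 19 = 7600 under these assumed steps
```

In the real design, the admissible outcomes presumably depend on the investment level, which is why the true trial count is not a simple product of the three grid sizes.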
Step 2: Create Parameter Space
Define the complete 4D parameter space for Prospect Theory. For each parameter, create a fine-grained grid of possible values.
Parameter Grids (50 values each)
Combinations
6.25 Million
Parallel Jobs
250
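A 50-point grid per parameter gives 50⁴ = 6.25 million combinations. A minimal sketch, with placeholder parameter ranges (the study's actual bounds are not stated in the text):

```python
import numpy as np

# Illustrative ranges only -- the study's true grid bounds are not given.
alpha = np.linspace(0.2, 1.2, 50)   # value-function curvature
lam   = np.linspace(1.0, 4.0, 50)   # loss aversion
gamma = np.linspace(0.3, 1.0, 50)   # gain-side probability weighting
delta = np.linspace(0.3, 1.0, 50)   # loss-side probability weighting

n_combos = len(alpha) * len(lam) * len(gamma) * len(delta)
print(f"{n_combos:,}")  # 6,250,000
```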
Step 3: Calculate Certainty Equivalents
For each of the 17,955 trials and each of the 6.25 million parameter combinations, calculate the Certainty Equivalent (CE).
"How much is this gamble worth to a person with these specific risk preferences?"
This required massive parallel computing. We simulated billions of decisions to build a complete map of predicted behavior.
Input
Trial T
Input
Params θ
Output
Value V(T,θ)
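The CE of a gamble is the Prospect Theory value of the gamble, mapped back into token units through the inverse value function. A sketch for a two-outcome gamble, assuming the standard Tversky-Kahneman functional forms (the document does not specify which variants were used):

```python
def weight(p, g):
    """Tversky-Kahneman probability weighting function."""
    return p**g / (p**g + (1 - p) ** g) ** (1 / g)

def value(x, alpha, lam):
    """PT value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

def certainty_equivalent(gain, loss, p, alpha, lam, g_gain, g_loss):
    """CE of a gamble paying `gain` with prob p, `loss` otherwise,
    for one parameter combination theta = (alpha, lam, g_gain, g_loss)."""
    V = weight(p, g_gain) * value(gain, alpha, lam) \
        + weight(1 - p, g_loss) * value(loss, alpha, lam)
    # Invert the value function to express V in token units.
    return V ** (1 / alpha) if V >= 0 else -((-V / lam) ** (1 / alpha))
```

With α = λ = γ⁺ = δ⁻ = 1 this reduces to the expected value of the gamble, which is a handy sanity check.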
Step 4: Filter to Discriminative Trials
First, eliminate trials whose predicted decision does not vary across parameter values. If everyone accepts a trial (or everyone rejects it) regardless of their parameters, it provides no information.
Filtering Results
Discriminative trials retained
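The filter can be sketched as follows, assuming the accept/reject decision is "CE exceeds the trial's cost" (the exact decision rule is not spelled out in the text):

```python
import numpy as np

def is_discriminative(ce_values, cost):
    """Keep a trial only if the accept/reject decision (CE > cost)
    differs across parameter combinations."""
    accept = np.asarray(ce_values) > cost
    return accept.any() and not accept.all()

# Toy example: precomputed CEs for 3 trials across 4 parameter combos.
ces = np.array([
    [5.0, 6.0, 7.0, 8.0],   # always accepted at cost 3 -> uninformative
    [1.0, 4.0, 2.0, 5.0],   # decision flips with theta -> keep
    [0.5, 1.0, 2.0, 2.5],   # always rejected at cost 3 -> uninformative
])
kept = [i for i, row in enumerate(ces) if is_discriminative(row, cost=3.0)]
print(kept)  # [1]
```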
Step 5: Calculate Derivatives
Fisher Information quantifies how much a trial tells you about a parameter. It's based on derivatives: how much does the certainty equivalent change when you tweak α, λ, γ⁺, or δ⁻?
179 Trillion
Derivatives computed using central differences
15,000
CPU-hours on HPC cluster
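Central differences approximate each derivative as (f(θ + h·eᵢ) − f(θ − h·eᵢ)) / 2h, i.e. two CE evaluations per parameter per (trial, θ) pair. A minimal sketch, verified against a function with a known gradient:

```python
def central_diff(f, theta, i, h=1e-4):
    """d f / d theta_i via central differences."""
    up = list(theta); up[i] += h
    dn = list(theta); dn[i] -= h
    return (f(up) - f(dn)) / (2 * h)

# Sanity check on a function with known gradient: f(t) = t0^2 + 3*t1,
# so df/dt0 = 2*t0 and df/dt1 = 3.
f = lambda t: t[0] ** 2 + 3 * t[1]
print(central_diff(f, [2.0, 1.0], i=0))  # ~= 4.0
print(central_diff(f, [2.0, 1.0], i=1))  # ~= 3.0
```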
Step 6: D-optimal Sequential Selection
We don't pick trials at random. A greedy algorithm selects them one by one.
Pick the trial that maximizes information about parameters, given what we already know.
Add it to the set. Update the information matrix.
Repeat until the target number of trials is reached.
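The loop above can be sketched as greedy D-optimal selection: at each step, add the trial whose Fisher contribution (the outer product of its CE gradient) most increases the log-determinant of the accumulated information matrix. The gradients below are random stand-ins, not the study's actual derivatives.

```python
import numpy as np

def greedy_d_optimal(grads, n_select, ridge=1e-6):
    """Greedy D-optimal selection over per-trial gradient vectors.

    grads: (n_trials, n_params) array of CE gradients w.r.t. parameters.
    A small ridge term keeps the information matrix invertible early on.
    """
    n_trials, n_params = grads.shape
    M = ridge * np.eye(n_params)
    chosen = []
    for _ in range(n_select):
        best, best_ld = None, -np.inf
        for t in range(n_trials):
            if t in chosen:
                continue
            cand = M + np.outer(grads[t], grads[t])
            ld = np.linalg.slogdet(cand)[1]  # log det, numerically stable
            if ld > best_ld:
                best, best_ld = t, ld
        chosen.append(best)
        M += np.outer(grads[best], grads[best])
    return chosen, M

rng = np.random.default_rng(0)
grads = rng.normal(size=(40, 4))   # toy stand-in for the real gradients
picked, M = greedy_d_optimal(grads, n_select=8)
```

In practice a rank-one determinant update makes each step O(n_params²) instead of recomputing the determinant from scratch; the brute-force version above keeps the logic visible.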
The Result
A trial set that, by the D-optimality criterion, is among the most statistically efficient designs for estimating Prospect Theory parameters. (Greedy selection does not guarantee the global optimum, but it gets close at a tiny fraction of the cost of exhaustive search.)
Final Selection
286
Optimal Trials
The Result: Precision Estimates
The Fisher Information Matrix tells us how precisely we can estimate each parameter. Precision is measured by standard errors (SE)—smaller is better.
α (Sensitivity)
0.0147
Excellent
γ⁺ (Gain)
0.0296
Very Good
δ⁻ (Loss)
0.0538
Good
λ (Loss Aversion)
0.0761
Acceptable
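Asymptotic standard errors are the square roots of the diagonal of the inverse Fisher Information Matrix. A minimal sketch with a toy diagonal matrix (not the study's actual FIM):

```python
import numpy as np

# Toy FIM: higher information on a parameter -> smaller standard error.
fim = np.diag([100.0, 25.0, 16.0, 4.0])
se = np.sqrt(np.diag(np.linalg.inv(fim)))
print(se)  # [0.1  0.2  0.25 0.5]
```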
Validation
We didn't just trust the math. We ran simulations to verify that it works.
Parameter Recovery
We simulated agents with known parameters playing our game. We then tried to recover those parameters from their choices.
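A stripped-down recovery sketch, using a deliberately simplified one-parameter agent and a softmax choice rule (both assumptions for illustration; the real study fits the full four-parameter PT model):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_accept(ce, cost, temp=1.0):
    """Softmax choice rule: probability of accepting the gamble."""
    return 1 / (1 + np.exp(-(ce - cost) / temp))

def ce(gain, p, alpha):
    """Toy one-parameter model: probability-weighted curved gain."""
    return p * gain ** alpha

# Simulate 500 choices from an agent with a known alpha.
true_alpha = 0.8
gains = rng.uniform(1, 40, size=500)
probs = rng.uniform(0.05, 0.95, size=500)
costs = rng.uniform(1, 10, size=500)
choices = rng.random(500) < p_accept(ce(gains, probs, true_alpha), costs)

# Recover alpha by maximum likelihood over a grid.
def loglik(a):
    q = np.clip(p_accept(ce(gains, probs, a), costs), 1e-12, 1 - 1e-12)
    return np.sum(np.where(choices, np.log(q), np.log(1 - q)))

grid = np.linspace(0.3, 1.3, 101)
recovered = grid[np.argmax([loglik(a) for a in grid])]
```

If the design is informative, `recovered` lands close to `true_alpha`; a flat likelihood over the grid would instead signal a weakly identifying trial set.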
Model Recovery
We simulated agents using different decision models (e.g., EV vs. PT). We checked if our trial set could correctly identify which model they were using.
Next: Making It Real
The math is done. The trials are selected. The design is validated. Now comes the exciting part: building the actual experiment.
Explore Phase 4: Implementation