Phase 3

Building the Machine

How do you pick 286 trials out of 17,955 possible choices?


The Problem

The design calls for 300 trials. But which 300?

Investments range from 1 to 10 tokens, probabilities from 5% to 95%, and outcomes from 0 to 39 tokens. The full factorial space contains 17,955 possible trials.

The challenge:

Not all trials are equally informative. Some tell us a lot about Prospect Theory parameters. Others are redundant or uninformative. We need a systematic method to select the best ones.

Step 1: Generate Trial Space

First, create all possible trials. For each investment level (1–10 tokens), generate all combinations of outcomes and probabilities.

Full Factorial Design

Investments: 1–10 tokens

Outcomes: Net losses & gains

Probabilities: 5% to 95%

Total Generated

17,955

possible trials
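The enumeration can be sketched in a few lines. The exact step sizes and the per-investment outcome grid are not stated here (the three ranges alone don't multiply out to 17,955), so the grids below, including the assumed coupling between investment and outcome range, are purely illustrative:

```python
import itertools

def generate_trial_space():
    """Enumerate candidate trials (illustrative grids, not the study's exact ones)."""
    investments = range(1, 11)                           # 1-10 tokens
    probabilities = [p / 100 for p in range(5, 96, 5)]   # 5% .. 95%
    trials = []
    for inv in investments:
        # Assumed coupling: possible outcomes scale with the stake,
        # reaching the stated 0-39 range at the maximum investment.
        outcomes = range(0, 4 * inv)
        for p, out in itertools.product(probabilities, outcomes):
            trials.append({"invest": inv, "prob": p, "outcome": out})
    return trials

trials = generate_trial_space()
```

With these assumed grids the count differs from 17,955; the pattern, not the number, is the point.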

Step 2: Create Parameter Space

Define the complete 4D parameter space for Prospect Theory. For each parameter, create a fine-grained grid of possible values.

Parameter Grids (50 values each)

α 0.5 to 1.0
γ⁺ 0.4 to 3.0
δ⁻ 0.4 to 3.0
λ 1.0 to 6.0

Combinations

6.25 Million

Parallel Jobs

250
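Building the grid is mechanical. Assuming evenly spaced values (the spacing isn't specified), 50 values per parameter gives 50⁴ = 6,250,000 combinations:

```python
import numpy as np

# Grid bounds from the text; even spacing is an assumption.
alpha       = np.linspace(0.5, 1.0, 50)
gamma_plus  = np.linspace(0.4, 3.0, 50)
delta_minus = np.linspace(0.4, 3.0, 50)
lam         = np.linspace(1.0, 6.0, 50)

# Full 4D grid flattened to one row per parameter combination.
theta = np.stack(
    np.meshgrid(alpha, gamma_plus, delta_minus, lam, indexing="ij"),
    axis=-1,
).reshape(-1, 4)
print(theta.shape)  # (6250000, 4)
```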

Step 3: Calculate Certainty Equivalents

For each of the 17,955 trials paired with each of the 6.25 million parameter combinations, calculate the Certainty Equivalent (CE).

"How much is this gamble worth to a person with these specific risk preferences?"

This required massive parallel computing. We simulated billions of decisions to build a complete map of predicted behavior.

Input: Trial T + Parameters θ → Output: Value V(T, θ)
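A minimal sketch of the CE computation for a two-outcome gamble. The functional forms aren't given in the text, so this assumes the Tversky–Kahneman value function (curvature α, loss aversion λ) and a Prelec-style one-parameter weighting function, with γ⁺ applied to gains and δ⁻ to losses:

```python
import math

def v(x, alpha, lam):
    """Prospect Theory value function (Tversky-Kahneman form, an assumption)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma):
    """Prelec (1998) one-parameter probability weighting (an assumption;
    the text does not name the weighting family)."""
    return math.exp(-(-math.log(p)) ** gamma)

def certainty_equivalent(gain, loss, p, alpha, gamma_plus, delta_minus, lam):
    """CE of a gamble paying `gain` with probability p, else `loss`."""
    V = (w(p, gamma_plus) * v(gain, alpha, lam)
         + w(1 - p, delta_minus) * v(loss, alpha, lam))
    # Invert the value function so the CE is expressed in token units.
    return V ** (1 / alpha) if V >= 0 else -(-V / lam) ** (1 / alpha)
```

Sanity check: with α = λ = γ⁺ = δ⁻ = 1 the model reduces to expected value, so a 50/50 gamble over +10/−5 has CE = 2.5.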

Step 4: Filter to Discriminative Trials

First, eliminate trials that don't vary with parameter values. If everyone accepts a trial (or everyone rejects it) regardless of their parameters, it provides no information.

1 Threshold crossing (CE crosses 0)
2 Sufficient range (Range ≥ 0.5)
3 Avoid consensus (< 95% agreement)

Filtering Results

17,955 → 7,169

Discriminative trials retained
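The three filters can be applied trial by trial to its CE profile across the parameter grid. This sketch assumes "acceptance" means CE > 0 (the gamble beats not investing); the thresholds are the ones stated above:

```python
import numpy as np

def is_discriminative(ce_values, min_range=0.5, consensus=0.95):
    """Keep a trial only if its CE profile across the parameter grid
    passes all three filters from the text."""
    ce = np.asarray(ce_values, dtype=float)
    crosses_zero = ce.min() < 0 < ce.max()            # 1. threshold crossing
    wide_enough = ce.max() - ce.min() >= min_range    # 2. sufficient range
    accept_rate = (ce > 0).mean()                     # 3. avoid consensus
    no_consensus = max(accept_rate, 1 - accept_rate) < consensus
    return crosses_zero and wide_enough and no_consensus
```

A trial everyone accepts (all CEs positive) fails filter 1 immediately and is discarded.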

Step 5: Calculate Derivatives

Fisher Information quantifies how much a trial tells you about a parameter. It's based on derivatives: how much does the certainty equivalent change when you tweak α, λ, γ⁺, or δ⁻?

179 Trillion

Derivatives computed using central differences

15,000

CPU-hours on HPC cluster
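A central difference approximates each partial derivative with two CE evaluations per parameter, sketched here for an arbitrary CE function of the parameter vector:

```python
import numpy as np

def ce_gradient(ce_fn, theta, h=1e-4):
    """Central-difference gradient of the CE with respect to the four
    parameters theta = (alpha, gamma_plus, delta_minus, lam).
    `ce_fn` maps a parameter vector to a certainty equivalent."""
    theta = np.asarray(theta, dtype=float)
    grad = np.empty_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = h
        # (f(θ+h) - f(θ-h)) / 2h: O(h²) accurate, two evaluations per axis.
        grad[i] = (ce_fn(theta + step) - ce_fn(theta - step)) / (2 * h)
    return grad
```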

Step 6: D-optimal Sequential Selection

We don't pick trials at random. A greedy algorithm selects them one at a time.

1

Pick the trial that adds the most information about the parameters, given the trials already selected.

2

Add it to the set. Update the information matrix.

3

Repeat until we have 300 trials.
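The loop above can be sketched as greedy log-determinant maximization. As a simplification (the text doesn't spell out the information computation), each trial's contribution is approximated by the outer product of its CE gradient:

```python
import numpy as np

def greedy_d_optimal(gradients, n_select, ridge=1e-6):
    """Sequential (greedy) D-optimal selection.

    gradients: (n_trials, k) array of per-trial CE gradients; each trial
    contributes g gᵀ to the information matrix. At each step, pick the
    trial maximizing log det of the running information matrix.
    A small ridge keeps early determinants finite."""
    n_trials, k = gradients.shape
    M = ridge * np.eye(k)                 # running information matrix
    chosen = []
    for _ in range(n_select):
        best, best_logdet = None, -np.inf
        for t in range(n_trials):
            if t in chosen:
                continue
            g = gradients[t]
            _, logdet = np.linalg.slogdet(M + np.outer(g, g))
            if logdet > best_logdet:
                best, best_logdet = t, logdet
        M += np.outer(gradients[best], gradients[best])
        chosen.append(best)
    return chosen, M
```

On a toy set of gradients the greedy rule favors trials that cover new parameter directions over duplicates of already-chosen ones.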

The Result

A set of trials selected, by the D-optimality criterion, to be a highly efficient way to estimate Prospect Theory parameters. (Greedy selection is a standard heuristic: it is not guaranteed to find the globally optimal set, but it typically comes very close.)

Final Selection

286

Optimal Trials

The Result: Precision Estimates

The Fisher Information Matrix tells us how precisely we can estimate each parameter. Precision is measured by standard errors (SE)—smaller is better.

α (Sensitivity)

0.0147

Excellent

γ⁺ (Gain)

0.0296

Very Good

δ⁻ (Loss)

0.0538

Good

λ (Loss Aversion)

0.0761

Acceptable
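The step from information to precision is standard: the asymptotic covariance matrix is the inverse of the Fisher Information Matrix, and each SE is the square root of the corresponding diagonal entry. A toy check (not the study's actual matrix):

```python
import numpy as np

def standard_errors(info_matrix):
    """Asymptotic standard errors: sqrt of the diagonal of the inverse
    Fisher Information Matrix."""
    cov = np.linalg.inv(info_matrix)
    return np.sqrt(np.diag(cov))

# A diagonal information matrix with entries 1/SE² reproduces the SEs
# exactly, confirming the formula against the reported numbers.
fim = np.diag([1 / 0.0147**2, 1 / 0.0296**2, 1 / 0.0538**2, 1 / 0.0761**2])
print(standard_errors(fim))  # ≈ [0.0147, 0.0296, 0.0538, 0.0761]
```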

Validation

We didn't just trust the math. We ran simulations to confirm it works.

Parameter Recovery

We simulated agents with known parameters playing our game. We then tried to recover those parameters from their choices.

98% Accuracy
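Parameter recovery can be illustrated end to end with a deliberately simplified one-parameter model (curvature α only, logistic choice noise); the real study recovers all four parameters jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

def ce(gain, loss, p, alpha):
    # Simplified 1-parameter PT (curvature only) for a compact demo.
    v = lambda x: np.sign(x) * np.abs(x) ** alpha
    val = p * v(gain) + (1 - p) * v(loss)
    return np.sign(val) * np.abs(val) ** (1 / alpha)

# 300 random two-outcome trials (illustrative, not the optimized set).
gains = rng.uniform(1, 30, 300)
losses = -rng.uniform(1, 10, 300)
probs = rng.uniform(0.05, 0.95, 300)

# Simulate an agent with a known parameter and logistic choice noise.
true_alpha = 0.8
p_accept = 1 / (1 + np.exp(-ce(gains, losses, probs, true_alpha)))
choices = rng.random(300) < p_accept

# Recover alpha by grid-search maximum likelihood.
grid = np.linspace(0.5, 1.0, 201)
ll = []
for a in grid:
    pa = np.clip(1 / (1 + np.exp(-ce(gains, losses, probs, a))), 1e-9, 1 - 1e-9)
    ll.append(np.sum(np.where(choices, np.log(pa), np.log(1 - pa))))
alpha_hat = grid[int(np.argmax(ll))]
```

If the trial set is informative, `alpha_hat` lands close to the known `true_alpha`; the study repeats this logic across many simulated agents and all four parameters.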

Model Recovery

We simulated agents using different decision models (e.g., EV vs. PT). We checked if our trial set could correctly identify which model they were using.

95% Accuracy

Next: Making It Real

The math is done. The trials are selected. The design is validated. Now comes the exciting part: building the actual experiment.

Explore Phase 4: Implementation