Getting started with spdgt.sdm
What this package does
spdgt.sdm evaluates harvest management alternatives by scoring how well each option satisfies different stakeholder groups. It turns the subjective question “which regulation is best?” into a transparent, defensible comparison. The output is not a single answer but a clear picture of who benefits, who loses, and by how much under each option.
The package wraps the SpeedGoat SDM API. You build a scenario locally using constructors, send it to the API for evaluation, and get back scored results.
What you need
- A SpeedGoat account (authentication is automatic)
- A species
- Two or more management alternatives with metric values – from an IPM, harvest data, or your best estimates
You do not need an IPM. Manual estimates, harvest records, or expert judgment all work.
Quick start with species defaults
The fastest path is to load defaults for your species, add your alternatives, and evaluate:
library(spdgt.sdm)
# Load species-specific objectives and stakeholder groups
objs <- sdm_read_objectives(species = "moose")
grps <- sdm_read_groups(species = "moose")
# Define your management alternatives (unnamed vectors, order matches objectives)
alts <- list(
sdm_alternative(
name = "Status Quo",
description = "Current regulations (20% harvest rate)",
metrics = c(5000, 8.5, 0.987, 0.45, 117)
),
sdm_alternative(
name = "Reduce Harvest",
description = "Conservative approach (10% harvest rate)",
metrics = c(3000, 12.0, 1.084, 0.52, 77)
)
)
# Build the scenario -- assigns obj_1, obj_2, ... to each objective
scn <- sdm_scenario(
name = "Moose Harvest Review 2026",
species = "Moose",
objectives = objs,
groups = grps,
alternatives = alts,
unit = "WMU 300"
)
# Evaluate
results <- sdm_evaluate_scenario(scn)
results$matrix

sdm_read_objectives() and sdm_read_groups() return ready-to-use objects that plug directly into sdm_scenario(). When you call sdm_scenario(), each objective receives a generic key (obj_1, obj_2, …), and weights and metrics are named accordingly. To see the assigned keys, inspect an objective's id field after the scenario is built (e.g., scn$objectives[[1]]$id returns "obj_1").
The building blocks
A scenario has four pieces:
| Component | What it is | How many |
|---|---|---|
| Objectives | Metrics you care about (licenses, lambda, etc.) | 2–7 |
| Stakeholder groups | Archetypes with different priorities | 2–6 |
| Alternatives | Management options being compared | 2–5 |
| Value functions | Curves that define where changes matter most | 1 per objective |
These are connected: each group assigns weights to every objective,
and each alternative provides a metric value for every objective.
Weights and metrics are indexed by generic keys (obj_1,
obj_2, …) assigned by sdm_scenario().
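These length relationships imply a quick sanity check you can run on your own inputs before calling the API. A minimal base-R sketch (check_scenario_inputs is a hypothetical helper, not a package function; it assumes the $weights and $metrics fields shown later in this vignette):

```r
# Hypothetical helper (not part of spdgt.sdm): verify that every group has
# one weight per objective summing to 1, and every alternative has one
# metric per objective.
check_scenario_inputs <- function(objectives, groups, alternatives) {
  n <- length(objectives)
  stopifnot(
    all(vapply(groups, function(g) length(g$weights) == n, logical(1))),
    all(vapply(groups, function(g) abs(sum(g$weights) - 1) < 1e-8, logical(1))),
    all(vapply(alternatives, function(a) length(a$metrics) == n, logical(1)))
  )
  invisible(TRUE)
}

# Plain lists stand in for the constructor objects here
objs <- vector("list", 2)
grps <- list(list(weights = c(0.7, 0.3)), list(weights = c(0.2, 0.8)))
alts <- list(list(metrics = c(5000, 0.98)), list(metrics = c(3000, 1.08)))
check_scenario_inputs(objs, grps, alts)  # errors if anything is misaligned
```

Catching a mismatched vector locally is cheaper than decoding an API error after the fact.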
Understanding objectives
Objectives are the dimensions of a good outcome. A harvest decision affects hunter access, population trajectory, herd structure, and landowner relationships – all at the same time. Listing objectives explicitly prevents the conversation from collapsing into a single metric.
Each objective has:
- An ID – the generic key assigned by sdm_scenario() (e.g., "obj_1")
- A direction – "higher_is_better" or "lower_is_better"
- A range – the c(min, max) bounds for normalization
- A value function – 5 tokens defining the satisfaction curve shape
library(spdgt.sdm)
# Custom objective with diminishing returns
obj <- sdm_objective(
name = "Hunter Opportunity",
description = "Number of licenses issued",
direction = "higher_is_better",
range = c(0, 10000),
value_function = c(8, 5, 4, 2, 1)
)
obj$id # NULL until added to a scenario
#> NULL

The default moose objectives illustrate common patterns:
| Objective | Direction | Range | Value function |
|---|---|---|---|
| Hunter Opportunity | Higher is better | 0–10,000 | [8, 5, 4, 2, 1] |
| Harvest Success | Lower is better | 0–30 | [1, 2, 4, 5, 8] |
| Population Performance | Higher is better | 0.8–1.3 | [10, 6, 2, 1, 1] |
| Herd Composition | Higher is better | 0–1 | [4, 4, 4, 4, 4] |
| Harvest Yield | Higher is better | 0–500 | [6, 5, 4, 3, 2] |
Notice the built-in tension: maximizing licenses and harvest yield typically requires higher harvest rates, but sustaining population growth (lambda above 1.0) often requires lower harvest rates. You cannot max out all five objectives at once – which is exactly why you need a structured comparison.
Understanding value functions
Raw metric values do not translate evenly into satisfaction. Going from 0 to 1,000 hunting licenses is a big deal; going from 9,000 to 10,000 barely registers. Value functions capture where changes matter most along a metric’s range.
The metric range is divided into 5 equal bins. You distribute tokens across the bins. More tokens in a bin means satisfaction changes faster across that part of the range. Think of it like allocating attention – you are telling the engine “pay more attention to changes in this range, less attention over there.”
For Hunter Opportunity with tokens [8, 5, 4, 2, 1]
across 0–10,000 licenses:
| Bin | Range | Tokens | Interpretation |
|---|---|---|---|
| 1 | 0–2,000 | 8 | Satisfaction climbs steeply. Biggest gains. |
| 2 | 2,000–4,000 | 5 | Still meaningful improvement. |
| 3 | 4,000–6,000 | 4 | Moderate gains. |
| 4 | 6,000–8,000 | 2 | Diminishing returns setting in. |
| 5 | 8,000–10,000 | 1 | More licenses barely moves the needle. |
Compare that to Herd Composition’s flat distribution
[4, 4, 4, 4, 4], which says every part of the sex-ratio
range matters equally – a linear value function.
Common curve shapes:
- Diminishing returns [8, 5, 4, 2, 1] – initial gains matter most
- Linear [4, 4, 4, 4, 4] – every increment matters equally
- Accelerating [1, 2, 4, 5, 8] – improvements at the high end matter most
- Threshold [10, 6, 2, 1, 1] – crossing a critical threshold (like lambda = 1.0) dominates
This is where biological and management knowledge matters most. A population growth rate dropping below 1.0 (replacement) is qualitatively different from rising above 1.1 – the value function should reflect that. The curve shape is shared across all stakeholder groups: they may weight the objective differently, but they agree on what “good” and “bad” look like for the metric itself.
Understanding stakeholder groups
Different people care about the same objectives in different proportions. A hunter and a landowner might both agree that harvest matters, but for very different reasons and to very different degrees. By naming stakeholder groups explicitly, you make the decision process transparent.
Groups are archetypes representing common patterns of priority, not
real individuals. Each group assigns weights to every objective. Weights
must sum to 1.0. Pass an unnamed vector – sdm_scenario()
maps weights to objectives in order.
grp <- sdm_group(
name = "Opportunity Hunter",
description = "Prioritizes license availability and total harvest",
weights = c(0.35, 0.05, 0.10, 0.00, 0.50)
)
grp$weights
#> [1] 0.35 0.05 0.10 0.00 0.50

A weight of 0 means that objective is excluded from the group’s evaluation. The Opportunity Hunter does not care about Herd Composition (4th objective = 0), but the Quality Hunter and Conservation Advocate do.
Defining alternatives
Alternatives are the management options you are comparing. Each one needs a metric value for every objective, in the same order as the objectives list.
alt <- sdm_alternative(
name = "Status Quo",
description = "Current regulations (20% harvest rate)",
metrics = c(5000, 8.5, 0.987, 0.45, 117)
)
alt$metrics
#> [1] 5000.000 8.500 0.987 0.450 117.000

Notice the trade-off in the default moose alternatives: Reduce Harvest means fewer licenses and lower total harvest, but a growing population and better herd composition. There is no free lunch – which is exactly why you need the comparison.
Building a scenario from scratch
If you are not using species defaults, build each piece manually:
# Objectives (no metric_key -- id is assigned by sdm_scenario)
objs <- list(
sdm_objective(
name = "Hunter Opportunity",
description = "Number of licenses issued",
direction = "higher_is_better",
range = c(0, 10000),
value_function = c(8, 5, 4, 2, 1)
),
sdm_objective(
name = "Population Growth",
description = "Population growth rate",
direction = "higher_is_better",
range = c(0.8, 1.3),
value_function = c(10, 6, 2, 1, 1)
)
)
# Stakeholder groups (unnamed vectors -- order matches objectives)
grps <- list(
sdm_group(
name = "Hunter",
description = "Wants license availability",
weights = c(0.7, 0.3)
),
sdm_group(
name = "Conservationist",
description = "Wants population sustainability",
weights = c(0.2, 0.8)
)
)
# Alternatives (unnamed vectors -- order matches objectives)
alts <- list(
sdm_alternative(
name = "Status Quo",
description = "Current harvest rate",
metrics = c(5000, 0.98)
),
sdm_alternative(
name = "Conservative",
description = "Reduced harvest rate",
metrics = c(3000, 1.08)
)
)
# Assemble -- assigns obj_1, obj_2, group_1, group_2, alt_1, alt_2
scn <- sdm_scenario(
name = "Simple Example",
species = "Elk",
objectives = objs,
groups = grps,
alternatives = alts
)
scn
#> <sdm_scenario> Simple Example
#> Species: Elk
#> Objectives (2): obj_1, obj_2
#> Groups (2): Hunter, Conservationist
#> Alternatives (2): Status Quo, Conservative
# Evaluate via API
results <- sdm_evaluate_scenario(scn)
results$matrix

Weights and metrics are assigned in order: the first weight in each
group corresponds to obj_1, the second to
obj_2, and so on.
Reading the consequence matrix
The consequence matrix is the core output. Each cell is a satisfaction score from 0 to 100.
results$matrix
#> alternative Hunter Conservationist
#> 1 Status Quo 72.5 31.2
#> 2 Conservative 38.1 85.7

| Color | Score | Meaning |
|---|---|---|
| Green | 70+ | High satisfaction |
| Amber | 40–70 | Moderate satisfaction |
| Red | Below 40 | Low satisfaction |
Look for patterns: does one alternative satisfy all groups reasonably well, or are there sharp trade-offs?
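On this reading, each cell combines the group's per-objective satisfaction scores with its weights. A hedged base-R sketch with invented numbers (a weighted average is the natural reading; the engine's actual aggregation may differ):

```r
# Illustrative: one consequence-matrix cell as a weighted average of
# per-objective satisfaction scores (each 0-100; weights sum to 1).
cell_score <- function(sat, weights) sum(sat * weights)

# A group weighting obj_1 at 0.7 and obj_2 at 0.3, for an alternative
# that scores 80 and 40 satisfaction on those objectives:
cell_score(c(80, 40), c(0.7, 0.3))
#> [1] 68
```

Because weights sum to 1, a cell can never exceed the best per-objective score or fall below the worst, which is why a sharply split matrix signals a genuine trade-off rather than a scaling artifact.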
Running sensitivity analysis
Sensitivity analysis asks: how much would a group’s weights need to change before the best alternative flips? Pass the generic keys assigned by the scenario.
sens <- sdm_run_sensitivity(
scenario = scn,
group = "group_1",
objective = "obj_1",
weight_seq = seq(0, 1, by = 0.05)
)
# Find tipping points
sens[sens$tipping_point, ]

The result has one row per weight value. Columns include
weight, one score column per alternative,
top_alternative (the current winner), and
tipping_point (TRUE where the winner
changes).
Interpreting tipping points:
- No tipping points – the top alternative dominates regardless of weight. The decision is robust.
- Tipping point far from current weight – the ranking only flips at an extreme. The decision is stable.
- Tipping point near current weight – a small change flips the outcome. Worth discussing with stakeholders before finalizing.
Think of it like setting a harvest quota based on an aerial survey. If the population could be 20% lower and the quota is still safe, you proceed with confidence. If a 5% error means overharvest, you pause. Sensitivity analysis applies the same logic to stakeholder opinions.
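The mechanics can be sketched in base R for a two-objective case, where raising obj_1's weight w leaves 1 - w for the other objective (an assumption about how sdm_run_sensitivity redistributes the remainder; the satisfaction scores below are invented):

```r
# Illustrative weight sweep: per-objective satisfaction scores
# (rows = alternatives, columns = obj_1, obj_2), made-up numbers.
sat <- rbind("Status Quo"   = c(75, 30),
             "Conservative" = c(40, 85))
w <- seq(0, 1, by = 0.05)

# Group-level score of each alternative at each weight for obj_1
scores <- sapply(w, function(x) drop(sat %*% c(x, 1 - x)))
winner <- rownames(sat)[apply(scores, 2, which.max)]

# A tipping point is a weight where the winner differs from the previous step
tip <- c(FALSE, winner[-1] != winner[-length(winner)])
w[tip]
#> [1] 0.65
```

Here Conservative wins at low obj_1 weights and Status Quo at high ones, with a single crossover at w = 0.65: if the group's actual weight sits near that value, the ranking is fragile.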
Generating a decision document
Export the analysis as an HTML or PDF decision rationale document:
sdm_generate_report(
scenario = scn,
format = "html",
file = "moose_report.html",
justification = "Reduce Harvest balances stakeholder needs while
maintaining population growth above replacement."
)

The report includes objectives, stakeholder profiles, value function curves, alternatives, the consequence matrix, radar chart, and your written justification. It is designed to be shared with directors and decision-makers.
Complete workflow summary
library(spdgt.sdm)
# 1. Load defaults (or build from scratch)
objs <- sdm_read_objectives(species = "moose")
grps <- sdm_read_groups(species = "moose")
# 2. Define alternatives with your metric values (in objective order)
alts <- list(
sdm_alternative(
name = "Status Quo",
description = "Current regs",
metrics = c(5000, 8.5, 0.987, 0.45, 117)
),
sdm_alternative(
name = "Reduce Harvest",
description = "Conservative approach",
metrics = c(3000, 12.0, 1.084, 0.52, 77)
)
)
# 3. Build scenario -- assigns generic keys
scn <- sdm_scenario(
name = "Moose Review",
species = "Moose",
objectives = objs,
groups = grps,
alternatives = alts
)
# 4. Evaluate -- get consequence matrix and radar data
results <- sdm_evaluate_scenario(scn)
results$matrix
# 5. Test robustness using generic keys
sens <- sdm_run_sensitivity(scn, "group_1", "obj_1")
sens[sens$tipping_point, ]
# 6. Export decision document
sdm_generate_report(
scn,
file = "moose_report.html",
justification = "The analysis supports Reduce Harvest."
)

Customizing defaults
Species defaults are a starting point. You can modify them before building your scenario:
objs <- sdm_read_objectives(species = "moose")
# Inspect an objective
objs[[1]]$name
#> [1] "Hunter Opportunity"
objs[[1]]$range
#> [1] 0 10000
objs[[1]]$value_function
#> [1] 8 5 4 2 1
# Replace an objective with a customized version
objs[[1]] <- sdm_objective(
name = "Hunter Opportunity",
description = "Number of licenses issued",
direction = "higher_is_better",
range = c(0, 5000),
value_function = c(6, 5, 4, 3, 2)
)
# Remove an objective you don't need
objs <- objs[-5]

When you remove objectives, rebuild your groups’ weights to match. Since weights are unnamed at this stage, just drop the 5th element and renormalize so the remaining weights sum to 1:
grps <- sdm_read_groups(species = "moose")
# Drop the weight for the removed objective
grps <- lapply(grps, function(g) {
wts <- unname(g$weights)[-5]
sdm_group(g$name, g$description, wts / sum(wts))
})

After sdm_scenario() is called, each objective’s
id field shows its assigned key:
scn$objectives[[1]]$id
#> [1] "obj_1"

Glossary
| Term | Definition |
|---|---|
| Alternative | A management option being compared (e.g., “Status Quo”) |
| Consequence matrix | Table of satisfaction scores for each alternative–group pair |
| Direction | Whether higher or lower metric values are better |
| ID | Generic key assigned to each objective (obj_N), group (group_N), or alternative (alt_N) by sdm_scenario() |
| Objective | Something the decision should achieve, defined by a metric and direction |
| Satisfaction score | A 0–100 rating computed from metric values, value functions, and weights |
| Stakeholder group | An archetype representing a common pattern of priority |
| Tipping point | The weight value at which the best-ranked alternative changes |
| Value function | A curve converting raw metric values into satisfaction, defined by tokens across 5 bins |
| Weight | How much a group cares about a given objective, relative to the others |