Overview

SpeedGoat APIs return a small number of distinct response shapes. Each shape maps to a specific parse_* function. This vignette documents every response pattern with verified JSON structures, and catalogs the available includes and appends for each resource.

All response structures were verified against the live count-api and the OpenAPI specification at /docs?api-docs.json.

Quick reference

Pattern             Keys                                            Has schema         Has meta  Parser
Collection          schema, data, meta, message                     Yes                Yes       parse_json2tibble()
Single record       schema, data, message                           Yes                No        parse_json2tibble()
Lookup (composite)  schema, data, message                           Yes (array types)  No        parse_json2list()
Export              name, hash, path, disk, url, metadata, message  No                 No        parse_url() + parse_url2df()
Batch result        data, message                                   No                 No        parse_multi2tibble()
Delete              message or empty (204)                          No                 No        handled by api_delete()
Auth                data, message                                   No                 No        parse_json2list()
Async progress      key, status, ...                                No                 No        parse_json2list()

Schema types

The schema field maps column names to type descriptors. Each entry has a type and nullable flag. Six types are used across all endpoints:

Schema type  R type after validate = TRUE  Example columns
integer      integer                       id, project_id, species_id
string       character                     name, abbreviation, created_at
float        double                        age_min_months, default_prop
boolean      logical                       is_first_reproduction, extrapolate
map          list (passed through)         metadata
array        list (passed through)         enum (on entry-column-mappings)

A schema entry looks like this:

{
  "id": {"type": "integer", "nullable": false},
  "name": {"type": "string", "nullable": false},
  "description": {"type": "string", "nullable": true},
  "metadata": {"type": "map", "nullable": true, "key_type": "string", "value_type": "string"}
}

Some IPM endpoints have schema entries with empty objects ({}) for JSON/JSONB columns like metadata and result. These fall through to identity with a warning during type conversion.
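
To make the mapping concrete, here is a minimal base-R sketch (using jsonlite; an illustration, not the package's internal implementation) of applying a schema entry's type to a parsed value. Entries with no type, such as the empty objects mentioned above, fall through to identity:

```r
library(jsonlite)

# Toy schema and record matching the shapes shown above
schema <- fromJSON(
  '{"id": {"type": "integer", "nullable": false},
    "name": {"type": "string", "nullable": false},
    "metadata": {}}',
  simplifyVector = FALSE
)
record <- list(id = 2, name = "Project A", metadata = list(key = "value"))

# Coerce one value according to its schema entry; empty entries ({})
# have no $type, so the value passes through unchanged
coerce <- function(value, entry) {
  type <- entry$type
  if (is.null(type)) return(value)
  switch(type,
    integer = as.integer(value),
    string  = as.character(value),
    float   = as.double(value),
    boolean = as.logical(value),
    value   # map / array pass through as lists
  )
}

typed <- Map(function(v, nm) coerce(v, schema[[nm]]), record, names(record))
```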

Verified endpoints by database schema

Every collection endpoint returns schema, data, meta, message. This was verified for all endpoint groups below.

DB schema Endpoints
api users, files
enum surveys/types, surveys/column-mappings, surveys/entry-column-mappings, count-categories
ipm ipm/estimates, ipm/estimation-runs, ipm/parameters, ipm/prior-estimates, ipm/runs
location regions, management-units, analysis-units, analysis-units/versions, subunits, strata, points
model models, covars, covars/betas, covars/bins, covars/categories, beta-vars, model-covars
project projects, projects/species, projects/age-classes
sight aerial-surveys, aerial-surveys/designs, aerial-surveys/entries, aerial-surveys/files, aircraft

Pattern 1: Collection

Returned by: api_get() on any list endpoint.

Top-level keys: schema, data, meta, message

{
  "schema": {
    "id": {"type": "integer", "nullable": false},
    "name": {"type": "string", "nullable": false},
    "description": {"type": "string", "nullable": true},
    "metadata": {"type": "map", "nullable": true, "key_type": "string", "value_type": "string"}
  },
  "data": [
    {"id": 2, "name": "Project A", "description": null, "metadata": null},
    {"id": 3, "name": "Project B", "description": "Notes", "metadata": {"key": "value"}}
  ],
  "meta": {
    "from": 1,
    "to": 2,
    "path": "https://counts.spdgt.com/api/projects",
    "per_page": 50,
    "current_page": 1,
    "last_page": 1,
    "total": 2,
    "links": []
  },
  "message": "OK"
}

Parse with parse_json2tibble() (default arguments):

resp <- api_get("counts", "projects", pages = list(size = 50))
projects <- parse_json2tibble(resp)

parse_json2tibble() with default arguments:

  1. Extracts the data element (elements = "data").
  2. Reads the schema and coerces column types (validate = TRUE).
  3. Returns a tibble.

Accessing pagination metadata:

raw <- parse_json2list(resp)
raw$meta$total
raw$meta$last_page
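
To collect every page, loop until meta$current_page reaches meta$last_page. A minimal sketch of that loop, with a stand-in fetch_page() in place of api_get() plus parse_json2list() so it runs without a live API:

```r
# Stand-in for api_get() + parse_json2list(): serves 3 fake records,
# 2 per page, in the meta shape documented above
fetch_page <- function(page, per_page = 2) {
  all_rows <- list(list(id = 1), list(id = 2), list(id = 3))
  idx <- seq((page - 1) * per_page + 1, min(page * per_page, length(all_rows)))
  list(
    data = all_rows[idx],
    meta = list(current_page = page,
                last_page = ceiling(length(all_rows) / per_page))
  )
}

# Accumulate data across pages until the last page is reached
records <- list()
page <- 1
repeat {
  resp <- fetch_page(page)
  records <- c(records, resp$data)
  if (resp$meta$current_page >= resp$meta$last_page) break
  page <- page + 1
}
length(records)  # 3
```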

Pattern 2: Single record

Returned by: api_get_id(), api_post(), api_patch().

Top-level keys: schema, data, message (no meta)

Single-record responses include the schema, so validate = TRUE (default) works correctly. The only difference from a collection is that data is a single object instead of an array, and there is no meta.

{
  "schema": {
    "id": {"type": "integer", "nullable": false},
    "name": {"type": "string", "nullable": false},
    "abbreviation": {"type": "string", "nullable": false}
  },
  "data": {
    "id": 2,
    "name": "Project A",
    "abbreviation": "PA"
  },
  "message": "OK"
}

resp <- api_get_id("counts", "projects", id = 2)
record <- parse_json2tibble(resp)

api_post() and api_patch() return the same shape:

body <- tibble::tibble(project_id = 2, species_id = 1, age_class = "Adult")
resp <- api_post("counts", "projects/age-classes", body = body)
created <- parse_json2tibble(resp)

Pattern 3: Lookup (composite)

Returned by: api_get() on /lookup and /lookup/for-display.

Top-level keys: schema, data, message (no meta)

These endpoints return multiple named arrays rather than a single flat table. The schema uses type "array" with "item_type": "object" for each group. Use parse_json2list() and extract what you need.

resp <- api_get("counts", "lookup")
raw <- parse_json2list(resp)

# Available arrays: species, projectSpecies, ageClasses,
# parameters, analysisUnitVersions, analysisUnits, managementUnits
species <- dplyr::bind_rows(raw$data$species)
age_classes <- dplyr::bind_rows(raw$data$ageClasses)

The /lookup/for-display endpoint returns: species, surveyTypes, analysisUnits, managementUnits.

Pattern 4: Export

Returned by: api_export().

Top-level keys: name, hash, path, disk, url, metadata, message

No schema or data. The response provides a temporary authenticated URL.

{
  "name": "AXSZ3j2ahnqWMgez.parquet",
  "hash": "cbe69f04783ba7e3f5dade479e658b63",
  "path": "/exports/species",
  "disk": "uploads",
  "url": "https://storage.googleapis.com/count-api-production-uploads/...",
  "metadata": {"size": 745, "type": "application/octet-stream"},
  "message": "Export generated successfully."
}

The endpoint argument must include the /export segment; the format is appended:

resp <- api_export("counts", "species/export", format = "parquet")
url <- parse_url(resp)
df <- parse_url2df(resp)

Pattern 5: Batch result

Returned by: api_post_multi(), api_patch_multi() via /multiple endpoints.

Top-level keys: data (per-record results), message

{
  "data": [
    {"success": true, "data": {"id": 101, "age_class": "Adult"}, "message": "Created"},
    {"success": false, "data": {"age_class": "Unknown"}, "message": "Validation failed."}
  ],
  "message": "OK"
}

result <- parse_multi2tibble(resp)
# Returns a tibble with columns: data (list), message (chr), success (lgl)
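
A typical follow-up is separating successes from failures. A hedged sketch using a toy tibble with the column layout described above (success, data, message) in place of a real parse_multi2tibble() result:

```r
library(tibble)
library(dplyr)

# Toy stand-in for a parse_multi2tibble() result
result <- tibble(
  success = c(TRUE, FALSE),
  data    = list(list(id = 101, age_class = "Adult"),
                 list(age_class = "Unknown")),
  message = c("Created", "Validation failed.")
)

# Keep only the rejected records, with their error messages
failed <- filter(result, !success)
failed$message
nrow(failed)  # 1
```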

Pattern 6: Delete

api_delete() handles parsing internally. Returns a list with message.

result <- api_delete("counts", "projects/age-classes", id = 42)
result$message

Pattern 7: Auth

auth_me() returns data + message (no schema). Fields include id, project_id, name, email, is_oidc, is_impersonated, and timestamps.

me <- auth_me()
me$data$email
me$data$project_id

Includes

Includes eagerly load related resources. They appear as additional fields in each record. The schema does not include entries for included relations — only the base columns appear in the schema.

Naming convention

Includes use camelCase names matching the Laravel relationship method:

# Correct:
api_get("counts", "management-units",
  includes = "analysisUnit",
  valid_includes = "analysisUnit"
)

# Wrong (causes 500 error): using snake_case like "analysis_unit"

How includes appear in parsed data

parse_json2tibble() converts included relations to list columns:

  • Belongs-to (single parent): each cell is a 1-row tibble.
  • Has-many (child array): each cell is an N-row data frame.

# Belongs-to: species is a single object -> list of 1-row tibbles
resp <- api_get("counts", "projects/age-classes",
  includes = "species",
  valid_includes = "species",
  pages = list(size = 5)
)
tbl <- parse_json2tibble(resp)
tbl$species        # list column
tbl$species[[1]]   # tibble: 1 x 4 (id, name, latin_name, inner_name)

# Has-many: entries is an array -> list of N-row data frames
resp <- api_get("counts", "aerial-surveys",
  includes = "entries",
  valid_includes = "entries",
  pages = list(size = 2)
)
tbl <- parse_json2tibble(resp)
tbl$entries[[1]]   # data.frame: N x 25

Multiple includes

Separate include names with commas. Each becomes its own list column:

resp <- api_get("counts", "aerial-surveys",
  includes = "managementUnit,surveyType,aircraft",
  valid_includes = c("managementUnit", "surveyType", "aircraft"),
  pages = list(size = 5)
)
tbl <- parse_json2tibble(resp)
# Columns: ..., managementUnit, surveyType, aircraft (all list columns)
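
If you need flat columns instead of list columns, tidyr::unnest() handles the belongs-to case. A sketch on a toy tibble (the aircraft columns here are made up for illustration):

```r
library(tibble)
library(tidyr)

# Toy collection with a belongs-to list column of 1-row tibbles
tbl <- tibble(
  id = 1:2,
  aircraft = list(
    tibble(id = 10, tail_number = "N123"),
    tibble(id = 11, tail_number = "N456")
  )
)

# names_sep prefixes the relation name so id columns do not collide
flat <- unnest(tbl, aircraft, names_sep = "_")
names(flat)  # "id" "aircraft_id" "aircraft_tail_number"
```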

Nested includes

Use dot notation to include nested relations:

resp <- api_get("counts", "aerial-surveys",
  includes = "columnMappingVersion.columnMappings",
  valid_includes = c(
    "columnMappingVersion",
    "columnMappingVersion.columnMappings"
  )
)

Available includes by resource

Resource Available includes
aerial-surveys surveyType, managementUnit, columnMappingVersion, aircraft, entries, files, models
aerial-surveys/designs surveyType, subunit, stratum, point, line
aerial-surveys/entries aerialSurvey, subunit, species, point, line
analysis-units version
analysis-units/versions project, species
beta-vars betaOne, betaTwo
count-categories surveyType
covars model, betas, bins, categories
covars/betas covar
covars/bins covar
covars/categories covar
files project, context
ipm/estimates model, species, surveyType, analysisUnit, managementUnit, parameter, ageClass, opinionOnEstimate, project, user
ipm/estimation-runs model, species, surveyType, analysisUnit, managementUnit, project, user
ipm/parameters parent, project, species
ipm/prior-estimates project, species, analysisUnit, ageClass, parameter, createdBy, updatedBy
ipm/runs model, species, analysisUnit, managementUnit, project, user, estimates
management-units analysisUnit, region, project, species
models covars, surveyTypes, betaVars, countCategories, covarBetas, covarBins, covarCategories, aerialSurveys
points line
projects (none — use appends)
projects/age-classes project, species
projects/species project, species
regions project
strata surveyType, parentStratum
subunits managementUnit
surveys/column-mappings surveyType
surveys/entry-column-mappings version
surveys/entry-column-mappings/versions project, surveyType, columnMappings
surveys/types project, species, columnMappings
users project, roles

Appends

Appends add computed attributes to each record. Unlike includes (which load related database rows), appends are derived values that don’t exist as columns in the database.

How appends appear in parsed data

Appended fields appear as additional columns. The data type depends on the append:

Append type    R type in tibble           Examples
Scalar string  character                  url, polygon, centroid, geography
String array   list of character vectors  available_activities, available_vegetation

# Scalar append: url is a character column
resp <- api_get("counts", "files",
  appends = "url",
  valid_appends = "url",
  pages = list(size = 5)
)
tbl <- parse_json2tibble(resp)
tbl$url  # character vector of GCS URLs

# Array append: available_activities is a list column
resp <- api_get("counts", "covars",
  appends = "available_activities,available_vegetation",
  valid_appends = c("available_activities", "available_vegetation"),
  pages = list(size = 3)
)
tbl <- parse_json2tibble(resp)
tbl$available_activities       # list column
tbl$available_activities[[1]]  # Returns: "Bedded", "Standing", "Moving"

Geography appends

Several location resources support polygon, centroid, and geography appends that return GeoJSON or WKT strings:

resp <- api_get("counts", "management-units",
  appends = "polygon,centroid",
  valid_appends = c("polygon", "centroid", "geography"),
  pages = list(size = 5)
)
tbl <- parse_json2tibble(resp)
tbl$polygon   # character: GeoJSON/WKT strings
tbl$centroid  # character: point coordinates
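
If the appended strings are GeoJSON, they can be decoded with jsonlite. A sketch on a toy polygon string (whether a given endpoint returns GeoJSON or WKT should be checked per resource):

```r
library(jsonlite)

# Toy GeoJSON polygon of the kind a polygon append might return
gj <- '{"type": "Polygon", "coordinates": [[[0,0],[1,0],[1,1],[0,0]]]}'

geom <- fromJSON(gj)
geom$type         # "Polygon"
geom$coordinates  # numeric array of ring vertices
```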

Available appends by resource

Resource Available appends
aerial-surveys files.url
aerial-surveys/entries polygon, geography
covars available_activities, available_vegetation
files url
management-units polygon, centroid, geography
models available_activities, available_vegetation
points geography, polygon
projects polygon, geography
regions polygon, centroid, geography
subunits polygon, centroid, geography
surveys/entry-column-mappings/versions titles, headers

Exploring response structure

To inspect any endpoint’s raw response:

resp <- api_get("counts", "projects", pages = list(size = 1))
raw <- httr2::resp_body_json(resp)

# Top-level keys
names(raw)

# Full structure
str(raw, max.level = 2)

# Schema details
str(raw$schema, max.level = 2)

Parser summary

Choosing the right parser

api_get()           -> parse_json2tibble()
api_get_id()        -> parse_json2tibble()
api_post()          -> parse_json2tibble()
api_post_multi()    -> parse_multi2tibble()
api_post_df()       -> parse_multi2tibble()
api_patch()         -> parse_json2tibble()
api_patch_multi()   -> parse_multi2tibble()
api_delete()        -> already parsed (returns list)
api_export()        -> parse_url() then parse_url2df()
api_post_progress() -> parse_json2list()
auth_me()           -> already parsed (returns list)
lookup endpoints    -> parse_json2list() then dplyr::bind_rows()

parse_json2tibble() arguments

Argument  Default  When to change
df        TRUE     Set FALSE for non-standard responses without a data array
validate  TRUE     Set FALSE only if the endpoint lacks a schema
elements  "data"   Change only if the array key is not "data"

All standard collection and single-record endpoints include a schema, so validate = TRUE (the default) works on both patterns.

When to use parse_json2list()

  • Pagination metadata (raw$meta$total, raw$meta$last_page)
  • Lookup endpoints (composite arrays, not tabular)
  • The raw message field
  • A few specific fields rather than the full tibble
  • Inspecting an unfamiliar response before choosing a parser
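
The parser mapping above can be compressed into a small dispatcher on top-level keys. A hedged sketch (not a package function) that suggests a candidate parser for an already-parsed response body:

```r
# Suggest a parser from a response body's top-level keys, following the
# quick-reference table. Illustrative only; lookup endpoints also match
# the schema + data shape and still want parse_json2list().
suggest_parser <- function(body) {
  keys <- names(body)
  if (all(c("schema", "data") %in% keys)) return("parse_json2tibble()")
  if (all(c("url", "hash") %in% keys))    return("parse_url() + parse_url2df()")
  if ("data" %in% keys)                   return("parse_multi2tibble() or parse_json2list()")
  "parse_json2list()"
}

suggest_parser(list(schema = list(), data = list(), meta = list(), message = "OK"))
# "parse_json2tibble()"
```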