
59% of ChatGPT app parameters have no example values

MB Samuel

Of the 3,349 parameters we analyzed across 147 third-party ChatGPT apps, 1,975 of them (59%) include no example value whatsoever. No default, no inline "e.g.," no quoted sample, nothing.

That means when a user asks something like "find me flights to Tokyo next month" or "look up AAPL stock performance," ChatGPT often has to guess what format your API expects for each field. Sometimes it guesses right. Sometimes it doesn't. And when it doesn't, your app either returns an error or silently delivers the wrong results.

We dug into the metadata of every third-party app in the ChatGPT App Store to understand how developers are (and aren't) providing example values. The results reveal a meaningful gap between the best and worst apps, and a straightforward fix that most developers can apply in an afternoon.


What Counts as an "Example Value"?

Before we get into the numbers, here's what we looked for. We counted a parameter as having an example if it met any of these criteria:

  • Default value set: The default_value field is populated (18.7% of all params)
  • Inline "e.g.": The description contains "e.g." with a sample value (10.6%)
  • Explicit "example": The description mentions "example," "for example," or "for instance" (2.6%)
  • Quoted values: The description includes quoted sample values like "USD" or 'America/New_York' (23.3%)
  • Parenthetical examples: The description contains parenthetical format hints like (YYYY-MM-DD) or (e.g., 10) (11.0%)

Many parameters use multiple strategies at once. About 500 parameters combine two or more of these approaches, which tends to produce the clearest metadata overall.
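
For the curious, here's a rough sketch of what that detection logic can look like in practice. This is an illustrative reconstruction, not our exact analysis code; the regexes are simplified stand-ins for the five criteria above (the parenthetical check, in particular, is looser than a real format-hint detector):

```python
import re

# Simplified stand-ins for the five criteria above; illustrative only.
EXAMPLE_SIGNALS = [
    re.compile(r"\be\.g\.", re.IGNORECASE),                    # inline "e.g."
    re.compile(r"\b(for example|for instance|example)\b", re.IGNORECASE),
    re.compile(r"['\"][^'\"]+['\"]"),                          # quoted sample values
    re.compile(r"\([^)]*\)"),                                  # parenthetical hints
]

def has_example(param: dict) -> bool:
    """True if a parameter's metadata carries any example signal."""
    if param.get("default_value") is not None:                 # default value set
        return True
    description = param.get("description") or ""
    return any(signal.search(description) for signal in EXAMPLE_SIGNALS)

print(has_example({"description": "Currency code (e.g., 'USD')."}))  # True
print(has_example({"description": "The search query."}))             # False
```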


The Overall Landscape: 59% of Parameters Are Example-Free

Here's the top-level picture across all 3,349 parameters:

Metric                         Count    Share
Parameters with any example    1,374    41.0%
Parameters with no example     1,975    59.0%

That's a striking majority of fields where ChatGPT has to infer valid input formats from the parameter name and a (sometimes sparse) description alone. To be fair, the 41% with some form of example is better than nothing. But 59% with no example at all means the typical ChatGPT app leaves most of its input surface underspecified.


Example Coverage by Parameter Type

Not all parameter types are equally likely to include examples. Here's how coverage breaks down:

Type       Total    With Example    Coverage
string     1,958    741             37.8%
array      381      190             49.9%
number     310      138             44.5%
integer    250      116             46.4%
boolean    215      102             47.4%
object     155      33              21.3%

String parameters, which are the most common type by far, also have the lowest example coverage (37.8%). That's the worst possible combination: the type most prone to format ambiguity is the one least likely to have format guidance. When a string parameter could plausibly be a date, a currency code, a URL, a name, or a free-text query, the absence of an example forces ChatGPT to guess among all those possibilities.

Object parameters are even worse at 21.3%, though that's partly structural. Objects often represent complex nested structures where a single example in the description may not be practical. Still, the apps that do provide object examples (like Amplitude, which includes full JSON examples for its chart definition parameter) tend to have noticeably better metadata overall.

Boolean parameters have relatively high coverage (47.4%), largely because many apps set default values for boolean flags. A default_value of false counts as an example, and that pattern is common for optional feature toggles.
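
For illustration, a flag like the following counts as having an example under our criteria purely because of its default. This is a hypothetical parameter written as a JSON-Schema-style Python dict; actual app manifests may name the default field differently:

```python
# Hypothetical optional feature toggle; the populated default alone
# qualifies it as "has an example" under the criteria above.
include_archived = {
    "type": "boolean",
    "description": "Whether to include archived items in the results.",
    "default": False,
}
```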


Example Coverage by Category

The gap between the best and worst categories is substantial:

Category           Params    With Examples    Coverage
Lifestyle          237       165              69.6%
Food               66        41               62.1%
Education          52        29               55.8%
Shopping           419       218              52.0%
Finance            620       310              50.0%
Design             216       87               40.3%
Collaboration      234       81               34.6%
Travel             192       63               32.8%
Developer Tools    229       70               30.6%
Business           347       102              29.4%
Productivity       657       175              26.6%
Entertainment      25        5                20.0%

Lifestyle apps lead at nearly 70% coverage, driven by apps like komoot (48/48 params with examples), realestate.com.au (32/32), and Trade Me Property (25/25). These are consumer-facing apps where getting search parameters right is directly tied to user satisfaction.

On the other end, Productivity apps sit at just 26.6%, and Entertainment at 20%. Productivity is a large category with 657 total parameters across 16 apps, so that low coverage rate represents hundreds of underspecified fields. Developer Tools at 30.6% is also notable: the developers building ChatGPT integrations are, ironically, among the least thorough at documenting their own parameters.


Apps That Do It Well

Nine apps achieve 100% example coverage (with at least 5 parameters each). The standouts by parameter volume are:

App                  Category     Params    Coverage
komoot               Lifestyle    48        100%
Insurify             Shopping     40        100%
realestate.com.au    Lifestyle    32        100%
Trade Me Property    Lifestyle    25        100%
StubHub              Lifestyle    17        100%
Internshala          Business     17        100%
Malwarebytes         Lifestyle    12        100%

Just below perfect, DoorDash (10/11, 90.9%) and Intuit QuickBooks (30/33, 90.9%) also demonstrate strong coverage across meaningful parameter counts.

What sets these apps apart isn't just that they provide examples. It's how they provide them. Here are some real descriptions from the dataset:

Morningstar on its investment_tickers parameter:

"List of ticker symbols to look up (e.g., 'AAPL', 'MSFT', 'VTSAX'). Preferred for funds when available."

This is clean and effective. The "e.g." provides three examples spanning different security types (two individual stocks and an index fund), and the trailing note tells the model when to prefer this field over the alternative investment_names field.

DoorDash on its desired_mx_name parameter:

"Optional merchant/store name to filter results (e.g., 'Whole Foods', 'Safeway', 'Target'). If omitted, searches all nearby stores."

Three concrete examples, a clear explanation of what happens when the field is empty, and guidance on an alternative approach. ChatGPT knows exactly what to pass and what to expect.

Intuit TurboTax on its timezone parameter:

"IANA timezone string (e.g., 'America/Chicago', 'America/New_York', 'America/Los_Angeles'). IMPORTANT: DO NOT guess, infer, or fabricate timezone values."

This combines the format name (IANA), three concrete examples, and behavioral guidance for the model. It's a pattern we see consistently in the best-documented apps: format name, example values, and instructions about how to handle edge cases.

Klook on its currency parameter:

"3-letter currency code for prices (e.g. 'USD', 'HKD', 'TWD'). Supported currencies ONLY: SGD, HKD, TWD, ILS, USD, PHP, MYR, CNY, KRW..."

Klook goes further than most by listing every supported value. This is especially valuable for constrained string fields where the AI needs to map user intent ("I want prices in Hong Kong dollars") to a specific code.
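
In schema terms, a closed set like this is also a natural fit for an enum constraint, which removes the guesswork entirely. Here's a hypothetical sketch, not Klook's actual manifest, with the currency list truncated exactly as it is in the original description:

```python
# Hypothetical currency parameter pairing an example-bearing description
# with an enum constraint; list truncated as in the original description.
currency = {
    "type": "string",
    "description": "3-letter currency code for prices (e.g. 'USD', 'HKD', 'TWD').",
    "enum": ["SGD", "HKD", "TWD", "ILS", "USD", "PHP", "MYR", "CNY", "KRW"],
}
```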


Apps That Do It Poorly

Thirteen apps with 5 or more parameters have 0% example coverage. The largest are:

App                    Category           Params    Coverage
Intuit Credit Karma    Finance            36        0%
Zillow                 Lifestyle          20        0%
Hyatt                  Travel             16        0%
Jobkorea               Productivity       16        0%
Jam                    Developer Tools    16        0%
Apple Music            Entertainment      14        0%
Netlify                Developer Tools    12        0%
Lovable                Developer Tools    10        0%
Hex                    Business           10        0%

Intuit Credit Karma stands out because its sibling apps, Intuit TurboTax (82.0% coverage) and Intuit QuickBooks (90.9%), are among the best in the dataset. The same parent company ships radically different metadata quality across its product lines.

Zillow at 0% across 20 parameters is also notable. Every single one of Zillow's parameters relies entirely on the parameter name for ChatGPT to figure out what to send. For a high-profile consumer app handling searches with location, price, and property type filters, that's a lot of guesswork.


The Worst Case: Parameters with No Description and No Example

Beyond the 59% that lack examples, there's an even more concerning subset: parameters that have no description at all and no example of any kind. These are fields where ChatGPT has nothing to work with except the parameter name.

We found 228 parameters (6.8% of all params) in this "completely blind" category. The apps with the most blind parameters are:

App                Blind Params    Total Params
Ramp               35              67
Cloudinary         21              68
Zillow             20              20
Alpaca             20              83
Netlify            12              12
Target             12              31
Intuit TurboTax    12              122

Ramp has 35 parameters with no description and no example, out of 67 total. That means for more than half of its input surface, ChatGPT is operating on parameter names alone. Zillow's situation is even starker: all 20 of its parameters are completely blind.

Intuit TurboTax is an interesting case. It has 100 parameters with examples (82% coverage) but still has 12 completely blind parameters. At 122 total parameters, it's one of the most complex apps in the store, and the blind spots are concentrated in less common tool paths. Even the best apps have gaps when parameter counts get high.


The Date/Time Problem

Date and time fields deserve special attention because format matters enormously for temporal data. We identified 211 date/time-related parameters across the dataset.

Metric                     Count    Share
With format guidance       176      83.4%
Without format guidance    35       16.6%

This is actually better than the overall 41% example coverage rate, which suggests developers are more aware of format sensitivity for date/time fields specifically. But 35 date/time parameters with no format guidance still represents meaningful risk.

Among the 35 without format guidance, we found params like Ramp's start_date and end_date (both with null descriptions), Fireflies' date parameter (described only as "Optional date filter (deprecated)"), and DoneDeal's nctExpiryMin (described as "Minimum time before NCT expiry" with no indication of the expected format).

We wrote about date/time formatting separately and found that 52% of date/time parameters across the broader dataset have no format specification. The higher coverage rate here (83.4%) likely reflects improvements some apps have made since our initial analysis, plus the fact that our example detection in this analysis is slightly more inclusive (counting any form of guidance, not just explicit format strings).
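
If you ship an app yourself, this class of problem is easy to flag before release. Here's a rough sketch of the kind of check you might run over your own schemas; the heuristics are illustrative, not our exact analysis code:

```python
import re

# Illustrative heuristics: a parameter looks temporal if its name does,
# and "blind" if its description offers no recognizable format guidance.
TEMPORAL_NAME = re.compile(r"(date|time|timestamp|expiry)", re.IGNORECASE)
FORMAT_HINT = re.compile(
    r"(YYYY|ISO[ -]?8601|RFC[ -]?3339|unix|epoch|e\.g\.)", re.IGNORECASE
)

def is_blind_datetime(name: str, description: str | None) -> bool:
    """True if a parameter looks temporal but gives no format guidance."""
    if not TEMPORAL_NAME.search(name):
        return False
    return not (description and FORMAT_HINT.search(description))

print(is_blind_datetime("start_date", None))                         # True
print(is_blind_datetime("start_date", "ISO format: 'YYYY-MM-DD'."))  # False
```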


Practical Advice: How to Add Examples Effectively

Based on what we've seen working well across the best apps, here's a practical framework for adding examples to your parameters.

For string parameters (the biggest gap)

The pattern that works best combines a format name, concrete examples, and behavioral guidance:

"IANA timezone string (e.g., 'America/New_York', 'Europe/London').
Determine based on user location or context."

For numeric parameters

Include the unit, range, and a representative value:

"Maximum number of results to return (1-100, default 50).
Use smaller values for quick lookups, larger for comprehensive searches."

For array parameters

Show the expected structure with a realistic example:

"List of ticker symbols (e.g., ['AAPL', 'MSFT', 'VTSAX']).
Pass as a JSON array of strings."

For object parameters

Provide a complete, minimal JSON example. Amplitude does this exceptionally well, including full JSON structures in its chart definition parameter. Even a simple example like {"query": "milk", "quantity": 2} (from DoorDash's checkout items) eliminates ambiguity about the expected shape.
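
As a sketch, that might look like the following (a hypothetical parameter, not DoorDash's actual schema):

```python
# Hypothetical object parameter whose description embeds a complete,
# minimal JSON example of the expected shape.
cart_item = {
    "type": "object",
    "description": (
        "Item to add to the cart. "
        'Example: {"query": "milk", "quantity": 2}. '
        "Both fields are required."
    ),
}
```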

Use multiple strategies

The most effective parameter descriptions in our dataset combine two or more approaches. PitchBook's min_date parameter demonstrates this well:

"Minimum date (ISO format: 'YYYY-MM-DD') to filter report relevance. Only use if the user specifies or the query strongly implies a time constraint."

This single description includes a format name (ISO), a parenthetical format pattern (YYYY-MM-DD), a quoted example, and behavioral guidance. ChatGPT has everything it needs.
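
Putting the framework together, a well-specified tool schema might look like this. Everything below is hypothetical (a generic hotel-search tool, JSON-Schema-style), meant only to show the patterns side by side:

```python
# Hypothetical search-tool parameters combining the patterns above:
# format names, concrete examples, defaults, ranges, and behavioral guidance.
search_hotels_params = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Free-text search query (e.g., 'pet-friendly hotels in Austin').",
        },
        "check_in": {
            "type": "string",
            "description": (
                "Check-in date (ISO format: 'YYYY-MM-DD', e.g., '2026-03-14'). "
                "Only set if the user specifies or clearly implies a date."
            ),
        },
        "currency": {
            "type": "string",
            "description": "3-letter currency code for prices (e.g., 'USD', 'EUR').",
            "default": "USD",
        },
        "max_results": {
            "type": "integer",
            "description": "Maximum number of results to return (1-100).",
            "default": 20,
        },
    },
    "required": ["query"],
}
```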


The Business Case

If 59% of your parameters lack examples, you're essentially trusting ChatGPT to infer valid input formats from context alone. For simple cases (a boolean flag, a well-named ID field), that often works. For complex cases (a date format, a currency code, a structured query syntax), it frequently doesn't.

The fix is not complicated. Adding an "e.g." clause to a parameter description takes about 30 seconds. Doing it well (format name + example + guidance) takes maybe a minute. For an app with 20 parameters, that's roughly 20 minutes of work that directly reduces error rates for every user interaction.

The apps that invest in this consistently rank among the most reliable in the store. The apps that don't are leaving quality on the table.


Methodology

This analysis covers 147 third-party apps in the ChatGPT App Store as of February 2025. We excluded integrations built and maintained by OpenAI (like GitHub, Linear, Slack, and Google Workspace) to focus on apps that companies built and shipped independently.


Want access to the full dataset? Contact us to learn more.