
How ChatGPT apps capture user intent and context
When a user tells ChatGPT "I need a casual Friday outfit," what happens next depends entirely on how well the connected app captures that context. Does the app just receive a raw search string? Or does it also receive structured metadata that tells its backend this is an outfit_look request, not a search for a single product?
We analyzed every parameter across 147 third-party apps in the ChatGPT App Store and found that 66 of them actively capture user intent through their tool parameters. That means nearly half of all apps in the store are doing something smart: they are using the AI layer to understand what the user wants before the API call even happens. And the companies doing it best are turning that understanding into better search results, smarter product recommendations, and valuable product intelligence.
Nearly Half of Apps Are Already Capturing User Context
We identified 105 intent-related parameters across those 66 apps (44.9% of the store). The approaches break down like this:
| Approach | Params | Share |
|---|---|---|
| Free-text (unstructured query or intent string) | 99 | 94.3% |
| Enum-constrained (structured classification) | 4 | 3.8% |
| Hybrid (enum with free-text fallback) | 2 | 1.9% |
The fact that 94% of intent parameters are free-text strings is not a failure. These apps are capturing what the user is looking for, and that is genuinely valuable. When Uber Eats asks the AI model to pass along a "high-level natural language summary of the user's original food-related intent" with examples like "hungry now," "spicy food," or "spanish-themed dinner party for 4," that is a meaningful signal. It tells the backend something about the customer's mindset, not just their search keywords.
Even a simple query parameter with a description like "Specific home service need (e.g., 'repair a leaking pipe', 'install new roof')" (Angi's approach) tells the AI model what kind of string to construct. The model translates the user's conversational request into a format the app's search engine can work with. That translation step is itself a form of intent capture.
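To make that concrete, a well-described free-text parameter might be declared in a tool's JSON Schema roughly like the sketch below (written as a Python dict; the tool name and surrounding structure are illustrative, not Angi's actual definition):

```python
# A minimal sketch of a free-text query parameter whose description
# guides the model toward useful strings. Tool name is hypothetical.
get_pros_tool = {
    "name": "get_home_service_pros",  # illustrative, not Angi's real tool name
    "description": "Find local professionals for a home service need.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                # The description doubles as a prompt: it shapes the string
                # the model constructs from the conversation.
                "description": (
                    "Specific home service need "
                    "(e.g., 'repair a leaking pipe', 'install new roof')."
                ),
            },
        },
        "required": ["query"],
    },
}
```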
The real question is not whether free-text is good or bad. It is whether there are cases where going further, by adding structured metadata on top of the free-text query, creates meaningful business value. The answer, based on a handful of standout apps, is a clear yes.
How the Best Apps Structure User Intent
Just four parameters in the entire store use what developers call "enums": parameters that limit the AI model's response to a predefined set of options rather than accepting arbitrary text. These represent the most deliberate approach to intent capture we found, and the three highlighted below are each tied directly to a business outcome.
Target: Routing to the Right Shopping Experience
Target's multiCategorySearchItems tool includes a parameter called intent_type with five possible values:
| Value | Meaning |
|---|---|
| single_category | User wants items from one product category |
| multi_category | User wants items spanning multiple categories |
| outfit_look | User is looking for a complete outfit or styled look |
| broad_inspiration | User is browsing without a specific product in mind |
| unknown | Intent is unclear |
The description reads: "Model's best interpretation of the user's shopping intent."
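In JSON Schema terms, an enum-constrained parameter like this is typically declared along the following lines (a sketch in Python dict form based on the values above, not Target's actual tool definition):

```python
# Sketch of an enum-constrained intent parameter (illustrative only).
intent_type_param = {
    "intent_type": {
        "type": "string",
        "enum": [
            "single_category",
            "multi_category",
            "outfit_look",
            "broad_inspiration",
            "unknown",
        ],
        "description": "Model's best interpretation of the user's shopping intent.",
    }
}
```

The enum list is the whole trick: the model can only send a value the backend already knows how to handle.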
This is a small but meaningful piece of engineering. Before the API call even happens, the AI model classifies what type of shopping session the user is in. A user saying "I need a blue dress" gets tagged single_category. A user saying "help me put together a casual Friday outfit" gets tagged outfit_look. A user saying "what's trending for spring" gets tagged broad_inspiration.
The business value here is routing. Target's backend can send outfit_look requests to a styling-oriented experience that surfaces coordinated items across categories, while single_category requests go to a traditional product search with tighter filtering. Without that classification, the backend receives a query string and has to guess which experience to serve. With it, the routing decision is made upfront by the AI model, using the full conversational context that the backend never sees.
This also creates a product intelligence loop. If Target sees that 40% of ChatGPT-originated sessions are broad_inspiration, that tells the product team something important about how users interact with an AI shopping interface versus a traditional search bar. That data can shape how they build the experience over time.
StubHub: Separating Browsers from Buyers
StubHub's event-search tool includes intent_level, a two-value enum backed by one of the most detailed parameter descriptions in the entire App Store:
Classify the user's search query intent level. Must be set to either
'HIGH_INTENT' or 'LOW_INTENT'.
HIGH_INTENT queries seek specific events, performers, or teams, even if
the query is ambiguous or incomplete. HIGH_INTENT includes:
- Artist/performer names (full or partial): 'taylor swift', 'zach',
'megan', 'coldplay'
- Team names: 'knicks', 'yankees', 'lakers'
- Specific events: 'super bowl', 'world series', 'wicked broadway'
- Queries with entity + context: 'coldplay las vegas dec 12',
'knicks vs celtics'
The key distinction: If the user is searching FOR a specific who/what
(even if abbreviated or ambiguous), set to 'HIGH_INTENT'.
LOW_INTENT queries are broad, exploratory, or ask what's available. They
indicate the user doesn't know what specific event they want yet.
LOW_INTENT includes:
- Category browsing: 'concerts', 'sports games', 'theater shows',
'comedy'
- Location-based discovery: 'concerts in las vegas',
'basketball games in nyc'
- Venue-based discovery: 'events at madison square garden',
'what is at the sphere'
- Category + filters: 'nba games in nyc next 60 days',
'college basketball', 'motorsport events'
- Genre-based searches: 'rock concerts', 'EDM music', 'broadway shows'
- Exploratory: 'what to do', 'things to do in vegas',
'events tonight'
The key distinction: If the user is asking what's available in a
category/location, set to 'LOW_INTENT'.
This is prompt engineering embedded directly in the tool metadata. StubHub is essentially training the AI model, through the parameter description, to distinguish between "I want to see Taylor Swift" and "what's happening in Vegas this weekend." Those are fundamentally different search intents that should produce fundamentally different results: the first needs ticket listings for a specific performer, the second needs a curated discovery experience.
The level of detail is notable. StubHub does not just say "classify intent as high or low." They provide concrete examples, edge cases (partial names like "zach" still count as high-intent), and a clear decision rule. This kind of guidance is what makes the classification reliable in practice.
From a business perspective, the distinction between a high-intent user (who is close to purchasing tickets for a specific event) and a low-intent user (who is browsing for ideas) affects everything: which results to show, how to sort them, whether to emphasize price or variety, and what follow-up prompts to offer. StubHub is making that determination before the search even runs.
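As a rough illustration of what that upfront classification buys the backend, a handler might branch on intent_level instead of re-parsing the query server-side (hypothetical code, not StubHub's implementation):

```python
# Hypothetical handler (not StubHub's code): the routing decision relies
# on the intent level the model classified before the search ran.
def handle_event_search(query: str, intent_level: str) -> dict:
    if intent_level == "HIGH_INTENT":
        # Specific performer, team, or event: surface listings ready to buy.
        return {"view": "listings", "query": query, "sort": ["date", "price"]}
    # Broad or exploratory query: surface a curated discovery experience.
    return {"view": "discovery", "query": query, "emphasize": "variety"}
```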
Intuit Credit Karma: Shaping Financial Product Recommendations
Intuit Credit Karma's personal_loans.recommendations.list tool has a required loanPurpose parameter with six values: CONSOLIDATE_DEBT, COVER_UNEXPECTED_COST, HOME_IMPROVEMENT, MAJOR_PURCHASE, OTHER, and REFINANCE_CREDIT_CARD.
The description adds a thoughtful detail: "When listing available options, don't use the enum values, use the natural language descriptions instead." This tells the model to present the options in human-friendly language when asking the user what they need a loan for, while keeping the API values machine-readable.
Loan purpose is one of the most consequential intent signals in financial services. It determines which products to recommend, what rates to show, what terms are appropriate, and what regulatory disclosures are required. A user consolidating debt has different needs than a user covering an unexpected cost. By capturing this as a structured enum, Credit Karma ensures the AI model always sends a value their backend knows how to handle, and the recommendation engine can confidently select the right products from the start.
This parameter also demonstrates something important about designing for AI interfaces: the user never sees the enum values. They have a natural conversation about wanting to renovate their kitchen, and the AI model translates that into HOME_IMPROVEMENT behind the scenes. The structured data is for the backend. The conversational experience is for the user. Both benefit.
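One way to implement that split is to pair each machine-readable value with the natural-language label the model should present; the sketch below assumes label wording, since Credit Karma's actual copy isn't visible in the tool metadata:

```python
# Assumed label wording, for illustration only: enum values stay
# machine-readable while the model presents the human-friendly labels.
LOAN_PURPOSES = {
    "CONSOLIDATE_DEBT": "Consolidate existing debt",
    "COVER_UNEXPECTED_COST": "Cover an unexpected cost",
    "HOME_IMPROVEMENT": "Pay for a home improvement project",
    "MAJOR_PURCHASE": "Finance a major purchase",
    "OTHER": "Something else",
    "REFINANCE_CREDIT_CARD": "Refinance credit card balances",
}

loan_purpose_param = {
    "loanPurpose": {
        "type": "string",
        "enum": list(LOAN_PURPOSES),
        "description": (
            "Why the user needs the loan. When listing available options, "
            "don't use the enum values, use the natural language "
            "descriptions instead."
        ),
    }
}
```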
The Hybrid Approach: Structured Categories with Room to Interpret
Two parameters take a middle path, offering a predefined set of values while leaving room for the model to interpret conversational context.
Dupe, a shopping app for finding product alternatives, has a search_category parameter with the description: "Categorize the user's search intent into one of: all, Fashion, Furniture, Home Decor, Other." This gives the model a classification scheme while keeping the categories broad enough that most queries can be confidently categorized. It is a practical way to segment search results without requiring the model to make fine-grained distinctions.
Insurify has userCoverageLevelIntent on its compare_car_insurance_rates tool, described as: "User's intent for coverage level. Should be 'price_sensitive' when user mentions words like 'cheap' or 'most affordable' otherwise 'standard'." This is particularly clever because the description tells the model exactly what linguistic signals to look for. It essentially builds a mini classifier into the parameter metadata: if the user says "cheap," they are price-sensitive; otherwise, assume standard coverage. Insurify can then adjust which quotes and carriers to prioritize.
The Smart Free-Text Approach: Capturing Context Without Enums
Not every app needs enum-constrained parameters to capture useful intent. Several apps use free-text intent parameters in ways that go beyond a basic query string, and the results are worth paying attention to.
Uber Eats passes both a query and a separate intent parameter on its search tool. The intent parameter asks for a "high-level natural language summary of the user's original food-related intent," with examples like "hungry now," "lunch," "spicy food," "healthy dinner," "order from Sweetgreen," or "spanish-themed dinner party for 4." This is a meaningful design choice. The query is what the search engine processes. The intent is a broader context signal that tells the backend why the user is searching. A query for "Thai food" paired with an intent of "quick dinner for one" is a different request than the same query paired with "dinner party for 8."
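A dual-parameter design like that might be declared roughly as follows (a sketch paraphrasing the descriptions above, not Uber Eats' actual schema):

```python
# Sketch of a search tool that captures both the search string and a
# separate, broader intent signal (not Uber Eats' actual definition).
search_parameters = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Search terms for restaurants or dishes, e.g. 'thai food'.",
        },
        "intent": {
            "type": "string",
            "description": (
                "High-level natural language summary of the user's original "
                "food-related intent, e.g. 'hungry now', 'spicy food', "
                "'spanish-themed dinner party for 4'."
            ),
        },
    },
    "required": ["query"],
}
```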
Jam, a developer tools app, takes the most systematic approach we found. It includes a free-text intent parameter ("The intent of the user for this tool call") across six of its seven tools: getDetails, getNetworkRequests, getConsoleLogs, getUserEvents, getScreenshots, and analyzeVideo. Every tool call explicitly asks the model to describe what the user is trying to accomplish. The intent strings are not structured, but they create a consistent log of user goals across every interaction. Over time, that log becomes a product analytics resource: what are users actually trying to do with each tool?
Angi shows how even a single well-described query parameter can capture meaningful context. Its get_angi_pros tool describes the query as "Specific home service need (e.g., 'repair a leaking pipe', 'install new roof')." Compared to a generic "Search query." description, this guidance helps the AI model produce strings that match how Angi's search engine works and reflect the actual service the user needs.
Who Is Capturing the Most Intent?
Apps vary widely in how many intent-related parameters they include:
| App | Category | Intent Params |
|---|---|---|
| Jam | Developer Tools | 7 |
| Canva | Design | 5 |
| Conductor | Business | 5 |
| Hugging Face | Developer Tools | 5 |
| Adobe Photoshop | Design | 3 |
| Figma | Design | 3 |
The number of intent parameters is not the whole story, but it does indicate how many touchpoints an app has for understanding what the user wants. Canva, for example, captures intent through five different query parameters spread across search-designs, search, search-folders, generate-design, and search-brand-templates. Every searchable surface in Canva's integration accepts a query string, which means every interaction generates a signal about user needs.
Intent Capture by Category
The distribution of intent parameters across app categories reflects which types of apps naturally need to understand what users want:
| Category | Intent Params | Apps | Params per App |
|---|---|---|---|
| Business | 18 | 11 | 1.6 |
| Design | 14 | 5 | 2.8 |
| Developer Tools | 14 | 4 | 3.5 |
| Shopping | 12 | 8 | 1.5 |
| Lifestyle | 12 | 9 | 1.3 |
| Finance | 8 | 7 | 1.1 |
| Productivity | 7 | 6 | 1.2 |
| Travel | 5 | 4 | 1.3 |
| Food | 5 | 3 | 1.7 |
| Education | 4 | 4 | 1.0 |
| Collaboration | 3 | 2 | 1.5 |
| Entertainment | 1 | 1 | 1.0 |
Developer Tools apps have the highest intent density (3.5 params per app), driven by Jam's 7-parameter approach and Hugging Face's 5 search-oriented params. Design apps come in second at 2.8, with Canva, Adobe Photoshop, and Figma all exposing multiple intent-capturing surfaces. In absolute terms, Business apps lead with 18 intent parameters, while the consumer-facing Shopping and Lifestyle categories combine for 24, which makes sense since understanding what a customer wants is core to those product experiences.
The Business Case for Capturing User Context
The difference between a generic query string and a structured intent_type enum is not just a technical detail. It connects to three real business outcomes.
Better experience routing. When Target knows a user has outfit_look intent, it can serve a fundamentally different experience than for single_category intent. The query string alone ("casual Friday outfit") might eventually lead to the same place, but the intent classification lets the backend make that decision immediately, without parsing natural language on the server side.
Better conversion. StubHub's intent_level distinction means a high-intent search for "knicks" returns ticket listings sorted by date and price, while a low-intent search for "basketball games in nyc" returns a curated browse experience. Matching the result type to the user's intent reduces friction and increases the likelihood of a transaction.
Better product intelligence. When intent is captured as a structured value, you can measure it. What percentage of your searches are high-intent vs. low-intent? Are outfit_look users converting at a higher rate than broad_inspiration users? With a free-text query field, answering these questions requires natural language processing after the fact. With an enum, you get the answer as a byproduct of the interaction. And even with free-text intent strings (like Jam's approach), you build a corpus of user goals that can inform product roadmap decisions.
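To see how cheap that measurement becomes with structured values, consider a toy aggregation over a hypothetical log of tool calls (field names and values assumed for illustration):

```python
from collections import Counter

# Hypothetical log of ChatGPT-originated tool calls. With a structured
# enum, intent analytics is a one-line aggregation rather than an NLP job.
tool_calls = [
    {"tool": "multiCategorySearchItems", "intent_type": "outfit_look"},
    {"tool": "multiCategorySearchItems", "intent_type": "single_category"},
    {"tool": "multiCategorySearchItems", "intent_type": "broad_inspiration"},
    {"tool": "multiCategorySearchItems", "intent_type": "outfit_look"},
]

intent_distribution = Counter(call["intent_type"] for call in tool_calls)
print(intent_distribution.most_common())
# [('outfit_look', 2), ('single_category', 1), ('broad_inspiration', 1)]
```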
How to Capture User Context That Benefits Your Product
If you are building a ChatGPT app, the data from these 147 apps points to a few practical strategies.
Start by identifying the user signals that matter for your business. Before you think about parameters and enums, ask what user context would change how you handle a request. For a travel app, knowing whether someone is planning a trip or ready to book changes the entire response. For a B2B tool, knowing whether a user is exploring or troubleshooting determines what content to surface. The apps that capture intent well start from a clear understanding of which user signals drive different business outcomes.
Default to open-ended intent parameters. A free-text intent field (like Uber Eats' intent or Jam's per-tool intent) is the right starting point for most apps. It captures nuanced user context that you can analyze later to discover patterns, refine your product, and inform retargeting. The AI model is good at summarizing user intent in natural language, and that richness is lost when you force it into a fixed dropdown. Only graduate to enum constraints when you have clearly distinct categories that drive fundamentally different backend behavior (like Credit Karma's loan purposes, where regulatory requirements differ by category).
Write detailed parameter descriptions, even for free-text fields. The parameter description is your primary channel for guiding the AI model's behavior. StubHub's intent_level description is the standard to aim for: concrete examples, edge cases, and a clear decision rule. Compare that to "Search query." and consider which one will produce more useful results. Even if you stick with a free-text query parameter, a good description makes the difference between receiving "stuff for home" and "repair a leaking pipe."
Consider separating the search query from the intent signal. Uber Eats' approach of using both a query and a separate intent parameter is worth studying. The query feeds the search engine. The intent provides broader context that can inform ranking, filtering, and follow-up suggestions. These serve different purposes, and separating them lets you optimize each independently.
Think about intent data as a product resource, not just a routing mechanism. Every intent parameter, whether structured or free-text, generates data about what your users are trying to accomplish. That data is valuable beyond the immediate interaction. It can inform product decisions (what use cases are growing?), marketing strategy (what language do users actually use?), and feature development (what intents does your app handle poorly?). Designing your intent capture with analytics in mind from the start means you build a feedback loop between user behavior and product improvement.
The ChatGPT App Store is still early. Most apps are shipping a basic integration: a query string, a simple description, and whatever the model figures out. But the best apps, from Target to StubHub to Credit Karma, are showing that the tool metadata itself can be a strategic asset. They are using it to capture user context that benefits both the user experience and the business intelligence behind it.
Methodology
This analysis covers 147 third-party apps in the ChatGPT App Store as of February 2025. We excluded integrations built and maintained by OpenAI (like GitHub, Linear, Slack, and Google Workspace) to focus on apps that companies built and shipped independently.
Intent parameters were identified by matching parameter names (e.g., query, search, intent, goal, reason) and scanning parameter descriptions for keywords like "intent," "purpose," and related terms. Parameters were classified as enum-constrained if they included a defined set of allowed values, free-text if they accepted unstructured strings, and hybrid if they combined both approaches.
Want access to the full dataset? Contact us to learn more.