FoodFinder: AI-powered & barcode food identification for carb entry#2404

Open
taylorpatterson-T1D wants to merge 208 commits into LoopKit:dev from TaylorJPatterson:feat/FoodFinder

Conversation


@taylorpatterson-T1D taylorpatterson-T1D commented Feb 11, 2026

This PR replaces the legacy FoodFinder PR #2329

Summary

FoodFinder adds AI-powered food identification to Loop's carb entry screen, helping people with diabetes quickly and accurately log meals. It integrates directly into the existing Add Carb Entry flow with zero changes to Loop's core dosing logic. Supports Loop 3.10.0.

The Problem We're Solving:

Carb counting is the single hardest daily task for people managing diabetes with Loop. Every meal requires estimating carbohydrate content — and getting it wrong directly impacts Time in Range. Current workflow: the user mentally estimates carbs, types a number, and hopes for the best. There's no assistance, no database lookup, no learning from past meals.

What FoodFinder Does

FoodFinder adds AI-powered food identification directly into Loop's existing Add Carb Entry screen. It provides four ways to identify food and auto-populate carb values:

Search modes:

  • Text search — OpenFoodFacts database lookup with barcode scanning
  • AI image analysis — Take a photo of your plate; AI identifies foods, portions, and macros
  • Voice/dictation — Describe your meal naturally; AI generates nutritional analysis
  • Menu/recipe OCR — Photograph a menu or recipe in multiple languages; on-device OCR extracts text for AI analysis

Key features:

  • BYO API key model — users configure their own AI provider (OpenAI, Anthropic, Google, or any compatible endpoint)
  • Secure API key storage via iOS Keychain
  • Feature toggle (FoodFinder_FeatureFlags.isEnabled) — completely dormant when disabled
  • Analysis history with configurable retention (24 hours / 7 / 14 / 30 days)
  • Favorite food integration with thumbnail persistence
  • AI-recommended absorption times with conservative model anchored to Loop's 3-hour default
  • Advanced dosing notes (FPU, fiber, GI, insulin timing) behind a separate toggle
  • LoopInsights data hooks for future AI-driven tuning recommendations

User-Configurable Settings

All settings are in Loop Settings → FoodFinder:

  • FoodFinder Toggle — Master on/off for the entire feature
  • AI Provider — BYO API key; supports any OpenAI-compatible endpoint with image processing, the Anthropic Messages API, or Google Generative AI
  • API Key — Stored in iOS Keychain (encrypted at rest, excluded from backups)
  • Model Selection — User picks their preferred model (e.g., gpt-4o, claude-sonnet-4-5-20250929, gemini-2.0-flash); be sure the model supports image input
  • Analysis History Retention — Last 24 hours, 7 days, 14 days, or 30 days
  • Advanced Dosing Recommendations — Optional AI-generated dosing context (disabled by default)
  • Advanced API Settings — Custom endpoint paths for self-hosted or Azure deployments
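
The Keychain storage described above could look roughly like this. This is a minimal sketch, not the PR's actual implementation; the service/account names are illustrative, and `ThisDeviceOnly` accessibility is what keeps the item encrypted at rest and out of device backups:

```swift
import Foundation
import Security

/// Sketch of API-key storage with the properties stated above: encrypted at
/// rest and excluded from backups. Service and account names are assumptions.
func storeAPIKey(_ key: String, service: String = "FoodFinder.AIProvider") -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: "apiKey",
    ]
    SecItemDelete(base as CFDictionary)   // replace any existing item
    var add = base
    add[kSecValueData as String] = Data(key.utf8)
    // ThisDeviceOnly: never synced to iCloud Keychain or restored from backup.
    add[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(add as CFDictionary, nil) == errSecSuccess
}
```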

Safety Considerations

  • FoodFinder never boluses automatically. It only pre-populates the carb entry field — the user always reviews, edits, and confirms before any insulin action occurs.
  • Confidence scoring is displayed for every AI analysis (20–97% range). Low-confidence results are visually flagged.
  • The user can edit all values. AI-suggested carbs, absorption time, and serving count are starting points, not final answers.
  • API keys are stored in iOS Keychain — encrypted at rest, never in UserDefaults or plain text.
  • Feature toggle allows complete disable without removing code.
  • No data leaves the device except the API call to the user's own configured AI provider. No analytics, no telemetry, no third-party tracking.
  • Absorption time model — conservative adjustments based on all known nutriments: most meals stay at 3–3.5 hours, with a hard cap of 5 hours

Architecture and impact on existing Loop code

FoodFinder was designed for minimal integration footprint and easy containment within Loop:

  • 34 new files in FoodFinder/ subdirectories, all prefixed FoodFinder_
  • ~29 lines of integration across 3 existing files (CarbEntryView, SettingsView, FavoriteFoodDetailView)
  • No LoopKit modifications — everything lives in the Loop target
  • All FoodFinder code compiles out cleanly when the feature flag is disabled

Modified existing files:

  • CarbEntryView.swift — Embeds FoodFinder_EntryPoint (~5 lines) plus the analysis history picker
  • SettingsView.swift — Adds a FoodFinder settings row and navigation link
  • CarbEntryViewModel.swift — Adds analysis history state and restored-result bindings
  • FavoriteFoodDetailView.swift — Adds an "Analyze with AI" button for saved favorites
  • FavoriteFoodsView.swift — Adds FoodFinder thumbnail support on favorite food rows
  • AddEditFavoriteFoodView.swift — Accepts a pre-populated name/image from FoodFinder
  • AddEditFavoriteFoodViewModel.swift — Passes through the thumbnail image binding

New file locations (all under Loop/):

  • Models/FoodFinder/ (3 files) — Data models, analysis records, input result types
  • View Models/FoodFinder/ (1 file) — Search state, AI analysis, product selection logic
  • Views/FoodFinder/ (8 files) — All FoodFinder UI (entry point, camera, scanner, settings, search, etc.)
  • Services/FoodFinder/ (12 files) — AI analysis, API clients, barcode scanning, voice, image storage, routing
  • Resources/FoodFinder/ (1 file) — Feature flags and configuration
  • Documentation/FoodFinder/ (1 file) — README
  • LoopTests/FoodFinder/ (3 files) — Unit tests for barcode, OpenFoodFacts, and voice search

Screenshots

  1. FoodFinder search bar with text search results
IMG_1590
  2. Barcode scanner camera capture
IMG_1591
  3. AI camera view analyzing a plate of food
IMG_1592
  4. Product info card with nutrition circles and confidence badge
IMG_1593
  5. Menu/recipe analysis showing translated items
IMG_1594
  6. FoodFinder settings page with AI provider configuration
IMG_1595 IMG_1596
  7. Analysis history picker showing recent meals
IMG_1597
  8. Favorite food with FoodFinder thumbnail
IMG_1598

Video Demo

YouTube Demo: https://youtu.be/i8xToAYBe4M

Test plan

  • Enable FoodFinder in Settings → verify toggle works
  • Text search for single word common foods → verify carbs populate correctly
  • Scan a barcode → verify product lookup and nutrition
  • Take a photo of a meal → verify AI identifies foods with reasonable carb estimates
  • Dictate a meal description → in text search, tap the mic button, describe a meal, verify AI generates a nutritional analysis
  • Photograph a menu → verify OCR extracts the text and AI analyzes it
  • Adjust servings on AI result → verify carbs and nutriments scale proportionally
  • Tap "Why X hrs?" pill → verify reasoning expands
  • Save a food as favorite → verify thumbnail and nutrition persist
  • Select from recent analyses → verify history picker works
  • Disable FoodFinder → verify carb entry screen returns to normal

Requesting review by @marionbarker based on availability.

FoodFinder adds barcode scanning, AI camera analysis, voice search, and
text-based food lookup to Loop's carb entry workflow. All feature code
lives in dedicated FoodFinder/ subdirectories with FoodFinder_ prefixed
filenames for clean isolation and portability to other Loop forks.

Integration touchpoints: ~29 lines across 3 existing files
(CarbEntryView, SettingsView, FavoriteFoodDetailView). Feature is
controlled by a single toggle in FoodFinder_FeatureFlags.swift.

New files: 34 (11 views, 3 models, 13 services, 2 view models,
1 feature flags, 1 documentation, 3 tests)
Voice search (microphone button) now uses the AI analysis pipeline
instead of USDA text search, enabling natural language food descriptions
like "a medium bowl of spicy ramen and a side of gyoza". Text-typed
searches continue using USDA/OpenFoodFacts as before.

Changes:
- SearchBar: Add mic button with voice search callback
- SearchRouter: Add analyzeFoodByDescription() routing through AI providers
- SearchViewModel: Add performVoiceSearch() async method
- EntryPoint: Wire VoiceSearchView sheet to AI analysis pipeline
Replace the separate mic button with automatic natural language detection.
When the user dictates into the search field via iOS keyboard dictation,
the text is analyzed: short queries (1-3 words like "apple") use USDA,
while longer descriptive phrases (4+ words like "a medium bowl of spicy
ramen and a side of gyoza") automatically route to the AI analysis path.

Changes:
- SearchBar: Remove mic button and onVoiceSearchTapped parameter
- SearchViewModel: Add isNaturalLanguageQuery() heuristic, route detected
  natural language through performVoiceSearch in performFoodSearch
- EntryPoint: Remove voice search sheet, wire onGenerativeSearchResult
  callback to handleAIFoodAnalysis
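The word-count heuristic described above could be sketched as follows. The function name matches the commit notes; the 1–3 vs. 4+ word split is the behavior stated here, though the shipped implementation may use additional signals:

```swift
import Foundation

/// Sketch of the routing heuristic: short queries ("apple") stay on the
/// USDA/OpenFoodFacts text search, longer descriptive phrases route to the
/// AI analysis path.
func isNaturalLanguageQuery(_ query: String) -> Bool {
    let words = query
        .trimmingCharacters(in: .whitespacesAndNewlines)
        .split(whereSeparator: { $0.isWhitespace })
    // 1-3 words: keyword database search; 4+ words: natural language.
    return words.count >= 4
}
```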
The Python script created group definitions but didn't properly attach
all of them to their parent groups. Fixes:
- Services group → now child of Loop app root (was orphaned)
- Resources group → now child of Loop app root (was orphaned)
- Documentation group → now child of project root (was orphaned)
- ViewModels/FoodFinder → moved from Loop root to View Models group
- Tests/FoodFinder → moved from project root to LoopTests group
…, analysis history

- Fix triple barcode fire by consuming scan result immediately in Combine sink
- Replace AsyncImage with pre-downloaded thumbnail to avoid SwiftUI rebuild issues
- Use smallest OFF thumbnail (100px) with static food icon fallback for slow servers
- Add secure Keychain storage for AI provider API keys
- Add analysis history tracking with FoodFinder_AnalysisRecord
- Consolidate AI provider settings and remove BYOTestConfig
- Remove barcode connectivity pre-check that added 3+ seconds latency per scan
- Add NSCache to ImageDownloader for thumbnail deduplication (50 items, 10MB)
- Remove artificial minimumSearchDuration delay from search and error paths
- Merge duplicate Combine observers into single combineLatest for AI recomputation
- Decode image_thumb_url from OpenFoodFacts API for smallest available thumbnail
- Wrap 369 bare print() calls in #if DEBUG across 8 FoodFinder files
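The NSCache-based thumbnail deduplication mentioned above (50 items, 10 MB) could look roughly like this. `ThumbnailCache` is a hypothetical name; the PR's actual ImageDownloader may be structured differently:

```swift
import Foundation

/// Sketch of a thumbnail cache with the limits stated in the commit notes.
final class ThumbnailCache {
    private let cache = NSCache<NSURL, NSData>()

    init() {
        cache.countLimit = 50                    // at most 50 thumbnails
        cache.totalCostLimit = 10 * 1024 * 1024  // ~10 MB of image data
    }

    func imageData(for url: URL) -> Data? {
        cache.object(forKey: url as NSURL) as Data?
    }

    func store(_ data: Data, for url: URL) {
        // Cost is the byte count, so totalCostLimit bounds overall memory use.
        cache.setObject(data as NSData, forKey: url as NSURL, cost: data.count)
    }
}
```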
…eaders

File consolidations (6 files removed, 2 new files created):

1. FoodFinder_ScanResult.swift + FoodFinder_VoiceResult.swift
   → FoodFinder_InputResults.swift

2. FoodFinder_FavoriteDetailView.swift + FoodFinder_FavoriteEditView.swift
   + FoodFinder_FavoritesView.swift → FoodFinder_FavoritesHelpers.swift

3. FoodFinder_AISettingsManager.swift
   → absorbed into FoodFinder_AIProviderConfig.swift

4. FoodFinder_FavoritesViewModel.swift
   → absorbed into FoodFinder_SearchViewModel.swift

Other changes:
- Fix long analysis titles overflowing the screen by programmatically
  truncating picker row names and constraining food type to 20 chars
- Improve AI prompts for menu/recipe/text image analysis
- Add text-only AI analysis path in AIServiceManager
- Increase AI token budget for multi-item responses
- Standardize all 26 FoodFinder file headers with consistent format
- Add originalAICarbs and aiConfidencePercent fields to
  FoodFinder_AnalysisRecord for tracking AI estimate accuracy
- Add Notification.Name.foodFinderMealLogged for real-time
  meal event observation
- Add MealDataProvider protocol with date-range query interface
  and AnalysisHistoryStore conformance
- Add "Last 30 days" retention option to Analysis History settings
- Absorption time model: conservative adjustments anchored to Loop's
  3-hour default. FPU adds +0/+0.5/+1.0 hr (was +1/+2.5/+4), fiber
  +0/+0.25/+0.5 (was +0/+1/+2), meal size +0/+0.25/+0.5 (was +0/+1/+2).
  Cap reduced from 8 to 5 hours. Updated AI prompt and 3 examples.
- OCR routing fix: raised menu detection threshold from 1 to 5 significant
  lines and always include image on menu path to prevent food photo
  misclassification (fixes "Unidentifiable Food Item" on food photos).
- Inline "Why X hrs?" pill on Absorption Time row replaces standalone
  DisclosureGroup row. Purple centered pill with fixed width, expands
  reasoning on tap. Uses AIAbsorptionTimePickerRow when AI-generated.
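The revised absorption model above could be sketched as follows. The commit states the 3-hour base, the per-factor increments, and the 5-hour cap; the three-tier structure and tier names here are illustrative assumptions:

```swift
import Foundation

/// Sketch of the conservative absorption-time model described above.
enum Tier { case low, medium, high }

func absorptionHours(fpu: Tier, fiber: Tier, mealSize: Tier) -> Double {
    let base = 3.0  // Loop's default absorption time
    func bump(_ tier: Tier, _ steps: (Double, Double, Double)) -> Double {
        switch tier {
        case .low: return steps.0
        case .medium: return steps.1
        case .high: return steps.2
        }
    }
    let total = base
        + bump(fpu, (0, 0.5, 1.0))        // fat/protein units: +0/+0.5/+1.0 hr
        + bump(fiber, (0, 0.25, 0.5))     // fiber: +0/+0.25/+0.5 hr
        + bump(mealSize, (0, 0.25, 0.5))  // meal size: +0/+0.25/+0.5 hr
    return min(total, 5.0)                // hard cap at 5 hours
}
```

With these increments most meals land at 3–3.5 hours, matching the safety note in the PR description.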
@taylorpatterson-T1D taylorpatterson-T1D changed the title FoodFinder: AI-powered food identification for carb entry FoodFinder: AI-powered food identification for carb entry (supports Loop 3.10.0) Feb 11, 2026
taylorpatterson-T1D and others added 11 commits February 12, 2026 13:26
Add LoopInsights feature: an AI-driven therapy settings advisor that analyzes
glucose, insulin, and carb data to suggest adjustments to Carb Ratios, Insulin
Sensitivity Factors, and Basal Rates.

Core components:
- Dashboard with therapy settings overview, pattern detection, and AI suggestions
- Configurable AI provider (OpenAI, Anthropic, Gemini, Grok, self-hosted)
- Data aggregation pipeline with test data fixtures from Tidepool
- Suggestion lifecycle: pending → applied/dismissed with full history
- AI personality settings (Supportive Coach, Clinical Expert, Dry Wit, Tough Love)
- Developer mode with auto-apply and test data toggles
- Secure API key storage via Keychain
- Safety guardrails: max 20% change per adjustment, one setting at a time
- Unit tests for models, data aggregation, and suggestion store

22 new files, 4 modified files across Views, View Models, Models, Services,
Managers, Resources, and Tests.
…ng, and UI refinements

- Wire real therapy settings writes via LoopInsightsSettingsWriter closure
- Schedule splitting: insert new entries when AI suggests times not in user's schedule
- Revert feature: restore pre-apply settings from suggestion history
- Settings Score (0-100) with TIR, Safety, Stability, GMI breakdown
- Clinical reasoning framework: AI now understands AID-specific patterns
  (corrections/day, basal/bolus ratio, time-of-day analysis, cross-setting interactions)
- All three settings visible in every AI prompt for cross-setting reasoning
- Pre-computed red flags injected into prompt (algorithm workload, basal % alerts)
- Stale-data guard: excludes manually reverted changes from recent context
- Suggestion merge: consolidates split AI responses into single cards
- Pre-Fill Editor: editable proposed values before applying
- Auto-applied notification banner
- Debug log with Copy Full Log for troubleshooting AI behavior
- Temperature forced to 0.0 for deterministic analysis
… advisor UI

Add Ask LoopInsights chat with AI advisor powered by therapy context and glucose data.
Background monitoring with configurable frequency and notification banners. New Trends
& Insights view with Daily/Weekly/Monthly/Stats/Advisor tabs. Dark gradient styling
for chat and trends views. Banner now includes Ask button to open chat directly.
…ports

Add clinical goal tracking (TIR, A1C, below-range, custom) with progress bars,
AI-powered 30-day pattern discovery with sick day and negative basal detection,
timestamped reflection journal with mood tags, and HTML-to-PDF report generation
with share sheet. Goals & Patterns accessible from the Dashboard navigation section.
… analysis

Add HealthKit biometric data (heart rate, HRV, steps, sleep, active energy, weight)
to the AI analysis and chat pipelines. Biometrics are read-only, independently
authorized, and gracefully degrade when individual types are unavailable.

New file: LoopInsights_HealthKitManager.swift
Modified: Models, DataAggregator, AIAnalysis, ChatViewModel, Coordinator,
FeatureFlags, SettingsView, DashboardView, pbxproj, Localizable.xcstrings
…nsights, Nightscout import

- Ambulatory Glucose Profile (AGP) chart with percentile bands and median line
- Clarity-style dashboard redesign: Glucose card, Time in Range 5-zone stacked bar,
  capsule period picker with exact Clarity colors (#C14F0C, #F0CA4C, #74A52E, #D36265, #7F0302)
- Caffeine tracker with half-life decay modeling and glucose correlation
- Meal insights with food response analysis and per-meal glucose impact
- Nightscout data import support
- Advanced analyzers for pattern detection
- 5-zone TIR breakdown (Very High/High/In Range/Low/Very Low) replacing 3-zone model
- Compact list section spacing for tighter dashboard layout
- Chat view UI refinements
…card fixes

P1: Parallel HealthKit queries via async let (6 concurrent fetches)
P2: Single-pass TIR zone counting (5-zone) replacing multiple filter passes
P3: Pre-fetch raw data in DataAggregator, cache for cross-component reuse
P4: Binary search for glucose lookups in FoodResponseAnalyzer
P5: Pre-sorted glucose samples with binary search in AdvancedAnalyzers
P6: Pre-compute AGP data in ViewModel instead of SwiftUI view body
P7: Static DateFormatter in LoopInsightsTimeBlock.formatTime
P8: Pre-sort schedule items before dose loops, pre-sort in ViewModel
P9: Pre-convert glucose to parallel arrays avoiding repeated doubleValue calls
P10: Pass precomputed hourly averages to circadian profile builder

Also: enhanced step/activity data in AI prompts with time-of-day breakdowns
and activity-glucose correlation analysis (2h lag), and meal card layout cleanup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
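The binary-search glucose lookup (P4/P5 above) could be sketched like this: given samples pre-sorted by date, find the reading closest to a target time in O(log n) instead of scanning the whole array per lookup. Types and names are illustrative:

```swift
import Foundation

struct GlucoseSample {
    let date: Date
    let mgdl: Double
}

/// Returns the sample closest in time to `target`. `sorted` must be
/// pre-sorted ascending by date (as in the pre-sort steps above).
func nearestSample(in sorted: [GlucoseSample], to target: Date) -> GlucoseSample? {
    guard !sorted.isEmpty else { return nil }
    var lo = 0, hi = sorted.count - 1
    while lo < hi {
        let mid = (lo + hi) / 2
        if sorted[mid].date < target { lo = mid + 1 } else { hi = mid }
    }
    // `lo` is the first sample at/after target; its predecessor may be closer.
    if lo > 0,
       abs(sorted[lo - 1].date.timeIntervalSince(target))
         <= abs(sorted[lo].date.timeIntervalSince(target)) {
        return sorted[lo - 1]
    }
    return sorted[lo]
}
```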
…y fixes

Glucose chart now operates in two modes: standard Ambulatory Glucose Profile
(24-hour overlay with percentile bands) for 14-day lookback, and Glucose Profile
(multi-day time series) for all other periods. Both modes include an info button
explaining the visualization. HealthKit glucose data supplements Loop store for
longer analysis periods. Chart data clears on period change to prevent stale labels.

Additional fixes across 22 files: improved HealthKit data pipeline reliability,
enhanced test data provider, refined food response analysis, and minor bug fixes
in background monitor, coordinator, caffeine tracker, and goals/trends views.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bump all body text, headers, and stat values to full white for readability
on dark backgrounds. Replace .toolbarColorScheme (iOS 16+) with manual
toolbar principal title for compatibility. Restore UINavigationBarAppearance
approach in ChatView.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@taylorpatterson-T1D taylorpatterson-T1D changed the title FoodFinder: AI-powered food identification for carb entry (supports Loop 3.10.0) FoodFinder: AI-powered & barcode food identification for carb entry (supports Loop 3.10.0) Feb 14, 2026
The 20-char limit was truncating food names (e.g. "Baked pastry with f…")
which made them unreadable in LoopInsights Meal Insights. The RowEmojiTextField
maxLength only restricts keyboard input, so longer programmatic values are safe.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ntial stats

Three concurrent async let stats calls each created their own HKHealthStore,
causing malloc heap corruption on real devices with large HealthKit datasets.
Resolves "freed pointer was not the last allocation" SIGABRT on dashboard use.
…ncy window

Two bugs caused Walking to confirm on insufficient activity:
1. After reset, stale pedometer callbacks from the old subscription immediately
   re-triggered the step threshold. Fixed with generation counter.
2. Recency limit inflated with iOS timer delay (30s + delay), letting step data
   from 2+ minutes ago pass. Fixed to a flat 60s window.
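The generation-counter fix (bug 1 above) could be sketched like this: after a reset, callbacks issued by the previous pedometer subscription are ignored because they carry a stale generation value. All names here are illustrative, not the PR's actual types:

```swift
import Foundation

/// Sketch of invalidating stale pedometer callbacks with a generation counter.
final class StepSubscription {
    private var generation = 0
    private(set) var confirmedSteps = 0

    /// Returns a callback bound to the current generation.
    func makeHandler() -> (Int) -> Void {
        let expected = generation
        return { [weak self] steps in
            guard let self = self, self.generation == expected else { return } // stale
            self.confirmedSteps = steps
        }
    }

    /// Invalidates all previously issued handlers.
    func reset() {
        generation += 1
        confirmedSteps = 0
    }
}
```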
@culdeus

culdeus commented Apr 6, 2026


Bug 1: Food description text overflows screen width
In the AI result card, the detected food items list (e.g. "Roasted rack of lamb, Roasted baby potatoes, Roasted cauliflower florets") clips horizontally off the right edge of the screen rather than wrapping or truncating. The content is partially unreadable.
Expected: Text wraps to multiple lines or truncates with ellipsis
Actual: Text runs off-screen to the right

Bug 2: Recent AI Analyses not persisting
After completing an AI food analysis, opening a new carb entry shows "None" under the Recent AI Analyses section — the previous analysis is not stored or surfaced for reuse.
Expected: Recent AI analyses appear for quick selection on subsequent carb entries
Actual: Always shows "None"
Screenshots attached

Text generated by Claude

Add multilineTextAlignment and fixedSize to servingSizeDisplay text
so long food item lists wrap instead of clipping off-screen.
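The wrapping fix described in this commit could look roughly like the following SwiftUI sketch (the view and property names are illustrative): without `fixedSize(horizontal:vertical:)`, a long single-line `Text` inside a constrained container can clip off-screen instead of wrapping.

```swift
import SwiftUI

// Sketch of the fix: allow the text to grow vertically and wrap,
// rather than extend horizontally past the screen edge.
struct ServingSizeLabel: View {
    let servingSizeDisplay: String

    var body: some View {
        Text(servingSizeDisplay)
            .multilineTextAlignment(.leading)
            .fixedSize(horizontal: false, vertical: true) // wrap, don't clip
    }
}
```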
@taylorpatterson-T1D
Author

@culdeus
Bug 1: Food description text overflows screen width - should be repaired now. Rebuild to test.
Bug 2: Recent AI Analyses not persisting - Previously, tapping None pulled down the full list of recent analyses, but that was not intuitive. I changed it so that None is replaced by the name of the last AI-analyzed item, and tapping "Choose Recent" now pulls down the picker. A None fallback remains, so None only shows when analysisHistory is empty. I also added a delete-all trash button to clear the AI history. Rebuild to test.

…tes spacing

- Fix ScrollView horizontal overflow by constraining content width via GeometryReader
- Remove "cal" unit from inside Calories nutrition circle (label below is sufficient)
- Deduplicate analysis history records by name on save
- Add red trash button to clear all recent AI analyses
- Reduce Food Type max length from 50 to 25 chars to prevent row overflow
- Add "Things to watch out for" subheader above revert warnings in reasoning panel
- Move "Choose Recent:" tap target to the label, show last analysis name instead of "None"
- Add spacing above FAVORITE FOODS section
- LoopInsightsChatMessage + Role now Codable
- New LoopInsightsChatTranscript model (id, startedAt, messages, preview)
- LoopInsights_ChatHistoryStore: JSON persistence to applicationSupport/LoopInsights/,
  10-session cap, 30-day retention, prune-on-save
- LoopInsights_ChatHistoryView: conversation list with swipe-to-delete and clear-all;
  LoopInsights_ChatTranscriptView: read-only dark-theme replay of a saved session
- ChatViewModel.saveSessionIfNeeded() called on chat dismiss and on clear
- ChatView: clock history button in toolbar alongside existing trash button
- Localizable.xcstrings: 6 new keys registered by Xcode
# Conflicts:
#	Loop/Localizable.xcstrings
…d steps

Switch averageDailySteps calculation from HKSampleQuery to
HKStatisticsCollectionQuery(.cumulativeSum), which uses HealthKit's own
source-deduplication engine — matching what the Health app displays.
Raw HKSampleQuery sums every source independently, inflating totals on
multi-device setups. Hourly distribution retains the raw query since
relative shape (peak hour, glucose correlation) is unaffected by
uniform bias across sources.
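A minimal sketch of the deduplicated daily-step query described above; using `.cumulativeSum` lets HealthKit merge overlapping samples from multiple sources (phone plus watch) the way the Health app does. The function name and anchor handling are assumptions:

```swift
import HealthKit

/// Sketch: daily step totals via HKStatisticsCollectionQuery, which applies
/// HealthKit's own source deduplication, unlike summing raw HKSampleQuery
/// results per source.
func dailyStepsQuery(anchor: Date) -> HKStatisticsCollectionQuery {
    let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!
    var day = DateComponents()
    day.day = 1
    return HKStatisticsCollectionQuery(
        quantityType: stepType,
        quantitySamplePredicate: nil,
        options: .cumulativeSum,      // deduplicates across sources
        anchorDate: anchor,
        intervalComponents: day
    )
}
```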
…he key

- LocationService: strengthen prompt instruction so AI always opens
  diabetes_considerations with a visible location callout (restaurant
  name, data source, and correction hint) rather than optionally
  mentioning it somewhere in the response
- AIAnalysis: include location name in image cache key so a cached
  result without location context is never served when location has
  since resolved
- Reframe LoopInsights prompts: algorithm corrections are normal AID behavior,
  focus on glucose outcomes instead of algorithm workload
- FoodFinder default temperature 0.3 → 0.0 for deterministic analysis
- Clarify HealthKitManager step count hourly bucketing comment
- New FoodFinder_ImageCropView: interactive crop with draggable handles,
  dimmed overlay, rule-of-thirds grid, crop & analyze or skip
- Camera flow now: capture → crop → analyze (crop is optional)
- AI prompt: new identification priority order — GPS+menu lookup first,
  visual branding second, regional cuisine third, image-only as fallback
- Location service now captures city + country for regional cuisine context
- Location tagging defaults to ON (privacy-safe one-shot GPS)
- FoodFinder temperature set to 0.0 for deterministic analysis
- Title appends " - Venue Name" when geo-locate identifies a restaurant
- Description capped at 140 chars with Show More/Less toggle
- Title line limit increased to 2 for restaurant name to fit
…f-increments

- Temperature 0.0 caused deterministic extreme nutrition values; 0.05
  is nearly deterministic without locking into hallucination paths
- Servings stepper now increments/decrements by 0.5 instead of 0.25
…splay

- Move RECENT AI ANALYSES from Add Carb Entry into FoodFinder Settings
- Re-use button auto-navigates to Add Carb Entry with data pre-filled
- Fix bug: selecting None now clears carb entry fields (was showing 0)
- Add ±Xg confidence range next to confidence badge based on CV data
- Add FoodFinder_ImageCropView.swift to Xcode project (was missing from pbxproj)
- Fix carbsValue out-of-scope in confidence range by recomputing at usage site
…mbnail

- Defer restoredAnalysisResult assignment to onAppear so FoodFinder_EntryPoint's
  .onChange observer is active and triggers full UI restore (circles, thumbnail, etc.)
- Move checkForPendingReUse() from init to CarbEntryView.onAppear
…ttings

Selecting an item shows its details and Re-use button below the picker.
Tap 'Recent Analyses' to expand a scrollable list with food thumbnails.
Select an item to see details and Re-use button. List collapses on selection.
One tap to re-use — no intermediate selection step needed.
Vision API requests (especially Anthropic) regularly take 30-45s for the
API call alone. Combined with OCR + image encoding overhead, 60s was too
tight and caused consistent timeouts on Photos-sourced images.
Root cause: servings was computed as the SUM of per-item USDA serving
multipliers (e.g. salmon=2 + egg=1 + potatoes=3 = 6+ servings) instead
of representing 1 plate/meal. This inflated numberOfServings to 10+ and
scaled all nutrition values up accordingly.

Fix: AI analysis = 1 serving = 1 plate as photographed. The per-item
USDA multipliers are internal to the item-level breakdown, not the
user-facing serving count. originalServings is now always 1.0, and
convertAIResultToFoodProduct no longer divides by the aggregate USDA
serving count.
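The serving-count fix above could be sketched as follows. Types and names are illustrative; the point is that the photographed plate is always one serving, with per-item USDA multipliers confined to the item-level breakdown:

```swift
import Foundation

struct AnalyzedItem {
    let name: String
    let carbs: Double                 // grams, as present on this plate
    let usdaServingMultiplier: Double // internal to the item breakdown only
}

/// Sketch: before the fix, servings summed the multipliers (salmon 2 +
/// egg 1 + potatoes 3 = 6), inflating all scaled nutrition. Now the plate
/// is always 1 serving and carbs are the plain sum of item carbs.
func plateTotals(_ items: [AnalyzedItem]) -> (servings: Double, carbs: Double) {
    let servings = 1.0
    let carbs = items.reduce(0) { $0 + $1.carbs }
    return (servings, carbs)
}
```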
Implements three programmatic safeguards from Street (2026) "Frontier LLMs
require explicit clinical context to avoid training-data anchoring for
insulin pump settings" (https://www.diabettech.com/wp-content/uploads/2026/04/diabettech_preprint-3.pdf):

1. Bounds clamping (recommendation #2): Values exceeding the max change %
   by up to 1.5x are clamped to the limit instead of hard-rejected, with a
   logged validation note.

2. Citation verification (recommendation #3): Glucose values cited in AI
   reasoning are cross-checked against the actual hourly averages sent to
   the model (±2 mg/dL tolerance). Misattribution rate is reported as a
   validation note.

3. Confidence adjustment by data availability: CR analysis with <5 meals or
   ISF analysis with <3 corrections is capped at Low confidence with an
   explicit warning about anchor substitution risk. Basal (the most data-
   driven setting per Street's anchoring spectrum) is not capped.

All changes are in the response parsing layer — no prompt modifications:
- LoopInsights_AIAnalysis.swift: parseResponse() + extractCitedGlucoseValues()
- LoopInsights_Models.swift: validationNotes field on LoopInsightsAnalysisResponse
- LoopInsights_DashboardView.swift: validation notes display in UI
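The bounds-clamping safeguard (#1 above) could be sketched like this: a suggested value that exceeds the max change limit by up to 1.5x is clamped to the limit with a note, and anything beyond that is rejected. Names and the 20% default are taken from the description; the result type is an assumption:

```swift
import Foundation

enum ClampResult: Equatable {
    case accepted(Double)
    case clamped(Double, note: String)
    case rejected
}

/// Sketch of bounds clamping against a max-change-percent safety limit.
func clampSuggestion(current: Double, proposed: Double,
                     maxChangeFraction: Double = 0.20) -> ClampResult {
    let change = abs(proposed - current) / current
    if change <= maxChangeFraction {
        return .accepted(proposed)
    }
    if change <= maxChangeFraction * 1.5 {
        // Within 1.5x of the limit: clamp to the limit and log a note.
        let limit = current * (1 + (proposed > current ? maxChangeFraction
                                                       : -maxChangeFraction))
        return .clamped(limit, note: "Clamped from \(proposed) to \(limit)")
    }
    return .rejected  // beyond 1.5x: hard-reject the suggestion
}
```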
Additional safeguards addressing Street (2026) findings that LLMs cannot
distinguish "I have evidence" from "I should generate something plausible":

1. Stronger refusal mandate in system prompt: Model MUST return empty
   suggestions when data is insufficient to validate a setting. Explicitly
   forbids echoing current values with medium/high confidence when no
   supporting data pattern exists.

2. Explicit AID system context in user prompt: States "Loop (oref-based)"
   with DIA prominently positioned and marked as DO NOT CHANGE. Prevents
   catastrophic DIA collapse to 4-5h textbook values (Street 2026 found
   100% of models defaulted to textbook DIA without this context).

3. Contradiction detection (validation pass #4): Scans reasoning for
   admissions like "cannot be derived", "insufficient data", "no meal data"
   while confidence is medium/high. If found, suppresses the suggestion
   entirely — the model was filling gaps with training priors.

All changes in LoopInsights_AIAnalysis.swift (system prompt, user prompt,
and parseResponse validation layer).
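
The contradiction detection (#3 above) could be sketched like this; the phrase list comes from the commit notes, while the function name and confidence representation are assumptions:

```swift
import Foundation

/// Phrases where the model admits it lacks supporting data.
let insufficiencyPhrases = ["cannot be derived", "insufficient data", "no meal data"]

/// Sketch: suppress a suggestion when the reasoning admits a data gap but
/// confidence is still reported as medium or high — the model was filling
/// gaps with training priors.
func shouldSuppress(reasoning: String, confidence: String) -> Bool {
    let lowered = reasoning.lowercased()
    let admitsGap = insufficiencyPhrases.contains { lowered.contains($0) }
    let claimsConfidence = ["medium", "high"].contains(confidence.lowercased())
    return admitsGap && claimsConfidence
}
```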

Reference: Street T. "Frontier LLMs require explicit clinical context to
avoid training-data anchoring for insulin pump settings." April 2026.
https://www.diabettech.com/wp-content/uploads/2026/04/diabettech_preprint-3.pdf
