Flutter + React Native examples: rewrite with 4-tab nav, orange theme, dark mode #482

Siddhesh2377 wants to merge 5 commits into main from
Conversation
- Replace old app with fresh Flutter project using Riverpod, GoRouter, Material 3
- Orange theme with full light/dark mode support (Inter + JetBrains Mono fonts)
- 4-tab navigation: Chat, Vision, More, Settings
- Intro screen with real SDK initialization progress (per-step status)
- Chat: streaming/non-streaming LLM, markdown rendering, typing indicator, analytics
- Vision: camera preview, single capture, gallery picker, auto-streaming VLM
- STT: batch recording with transcription, mode toggle
- TTS: text input, speech rate control, WAV playback
- Voice Assistant: full STT→LLM→TTS pipeline with session state tracking
- RAG: document picker (PDF/JSON), Q&A chat interface
- Structured Output: schema templates with stream toggle
- Models: browse/download/load with real-time download progress bar
- Settings: generation params, system prompt, tool calling toggle
- SDK provider with all model registrations (LlamaCpp, Genie NPU, ONNX, VLM, embeddings)
- Restrict Android ABI to arm64-v8a to fix native lib build failures

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Too many files changed for review.
📝 Walkthrough

Large refactor of the RunAnywhereAI Flutter example: migrated Android Gradle Groovy to Kotlin DSL, reworked iOS Pod/CocoaPods wiring and manifests, replaced many ChangeNotifier viewmodels and monolithic screens with Riverpod Notifier providers and modular widgets/screens, added a new theme/design system, and introduced an SDK initialization provider; many legacy services and views were removed.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Intro as "IntroScreen\n(Client UI)"
    participant Controller as "IntroController\n(Notifier)"
    participant SDK as "initializeSDK()\n(SDK provider)"
    participant Registry as "RunAnywhere SDK\nModel Registry/Backends"

    Intro->>Controller: user opens app / onLoad
    Controller->>Controller: build() -> schedule _initialize()
    Controller->>SDK: initializeSDK(onProgress)
    SDK->>Registry: register core SDK & backends (LlamaCpp, ONNX, etc.)
    Registry-->>SDK: backend registration progress/events
    SDK-->>Controller: progress callbacks (progress,status)
    Controller-->>Intro: update UI progress
    SDK->>Registry: conditional Genie NPU registration (if available)
    Registry-->>SDK: model registration complete
    SDK-->>Controller: final progress (complete)
    Controller-->>Intro: navigate to /home/chat
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 13
Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (8)
examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart-84-85 (1)
84-85: ⚠️ Potential issue | 🟡 Minor

Permission denial exits silently
If mic permission is denied, the method returns with no user-visible error, leaving the screen with no actionable feedback.
🛠️ Proposed fix
```diff
-    if (!await _recorder.hasPermission()) return;
+    if (!await _recorder.hasPermission()) {
+      state = state.copyWith(
+        recordingState: RecordingState.idle,
+        errorMessage: 'Microphone permission is required for STT.',
+      );
+      return;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart` around lines 84 - 85, The early return on permission denial (the check using _recorder.hasPermission()) silently exits and should notify the UI; update the function that performs the permission check (e.g., startRecording / the method containing _recorder.hasPermission()) to handle denial by reporting an error instead of returning silently — for example, call a provided onPermissionDenied callback or set an error state and show a SnackBar/AlertDialog, and also log the denial (processLogger or debugPrint) so callers and users get actionable feedback.

examples/flutter/RunAnywhereAI/android/app/src/main/AndroidManifest.xml-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor

App label should be user-friendly.

`runanywhere_ai` with underscores will appear in the device's app launcher. Consider using a human-readable name like "RunAnywhere AI".

Proposed fix
```diff
 <application
-    android:label="runanywhere_ai"
+    android:label="RunAnywhere AI"
     android:name="${applicationName}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/android/app/src/main/AndroidManifest.xml` at line 3, The Android manifest uses a machine-oriented label android:label="runanywhere_ai" which will show raw underscores in the launcher; update the android:label to a user-friendly display name (e.g., "RunAnywhere AI") or better point it to a string resource (e.g., `@string/app_name`) and update the corresponding strings.xml (app_name = "RunAnywhere AI") so the launcher shows a human-readable app name; locate the android:label attribute in AndroidManifest.xml and adjust accordingly.

examples/flutter/RunAnywhereAI/ios/Runner/Info.plist-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor

Restore the app name’s intended capitalization.

`Runanywhere Ai` looks like an accidental brand regression and is what users will see on the home screen and in system UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/ios/Runner/Info.plist` at line 10, Update the app name entry in Info.plist: replace the current string value "Runanywhere Ai" used for the CFBundleDisplayName/CFBundleName entry with the properly capitalized brand name (e.g., "RunAnywhere AI") so the name appears correctly in the home screen and system UI.

examples/flutter/RunAnywhereAI/lib/features/settings/settings_controller.dart-46-63 (1)
46-63: ⚠️ Potential issue | 🟡 Minor

Race condition: user changes may be overwritten by `_load()` completing.

`build()` returns the default `SettingsState` immediately while `_load()` runs asynchronously. If the user modifies a setting before `_load()` completes, the loaded values will overwrite the user's changes when `_load()` finishes.

Consider using `FutureProvider` or `AsyncNotifier` to properly await initialization, or add a guard in `_load()` to avoid overwriting user modifications.

♻️ Alternative: Use AsyncNotifier pattern
```dart
final settingsControllerProvider =
    AsyncNotifierProvider<SettingsController, SettingsState>(
  SettingsController.new,
);

class SettingsController extends AsyncNotifier<SettingsState> {
  @override
  Future<SettingsState> build() async {
    final prefs = await SharedPreferences.getInstance();
    return SettingsState(
      useStreaming: prefs.getBool('useStreaming') ?? true,
      systemPrompt:
          prefs.getString('systemPrompt') ?? 'You are a helpful AI assistant.',
      temperature: prefs.getDouble('temperature') ?? 0.8,
      maxTokens: prefs.getInt('maxTokens') ?? 512,
      topP: prefs.getDouble('topP') ?? 1.0,
      toolCallingEnabled: prefs.getBool('toolCallingEnabled') ?? false,
    );
  }

  // ... setters update state.value instead
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/settings/settings_controller.dart` around lines 46 - 63, The current build() in SettingsController triggers async _load() and immediately returns a default SettingsState which lets user edits be overwritten when _load() completes; fix by initializing in a way that awaits preferences or guards against overwriting user changes: either convert the provider to AsyncNotifierProvider and implement build() as an async initializer that returns the loaded SettingsState (move logic from _load() into async build()), or if keeping the current controller add an _initialized/_loading flag and in _load() check whether state has been mutated from the default before assigning (or merge loaded values into existing state only for unset fields) so user modifications made before _load() finishes are not clobbered. Ensure you update references to build(), _load(), settingsControllerProvider and SettingsController accordingly.

examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart-123-125 (1)
123-125: ⚠️ Potential issue | 🟡 Minor

Wrap `deleteSync` in a try-catch to prevent unhandled exceptions.

`File.deleteSync()` can throw `FileSystemException` if the file is locked, already deleted, or the path is invalid. This could crash the app or leave it in an inconsistent state.

🛡️ Proposed fix
```diff
 if (!File(path).path.contains('image_picker')) {
-  File(path).deleteSync();
+  try {
+    File(path).deleteSync();
+  } on FileSystemException {
+    // Ignore cleanup failures for temporary files
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart` around lines 123 - 125, The deletion call File(path).deleteSync() in the VisionController cleanup branch can throw FileSystemException; wrap the deletion in a try-catch inside the same conditional (the block that checks !File(path).path.contains('image_picker')) so the app won't crash—catch FileSystemException (and optionally any other Exception), optionally guard with File(path).existsSync() before deleting, and log or ignore the error using the controller's logger rather than allowing the exception to bubble.

examples/flutter/RunAnywhereAI/lib/features/structured_output/structured_output_screen.dart-43-50 (1)
43-50: ⚠️ Potential issue | 🟡 Minor

State fields are inadvertently cleared when updating single properties.

Both `selectExample` and `toggleStreaming` recreate `_ScreenState` without preserving `response` and `errorMessage`. This means any generated output is lost when the user toggles the streaming option or changes the schema template.

Consider adding a `copyWith` method to `_ScreenState` for partial updates, or explicitly preserve these fields if clearing them is unintentional.

♻️ Proposed fix: Add copyWith to _ScreenState
```diff
 class _ScreenState {
   const _ScreenState({
     this.selectedExample = StructuredExample.recipe,
     this.isGenerating = false,
     this.response = '',
     this.useStreaming = false,
     this.errorMessage,
   });

   final StructuredExample selectedExample;
   final bool isGenerating;
   final String response;
   final bool useStreaming;
   final String? errorMessage;
+
+  _ScreenState copyWith({
+    StructuredExample? selectedExample,
+    bool? isGenerating,
+    String? response,
+    bool? useStreaming,
+    String? errorMessage,
+    bool clearError = false,
+  }) {
+    return _ScreenState(
+      selectedExample: selectedExample ?? this.selectedExample,
+      isGenerating: isGenerating ?? this.isGenerating,
+      response: response ?? this.response,
+      useStreaming: useStreaming ?? this.useStreaming,
+      errorMessage: clearError ? null : (errorMessage ?? this.errorMessage),
+    );
+  }
 }
```

Then update the methods:
```diff
 void selectExample(StructuredExample ex) =>
-    state = _ScreenState(selectedExample: ex, useStreaming: state.useStreaming);
+    state = state.copyWith(selectedExample: ex);

 void toggleStreaming() =>
-    state = _ScreenState(
-      selectedExample: state.selectedExample,
-      useStreaming: !state.useStreaming,
-    );
+    state = state.copyWith(useStreaming: !state.useStreaming);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/structured_output/structured_output_screen.dart` around lines 43 - 50, selectExample and toggleStreaming recreate a new _ScreenState without preserving response and errorMessage, causing generated output to be lost; update _ScreenState to add a copyWith({StructuredExample? selectedExample, bool? useStreaming, String? response, String? errorMessage}) and then change selectExample and toggleStreaming to call state = state.copyWith(selectedExample: ex) and state = state.copyWith(useStreaming: !state.useStreaming) so response and errorMessage are retained (or explicitly pass through response/errorMessage when constructing a new _ScreenState).

examples/flutter/RunAnywhereAI/lib/features/intro/intro_controller.dart-24-32 (1)
24-32: ⚠️ Potential issue | 🟡 Minor

Consider whether navigating after failure is the intended behavior.

On SDK initialization failure, the code sets `isComplete: true` after a 2-second delay, which triggers navigation to `/home/chat`. The user may not realize the SDK failed to initialize, leading to a broken experience (e.g., "No model loaded" errors when attempting to chat).

Consider either:
- Retaining an error state that the intro screen displays with a retry option
- Navigating to a dedicated error/recovery screen
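The first option could be sketched roughly as follows (a hedged sketch only, assuming a Riverpod `Notifier` and hypothetical `hasError`/`statusText` fields on the intro state; the names are illustrative, not the example app's actual API):

```dart
// Sketch: hasError and retryInit() are assumed additions to IntroState/IntroController.
Future<void> _initialize() async {
  try {
    await initializeSDK((progress, status) {
      state = state.copyWith(progress: progress, statusText: status);
    });
    state = state.copyWith(progress: 1.0, isComplete: true);
  } catch (e) {
    // Keep isComplete false so the router does not auto-navigate to /home/chat.
    state = state.copyWith(hasError: true, statusText: 'Setup failed: $e');
  }
}

/// Wired to a "Retry" button the intro screen shows when hasError is true.
void retryInit() {
  state = state.copyWith(hasError: false, progress: 0);
  _initialize();
}
```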
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/intro/intro_controller.dart` around lines 24 - 32, The current failure path in IntroController's SDK init catch block sets state.progress to 1.0 and state.isComplete = true (via state.copyWith) after a 2s delay, which triggers navigation to /home/chat despite the SDK not being initialized; instead preserve an explicit error state and provide a retry flow: remove setting isComplete to true on failure, set a new error fields on state (e.g., statusText = 'Setup failed', hasError = true or errorMessage = e.toString()), and expose a retryInit() method that retries the SDK initialization; update the UI to show an error panel with retry that calls retryInit rather than navigating automatically on state.isComplete.

examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart-189-202 (1)
189-202: ⚠️ Potential issue | 🟡 Minor

TweenAnimationBuilder always animates from 0, causing visual jumps.

The `begin: 0` means every time `progress` updates (e.g., 0.5 → 0.7), the animation restarts from 0 rather than animating from the previous value to the new value. This causes the progress bar to visually reset.

Consider using the `TweenAnimationBuilder`'s implicit behavior by omitting `begin` or tracking the previous progress value:

Proposed fix
```diff
 child: TweenAnimationBuilder<double>(
-  tween: Tween(begin: 0, end: progress),
+  tween: Tween(begin: progress, end: progress),
   duration: const Duration(milliseconds: 400),
   curve: Curves.easeInOut,
```

Alternatively, remove the `TweenAnimationBuilder` and let `LinearProgressIndicator` handle the transition directly since it already supports animation when `value` changes, especially with implicit animations in Material 3.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart` around lines 189 - 202, TweenAnimationBuilder is always restarting from begin: 0 which causes the progress bar to jump; change the code so the tween animates from the previous progress value to the new one (e.g., track a state field previousProgress and use Tween(begin: previousProgress, end: progress) and update previousProgress after animation) or simply remove the TweenAnimationBuilder and pass progress directly into LinearProgressIndicator (it will animate value changes), updating/removing the begin: 0 usage in the TweenAnimationBuilder block (symbols: TweenAnimationBuilder, Tween(begin: …, end: progress), previousProgress state, LinearProgressIndicator, progress).
🧹 Nitpick comments (19)
examples/flutter/RunAnywhereAI/analysis_options.yaml (1)
11-11: Re-enable `avoid_print` to keep a guardrail on sensitive logs.

Line 11 disables a useful lint for an app that may process user prompts/transcripts. Prefer keeping the lint enabled and using an explicit logger in places where console output is intentional.
Suggested config tweak
```diff
-    avoid_print: false
+    avoid_print: true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/analysis_options.yaml` at line 11, The lint rule disablement for avoid_print is currently set to false; re-enable it by setting avoid_print to true in analysis_options.yaml to restore the guardrail against accidental console prints, and update any places using print to use a proper logging abstraction (e.g., a Logger or debug/log functions) so intentional console output remains explicit; target the avoid_print entry in analysis_options.yaml and replace the false value with true, then refactor any direct print calls to the app's logger.

examples/flutter/RunAnywhereAI/lib/features/voice/stt_screen.dart (1)
67-73: Show a model-required message in empty state

When `isModelLoaded` is false, Line 72 still says “Tap the mic…”, but the button is disabled. Show a model-loading hint to avoid a dead-end UX.

💡 Proposed tweak
```diff
-              sttState.isTranscribing
-                  ? 'Transcribing...'
-                  : 'Tap the mic to start recording',
+              !sttState.isModelLoaded
+                  ? 'Load an STT model from the Models screen first'
+                  : sttState.isTranscribing
+                      ? 'Transcribing...'
+                      : 'Tap the mic to start recording',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/voice/stt_screen.dart` around lines 67 - 73, The empty-state text currently shows "Tap the mic to start recording" even when the model isn't loaded; update the conditional in the widget that renders sttState.transcription (inside stt_screen.dart) to check sttState.isModelLoaded and display a model-loading hint (e.g., "Loading speech model… please wait") when isModelLoaded is false, otherwise keep the existing "Tap the mic to start recording"; locate the check around sttState.isTranscribing / sttState.transcription and adjust the ternary to include sttState.isModelLoaded so the UI gives a clear hint when the mic button is disabled.

examples/flutter/RunAnywhereAI/android/build.gradle.kts (1)
14-20: Consider merging duplicate `subprojects` blocks.

Two separate `subprojects` blocks can be consolidated for better readability.

Proposed consolidation
```diff
 subprojects {
     val newSubprojectBuildDir: Directory = newBuildDir.dir(project.name)
     project.layout.buildDirectory.value(newSubprojectBuildDir)
-}
-subprojects {
     project.evaluationDependsOn(":app")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/android/build.gradle.kts` around lines 14 - 20, There are two separate subprojects blocks; merge them into a single subprojects { ... } block so both behaviors remain: set val newSubprojectBuildDir = newBuildDir.dir(project.name) and assign project.layout.buildDirectory.value(newSubprojectBuildDir), and also call project.evaluationDependsOn(":app") in the same block; update/remove the duplicate block (symbols: subprojects, newSubprojectBuildDir, newBuildDir, project.layout.buildDirectory, project.evaluationDependsOn(":app")) to keep semantics identical and improve readability.

examples/flutter/RunAnywhereAI/android/settings.gradle.kts (2)
2-9: Handle missing `local.properties` gracefully.

If `local.properties` doesn't exist (common in fresh clones or CI environments), `file("local.properties").inputStream()` will throw a `FileNotFoundException` before reaching the `require` check. Consider adding existence validation.

Proposed fix for better error handling
```diff
 val flutterSdkPath = run {
+    val localPropertiesFile = file("local.properties")
+    require(localPropertiesFile.exists()) { "local.properties file not found. Please run 'flutter pub get' first." }
     val properties = java.util.Properties()
-    file("local.properties").inputStream().use { properties.load(it) }
+    localPropertiesFile.inputStream().use { properties.load(it) }
     val flutterSdkPath = properties.getProperty("flutter.sdk")
     require(flutterSdkPath != null) { "flutter.sdk not set in local.properties" }
     flutterSdkPath
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/android/settings.gradle.kts` around lines 2 - 9, The current flutterSdkPath block reads local.properties unconditionally and will throw FileNotFoundException; update the block that defines val flutterSdkPath to first check file("local.properties").exists() (or handle absence via fallback) before calling inputStream(), and if the file is missing call require(...) with a clear message like "local.properties not found; set flutter.sdk in local.properties or provide alternate configuration" (or alternatively read from an environment variable as a fallback); keep the rest of the logic that loads java.util.Properties, reads properties.getProperty("flutter.sdk") and requires it non-null.
20-24: Consider using the version catalog for consistency.

Plugin versions in `settings.gradle.kts` are hardcoded rather than sourced from `libs.versions.toml`. This creates minor inconsistency with the root project's declared versions (Kotlin `2.1.21` vs `2.2.20`, AGP `8.11.2` vs `8.11.1`). Centralizing version management in the catalog improves maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/android/settings.gradle.kts` around lines 20 - 24, The plugin versions in the plugins block (ids: "dev.flutter.flutter-plugin-loader", "com.android.application", "org.jetbrains.kotlin.android") are hardcoded and diverge from the root catalog; change these to read their versions from the project's version catalog (libs.versions.toml) instead of literal strings — e.g., replace version "8.11.1" and "2.2.20" with the corresponding catalog properties (like libs.versions.androidGradlePlugin and libs.versions.kotlin), and update the plugins block in settings.gradle.kts to reference those catalog entries so plugin versions remain centralized and consistent with the root project.

examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart (1)
96-101: Consider capturing immediately when auto-streaming starts.

Currently, the first capture happens after a 2500ms delay. If immediate feedback is desired, consider calling `captureAndDescribe()` right after starting the timer.

♻️ Optional: Capture immediately
```diff
 void _startAutoStreaming() {
   state = state.copyWith(isAutoStreaming: true);
+  captureAndDescribe(); // Immediate first capture
   _streamTimer = Timer.periodic(const Duration(milliseconds: 2500), (_) {
     captureAndDescribe();
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart` around lines 96 - 101, The first capture in _startAutoStreaming is delayed by the 2500ms Timer; call captureAndDescribe() immediately when enabling auto-streaming so the user gets instant feedback. In _startAutoStreaming (which sets state.copyWith(isAutoStreaming: true) and assigns _streamTimer), invoke captureAndDescribe() once right after updating state (or just before starting Timer.periodic) and then keep the periodic Timer to continue regular captures.

examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart (1)
71-86: Consider using `Theme.of(context)` for consistency.

`_SectionHeader` receives `ThemeData` as a parameter while `_FeatureTile` retrieves it via `Theme.of(context)`. For consistency across the codebase, consider using the same approach in both widgets.

♻️ Optional: Use Theme.of(context)
```diff
 class _SectionHeader extends StatelessWidget {
-  const _SectionHeader({required this.title, required this.theme});
+  const _SectionHeader({required this.title});

   final String title;
-  final ThemeData theme;

   @override
   Widget build(BuildContext context) {
+    final theme = Theme.of(context);
     return Text(
       title,
       style: AppTypography.labelLarge.copyWith(
         color: theme.colorScheme.onSurfaceVariant,
       ),
     );
   }
 }
```

Then update call sites:
```diff
-  _SectionHeader(title: 'Voice & Audio', theme: theme),
+  const _SectionHeader(title: 'Voice & Audio'),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart` around lines 71 - 86, _SectionHeader currently accepts a ThemeData parameter but other widgets like _FeatureTile use Theme.of(context); remove the theme parameter from the _SectionHeader constructor and fields, update its build to obtain ThemeData via Theme.of(context) instead, and then update all call sites that pass theme to stop supplying it (adjust constructor usages to const _SectionHeader(title: ...)). Ensure you update the constructor signature and any references to the removed field.

examples/flutter/RunAnywhereAI/lib/features/settings/settings_screen.dart (1)
164-179: Consider getting theme from context in `_SectionTitle`.

`_SectionTitle` receives `theme` as a parameter while `_ToolItem` retrieves it via `Theme.of(context)`. For consistency, `_SectionTitle` could also use `Theme.of(context)` internally, reducing the parameter count.

Optional simplification
```diff
 class _SectionTitle extends StatelessWidget {
-  const _SectionTitle(this.title, this.theme);
+  const _SectionTitle(this.title);

   final String title;
-  final ThemeData theme;

   @override
   Widget build(BuildContext context) {
+    final theme = Theme.of(context);
     return Text(
       title,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/settings/settings_screen.dart` around lines 164 - 179, The _SectionTitle widget currently takes a ThemeData parameter but should obtain theme via Theme.of(context) for consistency with _ToolItem; update the _SectionTitle constructor to remove the ThemeData parameter, remove the final ThemeData theme field, and in the build method call Theme.of(context) to get the ThemeData and use its colorScheme.onSurfaceVariant; then update all call sites to stop passing a theme argument (search for uses of _SectionTitle(..., theme) and remove the extra parameter).

examples/flutter/RunAnywhereAI/lib/features/rag/rag_screen.dart (1)
124-126: Consider visual distinction for loading and error states.

The `_DocumentBanner` only distinguishes between `loaded` and non-loaded states visually. If `RagDocumentStatus` includes `loading` or `error` variants, users won't see visual feedback for those states (e.g., a spinner during loading, or error styling on failure).

Example enhancement
```dart
color: switch (status) {
  RagDocumentStatus.loaded =>
      theme.colorScheme.primaryContainer.withValues(alpha: 0.3),
  RagDocumentStatus.loading => theme.colorScheme.secondaryContainer,
  RagDocumentStatus.error => theme.colorScheme.errorContainer,
  _ => theme.colorScheme.surfaceContainerHighest,
},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/rag/rag_screen.dart` around lines 124 - 126, The _DocumentBanner currently only branches on status == RagDocumentStatus.loaded and treats all other cases the same; update the color (and optionally add spinner/error icon) logic to handle RagDocumentStatus.loading and RagDocumentStatus.error distinctly by switching on the status (reference: _DocumentBanner, the status variable, and RagDocumentStatus enum) — map loading to a loading visual (e.g., secondaryContainer or show a CircularProgressIndicator), map error to error styling (e.g., errorContainer and/or error icon), keep loaded as primaryContainer.withValues(alpha: 0.3) and default to surfaceContainerHighest for unknown states.

examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart (1)
47-49: Potential excessive scrolling during streaming.

The listener on `streamingContent` will fire on every token received during streaming, calling `_scrollToBottom()` repeatedly. While the 50ms delay provides some debouncing, rapid token generation could still queue many scroll animations.

Consider throttling or using a flag to prevent overlapping scroll animations:
Optional throttle approach
```dart
bool _isScrolling = false;

void _scrollToBottom() {
  if (!_scrollController.hasClients || _isScrolling) return;
  _isScrolling = true;
  Future.delayed(const Duration(milliseconds: 50), () {
    if (_scrollController.hasClients) {
      _scrollController.animateTo(
        _scrollController.position.maxScrollExtent,
        duration: const Duration(milliseconds: 200),
        curve: Curves.easeOut,
      );
    }
    _isScrolling = false;
  });
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart` around lines 47 - 49, The listener on chatControllerProvider.select((s) => s.streamingContent) currently calls _scrollToBottom() for every token which can queue many overlapping animations; update _scrollToBottom() to guard against concurrent animations (e.g., add a private bool like _isScrolling and return early if true or if !_scrollController.hasClients), set _isScrolling = true before starting the delayed/animated scroll and reset it to false after completion, or implement a simple throttle/debounce inside _scrollToBottom() so rapid calls are coalesced rather than starting new animateTo() calls.

examples/flutter/RunAnywhereAI/lib/data/models/chat_message.dart (1)
27-27: Consider reusing a single Uuid instance.

Creating `const Uuid()` for each `ChatMessage` instantiation works but allocates a new instance each time. A static or top-level `Uuid` instance would be slightly more efficient.

💡 Reuse Uuid instance
```diff
+const _uuid = Uuid();
+
 class ChatMessage {
   ChatMessage({
     String? id,
     required this.role,
     required this.content,
     this.thinkingContent,
     this.analytics,
     DateTime? timestamp,
-  }) : id = id ?? const Uuid().v4(),
+  }) : id = id ?? _uuid.v4(),
        timestamp = timestamp ?? DateTime.now();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/data/models/chat_message.dart` at line 27, The ChatMessage constructor currently calls const Uuid() for each instance (id = id ?? const Uuid().v4()); replace this by reusing a single Uuid instance: declare a top-level or static final Uuid (e.g., uuid or _uuid) and call that instance's v4() in the ChatMessage constructor (use uuid.v4() instead of const Uuid().v4()) so you avoid allocating a new Uuid on every ChatMessage creation.

examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart (2)
114-129: Consider making Genie model definitions more maintainable.

The inline record syntax with multiple fields per model is concise but hard to read and modify. Consider extracting this to a separate data structure or configuration file for better maintainability.
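One possible shape (a hedged sketch: `ModelDefinition` is a hypothetical helper class, and its fields only mirror the record fields this review names — slug, name, mem, quant, chips):

```dart
// Sketch only: ModelDefinition is an assumed helper, not an existing SDK type.
class ModelDefinition {
  const ModelDefinition({
    required this.slug,
    required this.name,
    required this.mem,
    required this.quant,
    required this.chips,
  });

  final String slug;
  final String name;
  final String mem;
  final String quant;
  final List<NPUChip> chips;
}

const genieModels = <ModelDefinition>[
  // One entry per model, moved out of the registration loop
  // (or loaded from a JSON asset instead of being hardcoded here).
];
```

The registration loop would then iterate `genieModels` and call `Genie.addModel` per chip, keeping all model metadata in one place.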
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart` around lines 114 - 129, Extract the inline record list genieModels into a reusable data structure (e.g., a ModelDefinition class or const List<ModelDefinition> or external JSON/asset) and update the loop that calls Genie.addModel to consume that structure; specifically create a ModelDefinition with fields slug, name, mem, quant, chips, populate it with the current entries (replacing the inline record tuples), and update the for-loop that references genieModels, as well as usages of NPUChip (chip.downloadUrl, chip.identifier, chip.displayName) and Genie.addModel to read from ModelDefinition to keep model metadata centralized and easier to maintain.
10-43: Add error handling for SDK initialization failures.

If any step in `initializeSDK` throws (e.g., `sdk.RunAnywhere.initialize()`, `LlamaCpp.register()`, or `Genie.register()`), the error will propagate up without a meaningful status update. Consider wrapping critical sections in try-catch to provide user-friendly error feedback.

🛡️ Proposed error handling pattern
```diff
 Future<void> initializeSDK(ProgressCallback onProgress) async {
+  try {
     onProgress(0.05, 'Initializing core SDK...');
     await sdk.RunAnywhere.initialize();
     debugPrint('SDK initialized');
     // ... rest of initialization
     onProgress(1.0, 'Ready');
+  } on Exception catch (e) {
+    onProgress(-1.0, 'Initialization failed: ${e.toString()}');
+    rethrow;
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart` around lines 10 - 43, Wrap the critical operations inside initializeSDK (calls to sdk.RunAnywhere.initialize, LlamaCpp.register, _registerGenieModels and other backend/model registration calls) in try-catch blocks so any thrown exception is caught and reported via onProgress with a failure message (e.g., onProgress(1.0, 'Initialization failed: <brief message>')) and optionally log the error before rethrowing or returning; ensure each catch references the specific failing step (e.g., "RunAnywhere.initialize failed", "LlamaCpp.register failed", "_registerGenieModels failed") so the UI shows a clear, user-friendly status update when initialization fails.

examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart (2)
17-24: Consider error handling for camera initialization. The `initCamera()` call in `initState` doesn't handle potential failures (e.g., no cameras available, permission denied). The controller's `initCamera()` method returns early if no cameras exist, but errors from `controller.initialize()` would be unhandled.

🛡️ Proposed enhancement with error handling
```diff
 @override
 void initState() {
   super.initState();
   Future.microtask(() {
-    ref.read(visionControllerProvider.notifier).initCamera();
+    ref.read(visionControllerProvider.notifier).initCamera().catchError((e) {
+      // Error is handled in controller via state.errorMessage
+    });
   });
 }
```

Alternatively, ensure the `VisionController.initCamera()` method wraps its body in a try-catch and updates `state.errorMessage` on failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart` around lines 17 - 24, The initCamera() call in _VisionScreenState.initState currently ignores failures; either await and catch errors here or add robust try/catch inside VisionController.initCamera to surface failures to UI. Specifically, update _VisionScreenState.initState to call ref.read(visionControllerProvider.notifier).initCamera() and handle the Future with .catchError or async/await + try/catch to set an error state or log, or modify VisionController.initCamera to wrap controller.initialize() in try/catch and set state.errorMessage (and appropriate state) when initialization fails so the UI can react to permission/camera errors.
71-96: Display error state when present. The screen checks `visionState.isModelLoaded` but doesn't display `visionState.errorMessage` when it's set. Consider showing an error banner or snackbar when `errorMessage` is non-null.

💡 Proposed error display
```diff
 body: !visionState.isModelLoaded
     ? _NoModelView(theme: theme)
+    : visionState.errorMessage != null
+        ? Center(
+            child: Padding(
+              padding: AppSpacing.pagePadding,
+              child: Text(
+                visionState.errorMessage!,
+                style: theme.textTheme.bodyMedium?.copyWith(
+                  color: theme.colorScheme.error,
+                ),
+                textAlign: TextAlign.center,
+              ),
+            ),
+          )
     : Column(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart` around lines 71 - 96, The UI currently only branches on visionState.isModelLoaded and never surfaces visionState.errorMessage; update the build logic in vision_screen.dart to check visionState.errorMessage (from the VisionState) and display an inline error banner or SnackBar when it's non-null—e.g., if errorMessage is present show a top Banner/Alert or call ScaffoldMessenger to show a SnackBar with visionState.errorMessage before/above rendering _NoModelView or the main Column; ensure the error clears or is dismissible and reference the same visionState and the visionControllerProvider.notifier if you need to acknowledge/dismiss the error.

examples/flutter/RunAnywhereAI/lib/features/models/models_screen.dart (1)
97-112: Add exhaustive handling for ModelCategory switch. The switch uses a wildcard (`_`) fallback for unknown categories. If new categories are added to `ModelCategory`, this could silently fall back to generic styling. Consider handling all known categories explicitly.

💡 Explicit category handling
```diff
 Color get _categoryColor => switch (model.category) {
       ModelCategory.language => theme.colorScheme.primary,
       ModelCategory.multimodal => theme.colorScheme.tertiary,
       ModelCategory.speechRecognition => theme.colorScheme.secondary,
       ModelCategory.speechSynthesis => theme.colorScheme.error,
+      ModelCategory.embedding => theme.colorScheme.inversePrimary,
       _ => theme.colorScheme.onSurfaceVariant,
     };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/models/models_screen.dart` around lines 97 - 112, Update the two getters (_categoryLabel and _categoryColor) to handle every ModelCategory enum member explicitly instead of using the wildcard `_` fallback; replace the `switch (model.category) { ... _ => ... }` patterns with exhaustive case arms for each ModelCategory value declared in the enum so the analyzer will force updates when new categories are added (or if you must keep a fallback, use an explicit unreachable/assert/throw in the default branch). This change should be applied to the _categoryLabel and _categoryColor getters and reference the ModelCategory enum members by name so new enum values cannot silently fall back to generic styling.

examples/flutter/RunAnywhereAI/lib/features/chat/widgets/chat_input_bar.dart (1)
67-85: Clarify send behavior for multiline input. With `textInputAction: TextInputAction.newline`, the Enter key inserts a newline rather than triggering `onSubmitted`. This is likely intentional for multi-line messages, but users expecting Enter to send may be confused. Consider adding a keyboard shortcut (e.g., Ctrl+Enter or Shift+Enter to send) or documenting the expected behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/chat/widgets/chat_input_bar.dart` around lines 67 - 85, The current TextField in ChatInputBar uses textInputAction: TextInputAction.newline so Enter inserts a newline and onSubmitted isn't triggered; to support sending with a keyboard shortcut, wrap the TextField in a RawKeyboardListener (or use Shortcuts/Actions) inside the ChatInputBar and detect Enter with modifier keys (e.g., Ctrl+Enter or Shift+Enter) to call the existing _handleSend() method while leaving plain Enter to insert a newline; keep textInputAction as TextInputAction.newline, update the widget tree around the TextField reference, and ensure focus and key event handling only trigger _handleSend() when widget.hasModel is true and the input is non-empty.

examples/flutter/RunAnywhereAI/lib/core/theme/app_typography.dart (1)
7-66: Make `textTheme` the single source of truth for these tokens. `AppTheme` reads `AppTypography.textTheme`, but that getter currently returns the package defaults and never applies the `displayLarge`/`headlineLarge`/`titleLarge` styles defined below. Any widget using `Theme.of(context).textTheme` will miss the sizes and weights you've declared here. Prefer a `textTheme(TextTheme base)` helper that wraps the base theme and `copyWith`s these token overrides onto it, which is also the pattern shown in the `google_fonts` docs. (pub.dev)

♻️ Suggested direction
```diff
-  static TextTheme get textTheme => GoogleFonts.interTextTheme();
+  static TextTheme textTheme(TextTheme base) =>
+      GoogleFonts.interTextTheme(base).copyWith(
+        displayLarge: displayLarge,
+        headlineLarge: headlineLarge,
+        headlineMedium: headlineMedium,
+        titleLarge: titleLarge,
+        titleMedium: titleMedium,
+        bodyLarge: bodyLarge,
+        bodyMedium: bodyMedium,
+        bodySmall: bodySmall,
+        labelLarge: labelLarge,
+        labelSmall: labelSmall,
+      );
```

Then have the theme builder pass its generated base `TextTheme` into this helper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/theme/app_typography.dart` around lines 7 - 66, The current textTheme getter returns GoogleFonts.interTextTheme() but never applies the custom token styles (displayLarge, headlineLarge, headlineMedium, titleLarge, titleMedium, bodyLarge, bodyMedium, bodySmall, labelLarge, labelSmall, mono), so widgets using Theme.of(context).textTheme miss your overrides; replace the no-arg getter with a helper like textTheme(TextTheme base) that takes a base TextTheme and returns base.copyWith(...) mapping each relevant field (e.g. displayLarge, headlineLarge, headlineMedium, titleLarge, titleMedium, bodyLarge, bodyMedium, bodySmall, labelLarge, labelSmall, and a mono fallback) to the GoogleFonts.inter / jetBrainsMono styles defined in the class, then update the theme builder (where AppTypography.textTheme is used) to pass the generated base TextTheme into this helper so the package defaults are preserved and your token overrides are applied.

examples/flutter/RunAnywhereAI/lib/core/theme/app_theme.dart (1)
9-225: Extract the shared theme builder. These two getters are almost identical, so future theme tweaks will drift quickly. A private `_buildTheme(Brightness brightness)` helper with a few light/dark overrides would keep the component themes in one place.

♻️ Suggested shape
```diff
-  static ThemeData get light {
-    final colorScheme = ColorScheme.fromSeed(
-      seedColor: AppColors.seed,
-      brightness: Brightness.light,
-    );
-    return ThemeData(
-      ...
-    );
-  }
-
-  static ThemeData get dark {
-    final colorScheme = ColorScheme.fromSeed(
-      seedColor: AppColors.seed,
-      brightness: Brightness.dark,
-    );
-    return ThemeData(
-      ...
-    );
-  }
+  static ThemeData get light => _buildTheme(Brightness.light);
+
+  static ThemeData get dark => _buildTheme(Brightness.dark);
+
+  static ThemeData _buildTheme(Brightness brightness) {
+    final colorScheme = ColorScheme.fromSeed(
+      seedColor: AppColors.seed,
+      brightness: brightness,
+    );
+
+    final isDark = brightness == Brightness.dark;
+
+    return ThemeData(
+      useMaterial3: true,
+      colorScheme: colorScheme,
+      textTheme: AppTypography.textTheme,
+      scaffoldBackgroundColor: colorScheme.surface,
+      appBarTheme: AppBarTheme(
+        centerTitle: false,
+        elevation: 0,
+        scrolledUnderElevation: 1,
+        backgroundColor: colorScheme.surface,
+        foregroundColor: colorScheme.onSurface,
+        titleTextStyle: AppTypography.titleLarge.copyWith(
+          color: colorScheme.onSurface,
+        ),
+      ),
+      navigationBarTheme: NavigationBarThemeData(
+        elevation: 0,
+        height: 64,
+        indicatorColor: colorScheme.primaryContainer,
+        labelBehavior: NavigationDestinationLabelBehavior.alwaysShow,
+        labelTextStyle: WidgetStatePropertyAll(
+          AppTypography.labelSmall.copyWith(color: colorScheme.onSurface),
+        ),
+      ),
+      cardTheme: CardThemeData(
+        elevation: 0,
+        shape: RoundedRectangleBorder(
+          borderRadius: BorderRadius.circular(AppSpacing.radiusMd),
+          side: BorderSide(color: colorScheme.outlineVariant),
+        ),
+        color: isDark ? colorScheme.surfaceContainer : colorScheme.surface,
+      ),
+      ...
+    );
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/theme/app_theme.dart` around lines 9 - 225, The light and dark ThemeData getters are nearly identical; extract a private ThemeData _buildTheme(Brightness brightness) that constructs colorScheme via ColorScheme.fromSeed(seedColor: AppColors.seed, brightness: brightness) and builds the shared ThemeData (useMaterial3, textTheme, appBarTheme, navigationBarTheme, button themes, chipTheme, dividerTheme, bottomSheetTheme, snackBarTheme, etc.). Move all common widget theme construction into _buildTheme and inside it branch on brightness (brightness == Brightness.light) to set the differing values: scaffoldBackgroundColor/cardTheme.color/inputDecorationTheme.fillColor (surface vs surfaceContainer, surfaceContainerLowest vs surfaceContainerHighest), plus any other small per-theme differences observed. Replace the light getter with return _buildTheme(Brightness.light) and the dark getter with return _buildTheme(Brightness.dark).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f5678792-1e96-4664-a9ea-7b6f2594a907
⛔ Files ignored due to path filters (10)
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-hdpi/ic_launcher.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-hdpi/ic_launcher_foreground.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-mdpi/ic_launcher.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-mdpi/ic_launcher_foreground.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xhdpi/ic_launcher_foreground.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_foreground.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png is excluded by `!**/*.png`
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_foreground.png is excluded by `!**/*.png`
📒 Files selected for processing (110)
- examples/flutter/RunAnywhereAI/.gitignore
- examples/flutter/RunAnywhereAI/README.md
- examples/flutter/RunAnywhereAI/analysis_options.yaml
- examples/flutter/RunAnywhereAI/android/app/build.gradle
- examples/flutter/RunAnywhereAI/android/app/build.gradle.kts
- examples/flutter/RunAnywhereAI/android/app/src/main/AndroidManifest.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
- examples/flutter/RunAnywhereAI/android/app/src/main/kotlin/com/runanywhere/runanywhere_ai/MainActivity.kt
- examples/flutter/RunAnywhereAI/android/app/src/main/kotlin/com/runanywhere/runanywhere_ai/PlatformChannelHandler.kt
- examples/flutter/RunAnywhereAI/android/app/src/main/res/drawable/ic_launcher_foreground.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/values/colors.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/values/styles.xml
- examples/flutter/RunAnywhereAI/android/build.gradle
- examples/flutter/RunAnywhereAI/android/build.gradle.kts
- examples/flutter/RunAnywhereAI/android/gradle/wrapper/gradle-wrapper.properties
- examples/flutter/RunAnywhereAI/android/gradlew.bat
- examples/flutter/RunAnywhereAI/android/settings.gradle
- examples/flutter/RunAnywhereAI/android/settings.gradle.kts
- examples/flutter/RunAnywhereAI/ios/Flutter/AppFrameworkInfo.plist
- examples/flutter/RunAnywhereAI/ios/Flutter/Debug.xcconfig
- examples/flutter/RunAnywhereAI/ios/Flutter/Release.xcconfig
- examples/flutter/RunAnywhereAI/ios/Podfile
- examples/flutter/RunAnywhereAI/ios/Runner.xcodeproj/project.pbxproj
- examples/flutter/RunAnywhereAI/ios/Runner.xcworkspace/contents.xcworkspacedata
- examples/flutter/RunAnywhereAI/ios/Runner/AppDelegate.swift
- examples/flutter/RunAnywhereAI/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md
- examples/flutter/RunAnywhereAI/ios/Runner/GeneratedPluginRegistrant.m
- examples/flutter/RunAnywhereAI/ios/Runner/Info.plist
- examples/flutter/RunAnywhereAI/ios/Runner/SceneDelegate.swift
- examples/flutter/RunAnywhereAI/lib/app.dart
- examples/flutter/RunAnywhereAI/lib/app/content_view.dart
- examples/flutter/RunAnywhereAI/lib/app/runanywhere_ai_app.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/app_colors.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/app_spacing.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/typography.dart
- examples/flutter/RunAnywhereAI/lib/core/models/app_types.dart
- examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart
- examples/flutter/RunAnywhereAI/lib/core/services/audio_player_service.dart
- examples/flutter/RunAnywhereAI/lib/core/services/audio_recording_service.dart
- examples/flutter/RunAnywhereAI/lib/core/services/conversation_store.dart
- examples/flutter/RunAnywhereAI/lib/core/services/device_info_service.dart
- examples/flutter/RunAnywhereAI/lib/core/services/keychain_service.dart
- examples/flutter/RunAnywhereAI/lib/core/services/model_manager.dart
- examples/flutter/RunAnywhereAI/lib/core/services/permission_service.dart
- examples/flutter/RunAnywhereAI/lib/core/theme/app_colors.dart
- examples/flutter/RunAnywhereAI/lib/core/theme/app_spacing.dart
- examples/flutter/RunAnywhereAI/lib/core/theme/app_theme.dart
- examples/flutter/RunAnywhereAI/lib/core/theme/app_typography.dart
- examples/flutter/RunAnywhereAI/lib/core/utilities/constants.dart
- examples/flutter/RunAnywhereAI/lib/core/utilities/keychain_helper.dart
- examples/flutter/RunAnywhereAI/lib/core/widgets/app_shell.dart
- examples/flutter/RunAnywhereAI/lib/data/models/chat_message.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_interface_view.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_state.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/tool_call_views.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/chat_input_bar.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/message_bubble.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/model_status_banner.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/streaming_bubble.dart
- examples/flutter/RunAnywhereAI/lib/features/intro/intro_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/intro/intro_state.dart
- examples/flutter/RunAnywhereAI/lib/features/models/add_model_from_url_view.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_components.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_list_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_selection_sheet.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_status_components.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_types.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_view.dart
- examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/document_service.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_demo_view.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/combined_settings_view.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/settings_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/settings_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/tool_settings_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/structured_output/structured_output_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/structured_output/structured_output_view.dart
- examples/flutter/RunAnywhereAI/lib/features/tools/tools_view.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_hub_view.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_state.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vlm_camera_view.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vlm_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/speech_to_text_view.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/stt_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/text_to_speech_view.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/tts_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/tts_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/voice_assistant_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/voice_assistant_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/voice_assistant_view.dart
- examples/flutter/RunAnywhereAI/lib/helpers/adaptive_layout.dart
- examples/flutter/RunAnywhereAI/lib/main.dart
- examples/flutter/RunAnywhereAI/lib/router.dart
- examples/flutter/RunAnywhereAI/pubspec.yaml
- examples/flutter/RunAnywhereAI/rule.txt
- examples/flutter/RunAnywhereAI/test/widget_test.dart
- sdk/runanywhere-flutter/packages/runanywhere/android/build.gradle
💤 Files with no reviewable changes (49)
- examples/flutter/RunAnywhereAI/ios/Flutter/Debug.xcconfig
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/drawable/ic_launcher_foreground.xml
- examples/flutter/RunAnywhereAI/ios/Flutter/Release.xcconfig
- examples/flutter/RunAnywhereAI/ios/Runner/GeneratedPluginRegistrant.m
- examples/flutter/RunAnywhereAI/ios/Runner.xcworkspace/contents.xcworkspacedata
- examples/flutter/RunAnywhereAI/android/app/src/main/res/values/colors.xml
- examples/flutter/RunAnywhereAI/lib/core/services/audio_player_service.dart
- examples/flutter/RunAnywhereAI/android/app/src/main/kotlin/com/runanywhere/runanywhere_ai/PlatformChannelHandler.kt
- examples/flutter/RunAnywhereAI/ios/Podfile
- examples/flutter/RunAnywhereAI/android/settings.gradle
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_hub_view.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/app_colors.dart
- examples/flutter/RunAnywhereAI/lib/core/services/device_info_service.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/document_service.dart
- examples/flutter/RunAnywhereAI/lib/app/content_view.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vlm_camera_view.dart
- examples/flutter/RunAnywhereAI/android/app/build.gradle
- examples/flutter/RunAnywhereAI/lib/features/structured_output/structured_output_view.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/combined_settings_view.dart
- examples/flutter/RunAnywhereAI/lib/core/services/model_manager.dart
- examples/flutter/RunAnywhereAI/lib/core/utilities/keychain_helper.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_demo_view.dart
- examples/flutter/RunAnywhereAI/lib/features/tools/tools_view.dart
- examples/flutter/RunAnywhereAI/lib/core/utilities/constants.dart
- examples/flutter/RunAnywhereAI/lib/core/services/keychain_service.dart
- examples/flutter/RunAnywhereAI/lib/app/runanywhere_ai_app.dart
- examples/flutter/RunAnywhereAI/android/build.gradle
- examples/flutter/RunAnywhereAI/lib/features/models/model_types.dart
- examples/flutter/RunAnywhereAI/lib/core/services/permission_service.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_interface_view.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_view.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_selection_sheet.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_components.dart
- examples/flutter/RunAnywhereAI/lib/core/models/app_types.dart
- examples/flutter/RunAnywhereAI/lib/features/settings/tool_settings_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/speech_to_text_view.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/tool_call_views.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vlm_view_model.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_status_components.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/typography.dart
- examples/flutter/RunAnywhereAI/lib/core/design_system/app_spacing.dart
- examples/flutter/RunAnywhereAI/lib/features/models/model_list_view_model.dart
- examples/flutter/RunAnywhereAI/lib/core/services/audio_recording_service.dart
- examples/flutter/RunAnywhereAI/lib/features/models/add_model_from_url_view.dart
- examples/flutter/RunAnywhereAI/ios/Flutter/AppFrameworkInfo.plist
- examples/flutter/RunAnywhereAI/lib/core/services/conversation_store.dart
```kotlin
    defaultConfig {
        // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
        applicationId = "com.runanywhere.runanywhere_ai"
        // You can update the following values to match your application needs.
        // For more information, see: https://flutter.dev/to/review-gradle-config.
        minSdk = flutter.minSdkVersion
        targetSdk = flutter.targetSdkVersion
        versionCode = flutter.versionCode
        versionName = flutter.versionName
    }
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for ABI filter configuration in any Gradle files
rg -n "abiFilters" --type-add 'gradle:*.gradle*' --type gradle

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1807
🏁 Script executed:
cat -n examples/flutter/RunAnywhereAI/android/app/build.gradle.kts

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1789
🏁 Script executed:
# Look for PR-related files or documentation
find . -type f \( -name "*.md" -o -name "*.txt" \) | grep -iE "(pr|objective|readme)" | head -20

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1000
🏁 Script executed:
# Check git log for commit messages mentioning ABI or arm64
git log --oneline --all --grep="ABI\|arm64" | head -20

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1385
🏁 Script executed:
git log -1 --format="%B" 1cbeaf5b

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1239
🏁 Script executed:
# Also check for any recent PR or branch information
git show 1cbeaf5b --stat | head -50

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 3046
Add ABI filter to match stated objective in commit.
The commit message states "Restrict Android ABI to arm64-v8a to fix native lib build failures", but the Flutter app's build.gradle.kts is missing this configuration.
Suggested fix
```diff
     defaultConfig {
         // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
         applicationId = "com.runanywhere.runanywhere_ai"
         // You can update the following values to match your application needs.
         // For more information, see: https://flutter.dev/to/review-gradle-config.
         minSdk = flutter.minSdkVersion
         targetSdk = flutter.targetSdkVersion
         versionCode = flutter.versionCode
         versionName = flutter.versionName
+        ndk {
+            abiFilters += listOf("arm64-v8a")
+        }
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```kotlin
    defaultConfig {
        // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
        applicationId = "com.runanywhere.runanywhere_ai"
        // You can update the following values to match your application needs.
        // For more information, see: https://flutter.dev/to/review-gradle-config.
        minSdk = flutter.minSdkVersion
        targetSdk = flutter.targetSdkVersion
        versionCode = flutter.versionCode
        versionName = flutter.versionName
        ndk {
            abiFilters += listOf("arm64-v8a")
        }
    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/android/app/build.gradle.kts` around lines 22
- 31, The defaultConfig block is missing an ABI filter to restrict builds to
arm64-v8a as the commit message requires; inside the defaultConfig (or android)
configuration add an NDK ABI filter for "arm64-v8a" (use the defaultConfig.ndk {
abiFilters.add("arm64-v8a") } or equivalent Kotlin-DSL syntax) so the Android
native libs are built only for arm64-v8a.
```diff
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-8.11.1-all.zip
+distributionUrl=https\://services.gradle.org/distributions/gradle-8.14-all.zip
```
🧩 Analysis chain
🌐 Web query:
Gradle 8.14 Android Gradle Plugin compatibility April 2026
💡 Result:
Gradle 8.14 is compatible with Android Gradle Plugin (AGP) versions from 7.3 up to 8.9, according to the official Gradle 8.14 compatibility matrix. Higher AGP versions like 8.10+ require minimum Gradle versions higher than 8.14 (e.g., AGP 8.10 requires Gradle 8.11.1, AGP 8.11-8.12 require 8.13, AGP 8.13 requires 8.13, AGP 9.0 requires 9.1.0, AGP 9.1 requires 9.3.1). As of April 2026, the latest AGP is 9.1.0 (released March 2026), which is not compatible with Gradle 8.14. Gradle tests confirm compatibility within the stated range, though versions outside may work unofficially. No AGP release notes specify an upper limit beyond Gradle's tested range.
Citations:
- 1: https://docs.gradle.org/8.14/userguide/compatibility.html
- 2: https://docs.gradle.org/8.14.1/userguide/compatibility.html
- 3: https://developer.android.com/build/releases/about-agp
Downgrade Gradle to 8.13 or upgrade Android Gradle Plugin to a version compatible with Gradle 8.14.
Gradle 8.14 officially supports Android Gradle Plugin (AGP) versions up to 8.9. The project uses AGP 8.11.1, which requires minimum Gradle 8.13 and is outside Gradle 8.14's official compatibility matrix. This mismatch can cause build failures and maintenance issues. Either use Gradle 8.13 (which officially supports AGP 8.11–8.12) or upgrade to AGP ≤8.9.
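If downgrading the wrapper, the fix is a one-line edit. This fragment follows the review's recommendation; the exact AGP version in use should be confirmed against the project's `settings.gradle.kts` before applying:

```properties
# android/gradle/wrapper/gradle-wrapper.properties
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
# Gradle 8.13 is the version officially tested with AGP 8.11-8.12
distributionUrl=https\://services.gradle.org/distributions/gradle-8.13-all.zip
```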
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@examples/flutter/RunAnywhereAI/android/gradle/wrapper/gradle-wrapper.properties`
at line 5, The distributionUrl in gradle-wrapper.properties is set to Gradle
8.14 which is incompatible with the project's Android Gradle Plugin (AGP)
version; either change the distributionUrl value from gradle-8.14-all.zip to
gradle-8.13-all.zip or update the AGP dependency
(com.android.tools.build:gradle) to a version compatible with Gradle 8.14
(≤8.9); update the distributionUrl string
(distributionUrl=...gradle-8.13-all.zip) if downgrading, or bump the classpath
'com.android.tools.build:gradle' version in the build.gradle files if upgrading.
```xml
    <key>UIApplicationSceneManifest</key>
    <dict>
      <key>UIApplicationSupportsMultipleScenes</key>
      <false/>
      <key>UISceneConfigurations</key>
      <dict>
        <key>UIWindowSceneSessionRoleApplication</key>
        <array>
          <dict>
            <key>UISceneClassName</key>
            <string>UIWindowScene</string>
            <key>UISceneConfigurationName</key>
            <string>flutter</string>
            <key>UISceneDelegateClassName</key>
            <string>$(PRODUCT_MODULE_NAME).SceneDelegate</string>
            <key>UISceneStoryboardFile</key>
            <string>Main</string>
          </dict>
        </array>
      </dict>
    </dict>
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
plist="examples/flutter/RunAnywhereAI/ios/Runner/Info.plist"
pubspec="examples/flutter/RunAnywhereAI/pubspec.yaml"
registrant="examples/flutter/RunAnywhereAI/ios/Runner/GeneratedPluginRegistrant.m"
echo "== Privacy keys present in Info.plist =="
rg -n 'NSCameraUsageDescription|NSMicrophoneUsageDescription|NSSpeechRecognitionUsageDescription' "$plist" || true
echo
echo "== Camera / audio Flutter dependencies =="
rg -n '^\s*(camera|record):' "$pubspec"
echo
echo "== Registered iOS plugins =="
rg -n 'CameraPlugin|RecordIosPlugin' "$registrant"
```

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 616
Add missing iOS privacy usage strings to prevent camera/microphone access failures.
The `Info.plist` is missing permission descriptions required by the declared `camera` and `record` Flutter plugins (pubspec.yaml:38 and 59) that are registered in `GeneratedPluginRegistrant.m` (lines 115 and 123). Without these privacy keys, iOS will deny access to the camera and microphone during the Vision, STT, and Voice Assistant flows.
🔧 Suggested plist entries

```diff
   <key>LSRequiresIPhoneOS</key>
   <true/>
+  <key>NSCameraUsageDescription</key>
+  <string>Camera access is required for vision features.</string>
+  <key>NSMicrophoneUsageDescription</key>
+  <string>Microphone access is required for speech input and voice assistant features.</string>
+  <key>NSSpeechRecognitionUsageDescription</key>
+  <string>Speech recognition is required to transcribe spoken input.</string>
   <key>UIApplicationSceneManifest</key>
   <dict>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/ios/Runner/Info.plist` around lines 29 - 49,
Add the required iOS privacy usage description keys to Info.plist to allow
camera and microphone access for the registered plugins: add
NSCameraUsageDescription and NSMicrophoneUsageDescription (and
NSPhotoLibraryAddUsageDescription if the camera flow writes photos) with clear
user-facing strings so iOS grants permission during Vision, STT, and Voice
Assistant flows; update the plist section containing UIApplicationSceneManifest
(matching the existing UIApplicationSceneManifest/UISceneConfigurations block)
to include these keys so the camera/record plugins referenced by
GeneratedPluginRegistrant.m can function.
```dart
final streamingResult = await sdk.RunAnywhere.generateStream(
  prompt,
  options: options,
);

await for (final token in streamingResult.stream) {
  tokenCount++;
  timeToFirstToken ??= stopwatch.elapsedMilliseconds.toDouble();
  buffer.write(token);
  state = state.copyWith(streamingContent: buffer.toString());
}

stopwatch.stop();
final totalMs = stopwatch.elapsedMilliseconds.toDouble();
final tokensPerSec = totalMs > 0 ? (tokenCount / totalMs) * 1000 : 0.0;

final assistantMessage = ChatMessage(
  role: MessageRole.assistant,
  content: buffer.toString(),
  analytics: MessageAnalytics(
    timeToFirstTokenMs: timeToFirstToken,
    totalGenerationMs: totalMs,
    outputTokens: tokenCount,
    tokensPerSecond: tokensPerSec,
  ),
);

state = state.copyWith(
  messages: [...state.messages, assistantMessage],
  isGenerating: false,
  streamingContent: '',
);
```
🧩 Analysis chain

Repository: RunanywhereAI/runanywhere-sdks

🏁 Script executed:

```bash
# First, find the SDK structure to locate type definitions
find . -type f -name "*.dart" | grep -E "(sdk|lib)" | head -20
```

Length of output: 1472

🏁 Script executed:

```bash
# Search for LLMStreamingResult definition
rg -t dart "class LLMStreamingResult|class LLMGenerationResult" -A 10
```

Length of output: 2668

🏁 Script executed:

```bash
# Examine the chat_controller.dart file
find . -type f -name "chat_controller.dart"
```

Length of output: 145

🏁 Script executed:

```bash
# Read the chat_controller.dart file to verify the current implementation
cat -n ./examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart | head -150
```

Length of output: 5418

🏁 Script executed:

```bash
# Get the full context around lines 84-115
sed -n '75,125p' ./examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart
```

Length of output: 1576

🏁 Script executed:

```bash
# Double-check LLMGenerationResult field names to ensure responseTokens doesn't exist elsewhere
rg -t dart "responseTokens|tokensUsed" sdk/runanywhere-flutter/
```

Length of output: 1645

🏁 Script executed:

```bash
# Get the full LLMGenerationResult class definition to understand responseTokens
sed -n '/^class LLMGenerationResult/,/^}/p' sdk/runanywhere-flutter/packages/runanywhere/lib/public/types/generation_types.dart
```

Length of output: 828
Await the SDK's result future after consuming the streaming tokens.
The code consumes `streamingResult.stream` but never awaits `streamingResult.result`, which contains the authoritative metrics and `thinkingContent`. The current implementation relies on local stopwatch timing and token counting, losing the SDK's final completion signal. This means:

- `thinkingContent` is never populated in the assistant message
- Metrics like `totalGenerationMs`, `outputTokens`, and `tokensPerSecond` use client-side approximations instead of the SDK's authoritative values
- Late failures or cancellations after stream completion are silently ignored

Add `await streamingResult.result` and use its fields (matching the pattern in `_generateNonStreaming`):
Suggested fix
```diff
   await for (final token in streamingResult.stream) {
     tokenCount++;
     timeToFirstToken ??= stopwatch.elapsedMilliseconds.toDouble();
     buffer.write(token);
     state = state.copyWith(streamingContent: buffer.toString());
   }

+  final result = await streamingResult.result;
+
   final assistantMessage = ChatMessage(
     role: MessageRole.assistant,
     content: buffer.toString(),
+    thinkingContent: result.thinkingContent,
     analytics: MessageAnalytics(
-      timeToFirstTokenMs: timeToFirstToken,
-      totalGenerationMs: stopwatch.elapsedMilliseconds.toDouble(),
-      outputTokens: tokenCount,
-      tokensPerSecond: totalMs > 0 ? (tokenCount / totalMs) * 1000 : 0.0,
+      timeToFirstTokenMs: result.timeToFirstTokenMs ?? timeToFirstToken,
+      totalGenerationMs: result.latencyMs,
+      outputTokens: result.responseTokens,
+      tokensPerSecond: result.tokensPerSecond,
     ),
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart` around
lines 84 - 115, After consuming streamingResult.stream, await
streamingResult.result and use its authoritative fields to construct the final
assistant message and metrics instead of the local stopwatch/tokenCount
approximations: await streamingResult.result into a variable (e.g.,
finalResult), then populate ChatMessage.analytics with
finalResult.totalGenerationMs, finalResult.outputTokens,
finalResult.tokensPerSecond (or compute tokensPerSecond from finalResult values
if needed) and set the message's thinkingContent from
finalResult.thinkingContent; finally update state via state.copyWith(...)
(messages, isGenerating:false, streamingContent:'') using these SDK-provided
values and handle any error/cancellation reported by finalResult.
```dart
    if (progress.state.isCompleted || progress.state.isFailed) {
      break;
    }
  }

  // Remove from active downloads and refresh models list
  final updatedDownloads = Map<String, ModelDownloadInfo>.from(
    state.downloads,
  )..remove(model.id);
  state = state.copyWith(downloads: updatedDownloads);
  await _loadModels();
```
Failed downloads are dropped on the success path.
When the stream reports `isFailed`, the loop breaks into the same cleanup flow as a completed download: the entry is removed and `_loadModels()` runs, but no error is surfaced. If the SDK reports failure via progress state instead of throwing, the download just disappears from the UI with no explanation.
💡 Suggested fix

```diff
-      if (progress.state.isCompleted || progress.state.isFailed) {
-        break;
-      }
+      if (progress.state.isFailed) {
+        throw Exception('Download failed for ${model.id}');
+      }
+      if (progress.state.isCompleted) {
+        break;
+      }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-      if (progress.state.isCompleted || progress.state.isFailed) {
-        break;
-      }
-    }
-
-    // Remove from active downloads and refresh models list
-    final updatedDownloads = Map<String, ModelDownloadInfo>.from(
-      state.downloads,
-    )..remove(model.id);
-    state = state.copyWith(downloads: updatedDownloads);
-    await _loadModels();
+      if (progress.state.isFailed) {
+        throw Exception('Download failed for ${model.id}');
+      }
+      if (progress.state.isCompleted) {
+        break;
+      }
+    }
+
+    // Remove from active downloads and refresh models list
+    final updatedDownloads = Map<String, ModelDownloadInfo>.from(
+      state.downloads,
+    )..remove(model.id);
+    state = state.copyWith(downloads: updatedDownloads);
+    await _loadModels();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart`
around lines 112 - 122, The current cleanup treats progress.state.isFailed the
same as success and removes the download entry without surfacing errors; update
the loop after detecting progress.state.isFailed to record the failure (e.g.,
set an error/status on the ModelDownloadInfo entry in state.downloads for
model.id) instead of immediately removing it, then call state =
state.copyWith(downloads: updatedDownloadsWithError) and finally call
_loadModels(); use the existing Map<String, ModelDownloadInfo>.from(...) pattern
to create updatedDownloadsWithError and ensure progress.state.isFailed handling
preserves the entry and attaches the failure details so the UI can show the
error.
```dart
Future<void> initCamera() async {
  final cameras = await availableCameras();
  if (cameras.isEmpty) return;

  final back = cameras.firstWhere(
    (c) => c.lensDirection == CameraLensDirection.back,
    orElse: () => cameras.first,
  );

  final controller = CameraController(back, ResolutionPreset.medium);
  await controller.initialize();
  ref.read(cameraControllerProvider.notifier).state = controller;
  state = state.copyWith(isCameraReady: true);
}
```
Add error handling for camera initialization.
`controller.initialize()` can throw exceptions (e.g., permission denied, camera unavailable). Without a try-catch, failures will propagate unhandled and the app may crash or leave the camera in an inconsistent state.
🛡️ Proposed fix

```diff
   Future<void> initCamera() async {
-    final cameras = await availableCameras();
-    if (cameras.isEmpty) return;
-
-    final back = cameras.firstWhere(
-      (c) => c.lensDirection == CameraLensDirection.back,
-      orElse: () => cameras.first,
-    );
-
-    final controller = CameraController(back, ResolutionPreset.medium);
-    await controller.initialize();
-    ref.read(cameraControllerProvider.notifier).state = controller;
-    state = state.copyWith(isCameraReady: true);
+    try {
+      final cameras = await availableCameras();
+      if (cameras.isEmpty) {
+        state = state.copyWith(errorMessage: 'No cameras available');
+        return;
+      }
+
+      final back = cameras.firstWhere(
+        (c) => c.lensDirection == CameraLensDirection.back,
+        orElse: () => cameras.first,
+      );
+
+      final controller = CameraController(back, ResolutionPreset.medium);
+      await controller.initialize();
+      ref.read(cameraControllerProvider.notifier).state = controller;
+      state = state.copyWith(isCameraReady: true);
+    } on Exception catch (e) {
+      state = state.copyWith(errorMessage: 'Camera init failed: $e');
+    }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart`
around lines 31 - 44, Wrap the camera initialization inside initCamera in a
try-catch around creating and calling controller.initialize() (the
CameraController and its initialize() call) so any thrown exceptions are caught;
on error dispose the controller if it was created, set the camera provider state
appropriately (ref.read(cameraControllerProvider.notifier).state = null), update
state to indicate isCameraReady: false and store/log the error (e.g., set an
error message in state via state.copyWith(error: ...)), and rethrow only if you
need upstream handling—otherwise swallow after logging to avoid crashing the
app.
```dart
  _syncModelState();
  return const SttState();
}

Future<void> _syncModelState() async {
  final loaded = sdk.RunAnywhere.isSTTModelLoaded;
  state = state.copyWith(isModelLoaded: loaded);
}
```
Refresh model-loaded status before recording actions
`isModelLoaded` is synced only once at provider build. If a user loads an STT model later, this provider can stay stale and keep STT blocked until an app restart or provider rebuild.
🛠️ Proposed fix

```diff
   Future<void> toggleRecording() async {
+    final loaded = sdk.RunAnywhere.isSTTModelLoaded;
+    if (loaded != state.isModelLoaded) {
+      state = state.copyWith(isModelLoaded: loaded);
+    }
+    if (!loaded) {
+      state = state.copyWith(
+        errorMessage: 'No STT model loaded. Load one from Models first.',
+      );
+      return;
+    }
+
     if (state.isRecording) {
       await _stopAndTranscribe();
     } else {
       await _startRecording();
     }
   }
```

Also applies to: 74-80
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart` around
lines 61 - 68, The provider only calls _syncModelState at build so isModelLoaded
can become stale; update it before any recording actions by invoking
_syncModelState at the start of methods that begin recording/preview (e.g., the
startRecording and startPreview methods referenced around lines 74-80) and/or
register a listener to sdk.RunAnywhere.isSTTModelLoaded changes if the SDK
exposes events; ensure you use sdk.RunAnywhere.isSTTModelLoaded inside
_syncModelState and then set state = state.copyWith(isModelLoaded: loaded)
before proceeding with any recording logic.
```dart
  state = state.copyWith(isModelLoaded: loaded);
}

void setMode(SttMode mode) => state = state.copyWith(mode: mode);
```
SttMode.live currently has no behavioral effect
`setMode()` updates state, but `_stopAndTranscribe()` always runs the same batch flow. Selecting Live does not change behavior, which is misleading.
🛠️ Minimal guard until live mode is implemented

```diff
   Future<void> _stopAndTranscribe() async {
     try {
+      if (state.mode == SttMode.live) {
+        state = state.copyWith(
+          recordingState: RecordingState.idle,
+          errorMessage: 'Live mode is not implemented yet.',
+        );
+        return;
+      }
+
       final path = await _recorder.stop();
```

Also applies to: 102-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart` at
line 70, The SttMode.live selection is ineffective because setMode(SttMode) only
updates state while _stopAndTranscribe() always runs the batch transcription
flow; add a minimal guard in _stopAndTranscribe() (and any related methods
around the same block at _stopAndTranscribe / lines referenced 102-117) to
short-circuit when state.mode == SttMode.live so live mode does not trigger the
batch flow—either return early or route to the live-handling stub (e.g., call a
placeholder _startLiveTranscription()) until full live behavior is implemented;
ensure you reference and check state.mode and the SttMode enum when implementing
this guard.
```dart
final path = await _recorder.stop();
if (path == null) return;
```
Reset state when recorder stop returns null
On line 105, `path == null` returns early without resetting `recordingState`, which can leave the UI stuck in the recording state.
🛠️ Proposed fix

```diff
   final path = await _recorder.stop();
-  if (path == null) return;
+  if (path == null) {
+    state = state.copyWith(
+      recordingState: RecordingState.idle,
+      errorMessage: 'Recording stopped without an audio file.',
+    );
+    return;
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart` around
lines 104 - 106, When _recorder.stop() returns null the method returns early but
never resets recordingState, leaving the UI stuck; in the same method that
contains the `final path = await _recorder.stop(); if (path == null) return;`
branch, set `recordingState` to the appropriate non-recording enum/value (e.g.,
RecordingState.idle or RecordingState.stopped) and update listeners
(notifyListeners() or setState()) before returning so the UI reflects that
recording has ended.
# runanywhere_ai

<p align="center">
  <img src="../../../examples/logo.svg" alt="RunAnywhere Logo" width="120"/>
</p>

<p align="center">
  <img src="https://img.shields.io/badge/Platform-iOS%2013.0%2B%20%7C%20Android%207.0%2B-02569B?style=flat-square&logo=flutter&logoColor=white" alt="iOS 13.0+ | Android 7.0+" />
  <img src="https://img.shields.io/badge/Flutter-3.10%2B-02569B?style=flat-square&logo=flutter&logoColor=white" alt="Flutter 3.10+" />
  <img src="https://img.shields.io/badge/Dart-3.0%2B-0175C2?style=flat-square&logo=dart&logoColor=white" alt="Dart 3.0+" />
  <img src="https://img.shields.io/badge/License-Apache%202.0-blue?style=flat-square" alt="License" />
</p>

**A production-ready reference app demonstrating the [RunAnywhere Flutter SDK](../../../sdk/runanywhere-flutter/) capabilities for on-device AI.** This app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline—all running locally on your device.

---

## 🚀 Running This App (Local Development)

> **Important:** This sample app consumes the [RunAnywhere Flutter SDK](../../../sdk/runanywhere-flutter/) as local path dependencies. Before opening this project, you must first build the SDK's native libraries.
### First-Time Setup

```bash
# 1. Navigate to the Flutter SDK directory
cd runanywhere-sdks/sdk/runanywhere-flutter

# 2. Run the setup script (~10-20 minutes on first run)
#    This builds the native C++ frameworks/libraries and enables local mode
./scripts/build-flutter.sh --setup

# 3. Navigate to this sample app
cd ../../examples/flutter/RunAnywhereAI

# 4. Install dependencies
flutter pub get

# 5. For iOS: Install pods
cd ios && pod install && cd ..

# 6. Run the app
flutter run

# Or open in Android Studio / VS Code and run from there
```
### How It Works

This sample app's `pubspec.yaml` uses path dependencies to reference the local Flutter SDK packages:

```
This Sample App → Local Flutter SDK packages (sdk/runanywhere-flutter/packages/)
        ↓
Local XCFrameworks/JNI libs (in each package's ios/Frameworks/ and android/jniLibs/)
        ↑
Built by: ./scripts/build-flutter.sh --setup
```

The `build-flutter.sh --setup` script:

1. Downloads dependencies (ONNX Runtime, Sherpa-ONNX)
2. Builds the native C++ libraries from `runanywhere-commons`
3. Copies XCFrameworks to `packages/*/ios/Frameworks/`
4. Copies JNI `.so` files to `packages/*/android/src/main/jniLibs/`
5. Creates `.testlocal` marker files (enables local library consumption)
### After Modifying the SDK

- **Dart SDK code changes**: Run `flutter run` again (hot reload works for most changes)
- **C++ code changes** (in `runanywhere-commons`):
  ```bash
  cd sdk/runanywhere-flutter
  ./scripts/build-flutter.sh --local --rebuild-commons
  ```

---
## See It In Action

<p align="center">
  <a href="https://apps.apple.com/us/app/runanywhere/id6756506307">
    <img src="https://img.shields.io/badge/App_Store-Download-0D96F6?style=for-the-badge&logo=apple&logoColor=white" alt="Download on App Store" />
  </a>
  <a href="https://play.google.com/store/apps/details?id=com.runanywhere.runanywhereai">
    <img src="https://img.shields.io/badge/Google_Play-Download-3DDC84?style=for-the-badge&logo=android&logoColor=white" alt="Get it on Google Play" />
  </a>
</p>

Try the native iOS and Android apps to experience on-device AI capabilities immediately. The Flutter sample app demonstrates the same features using the cross-platform Flutter SDK.

---

## Screenshots

<p align="center">
  <img src="../../../docs/screenshots/main-screenshot.jpg" alt="RunAnywhere AI Chat Interface" width="220"/>
</p>

---
## Features

This sample app demonstrates the full power of the RunAnywhere Flutter SDK:

| Feature | Description | SDK Integration |
|---------|-------------|-----------------|
| **AI Chat** | Interactive LLM conversations with streaming responses | `RunAnywhere.generateStream()` |
| **Thinking Mode** | Support for models with `<think>...</think>` reasoning | Thinking tag parsing |
| **Real-time Analytics** | Token speed, generation time, inference metrics | `MessageAnalytics` |
| **Speech-to-Text** | Voice transcription with batch & live modes | `RunAnywhere.transcribe()` |
| **Text-to-Speech** | Neural voice synthesis with Piper TTS | `RunAnywhere.synthesize()` |
| **Voice Assistant** | Full STT to LLM to TTS pipeline with auto-detection | `VoiceSession` API |
| **Model Management** | Download, load, and manage multiple AI models | `ModelManager` |
| **Storage Management** | View storage usage and delete models | `RunAnywhere.getStorageInfo()` |
| **Offline Support** | All features work without internet | On-device inference |

---
## Architecture

The app follows Flutter best practices with a clean architecture pattern:

```
┌─────────────────────────────────────────────────────────────────────┐
│                       Flutter/Material UI                           │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│ │   Chat   │ │   STT    │ │   TTS    │ │  Voice   │ │  Settings  │ │
│ │Interface │ │   View   │ │   View   │ │Assistant │ │    View    │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └─────┬──────┘ │
├──────┼────────────┼────────────┼────────────┼─────────────┼────────┤
│      ▼            ▼            ▼            ▼             ▼        │
│ ┌──────────────────────────────────────────────────────────────┐   │
│ │                 Provider State Management                    │   │
│ │                  (ModelManager, Services)                    │   │
│ └──────────────────────────────────────────────────────────────┘   │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│                     RunAnywhere Flutter SDK                         │
│ ┌──────────────────────────────────────────────────────────────┐   │
│ │  Core API (generate, transcribe, synthesize)                 │   │
│ │  Model Management (download, load, unload, delete)           │   │
│ │  Voice Session (STT → LLM → TTS pipeline)                    │   │
│ └──────────────────────────────────────────────────────────────┘   │
│                            │                                        │
│         ┌──────────────────┴──────────────────┐                     │
│         ▼                                     ▼                     │
│ ┌─────────────────┐                 ┌─────────────────┐             │
│ │    LlamaCpp     │                 │  ONNX Runtime   │             │
│ │   (LLM/GGUF)    │                 │   (STT/TTS)     │             │
│ └─────────────────┘                 └─────────────────┘             │
└─────────────────────────────────────────────────────────────────────┘
```

### Key Architecture Decisions

- **Provider Pattern** — `ChangeNotifier` + `Provider` for state management
- **Feature-First Structure** — Each feature is self-contained with its own views and logic
- **Shared Core Services** — `ModelManager`, `AudioRecordingService`, `AudioPlayerService`
- **Design System** — Consistent `AppColors`, `AppTypography`, `AppSpacing` tokens
- **SDK Integration** — Direct SDK calls with async/await and Stream support

---
## Project Structure

```
RunAnywhereAI/
├── lib/
│   ├── main.dart                           # App entry point
│   │
│   ├── app/
│   │   ├── runanywhere_ai_app.dart         # SDK initialization, model registration
│   │   └── content_view.dart               # Main tab navigation (5 tabs)
│   │
│   ├── core/
│   │   ├── design_system/
│   │   │   ├── app_colors.dart             # Color palette with dark mode support
│   │   │   ├── app_spacing.dart            # Spacing constants
│   │   │   └── typography.dart             # Text styles
│   │   │
│   │   ├── models/
│   │   │   └── app_types.dart              # Shared type definitions
│   │   │
│   │   ├── services/
│   │   │   ├── model_manager.dart          # SDK model management wrapper
│   │   │   ├── audio_recording_service.dart # Microphone capture
│   │   │   ├── audio_player_service.dart   # TTS playback
│   │   │   ├── permission_service.dart     # Permission handling
│   │   │   ├── conversation_store.dart     # Chat history persistence
│   │   │   └── device_info_service.dart    # Device capabilities
│   │   │
│   │   └── utilities/
│   │       ├── constants.dart              # Preference keys, defaults
│   │       └── keychain_helper.dart        # Secure storage wrapper
│   │
│   ├── features/
│   │   ├── chat/
│   │   │   └── chat_interface_view.dart    # LLM chat with streaming
│   │   │
│   │   ├── voice/
│   │   │   ├── speech_to_text_view.dart    # Batch & live STT
│   │   │   ├── text_to_speech_view.dart    # TTS synthesis & playback
│   │   │   └── voice_assistant_view.dart   # Full STT→LLM→TTS pipeline
│   │   │
│   │   ├── models/
│   │   │   ├── models_view.dart            # Model browser
│   │   │   ├── model_selection_sheet.dart  # Model picker bottom sheet
│   │   │   ├── model_list_view_model.dart  # Model list logic
│   │   │   ├── model_components.dart       # Reusable model UI widgets
│   │   │   ├── model_status_components.dart # Status badges, indicators
│   │   │   ├── model_types.dart            # Framework enums, model info
│   │   │   └── add_model_from_url_view.dart # Import custom models
│   │   │
│   │   └── settings/
│   │       └── combined_settings_view.dart # Storage & logging config
│   │
│   └── helpers/
│       └── adaptive_layout.dart            # Responsive layout utilities
│
├── pubspec.yaml                            # Dependencies, SDK references
├── android/                                # Android platform config
├── ios/                                    # iOS platform config
└── README.md                               # This file
```

---
## Quick Start

### Prerequisites

- **Flutter** 3.10.0 or later ([install guide](https://flutter.dev/docs/get-started/install))
- **Dart** 3.0.0 or later (included with Flutter)
- **iOS** — Xcode 14+ (for iOS builds)
- **Android** — Android Studio + SDK 21+ (for Android builds)
- **~2GB** free storage for AI models
- **Device** — Physical device recommended for best performance

### Clone & Build

```bash
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/flutter/RunAnywhereAI

# Install dependencies
flutter pub get

# Run on connected device
flutter run
```

### Run via IDE

1. Open the project in VS Code or Android Studio
2. Wait for Flutter dependencies to resolve
3. Select a physical device (iOS or Android)
4. Press **F5** (VS Code) or **Run** (Android Studio)

### Build Release APK/IPA

```bash
# Android APK
flutter build apk --release

# Android App Bundle
flutter build appbundle --release

# iOS (requires Xcode)
flutter build ios --release
```

---
## SDK Integration Examples

### Initialize the SDK

The SDK is initialized in `runanywhere_ai_app.dart`:

```dart
import 'package:runanywhere/runanywhere.dart';
import 'package:runanywhere_llamacpp/runanywhere_llamacpp.dart';
import 'package:runanywhere_onnx/runanywhere_onnx.dart';

// 1. Initialize SDK in development mode
await RunAnywhere.initialize();

// 2. Register LlamaCpp module for LLM models (GGUF)
await LlamaCpp.register();
LlamaCpp.addModel(
  id: 'smollm2-360m-q8_0',
  name: 'SmolLM2 360M Q8_0',
  url: 'https://huggingface.co/prithivMLmods/SmolLM2-360M-GGUF/resolve/main/SmolLM2-360M.Q8_0.gguf',
  memoryRequirement: 500000000,
);

// 3. Register ONNX module for STT/TTS models
await Onnx.register();
Onnx.addModel(
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Sherpa Whisper Tiny (ONNX)',
  url: 'https://github.com/RunanywhereAI/sherpa-onnx/releases/download/runanywhere-models-v1/sherpa-onnx-whisper-tiny.en.tar.gz',
  modality: ModelCategory.speechRecognition,
  memoryRequirement: 75000000,
);
```
### Download & Load a Model

```dart
// Download with progress tracking (via ModelManager)
await ModelManager.shared.downloadModel(modelInfo);

// Load LLM model
await sdk.RunAnywhere.loadLLMModel('smollm2-360m-q8_0');

// Check if model is loaded
final isLoaded = sdk.RunAnywhere.isModelLoaded;
```

### Stream Text Generation

```dart
// Generate with streaming (real-time tokens)
final streamResult = await RunAnywhere.generateStream(prompt, options: options);

await for (final token in streamResult.stream) {
  // Display each token as it arrives
  setState(() {
    _responseText += token;
  });
}

// Or non-streaming
final result = await RunAnywhere.generate(prompt, options: options);
print('Response: ${result.text}');
print('Speed: ${result.tokensPerSecond} tok/s');
```
| ### Speech-to-Text | ||
|
|
||
| ```dart | ||
| // Load STT model | ||
| await RunAnywhere.loadSTTModel('sherpa-onnx-whisper-tiny.en'); | ||
|
|
||
| // Transcribe audio bytes | ||
| final transcription = await RunAnywhere.transcribe(audioBytes); | ||
| print('Transcription: $transcription'); | ||
| ``` | ||

### Text-to-Speech

```dart
// Load TTS voice
await RunAnywhere.loadTTSVoice('vits-piper-en_US-lessac-medium');

// Synthesize speech with options
final result = await RunAnywhere.synthesize(
  text,
  rate: 1.0,
  pitch: 1.0,
  volume: 1.0,
);

// Play audio (result.samples is Float32List)
await audioPlayer.play(result.samples, result.sampleRate);
```

### Voice Assistant Pipeline (STT → LLM → TTS)

```dart
// Start voice session
final session = await RunAnywhere.startVoiceSession(
  config: VoiceSessionConfig(),
);

// Listen to session events
session.events.listen((event) {
  if (event is VoiceSessionTranscribed) {
    print('User said: ${event.text}');
  } else if (event is VoiceSessionResponded) {
    print('AI response: ${event.text}');
  } else if (event is VoiceSessionSpeaking) {
    // Audio is being played
  }
});

// Stop session
session.stop();
```

---

## Key Screens Explained

### 1. Chat Screen (`chat_interface_view.dart`)

**What it demonstrates:**
- Streaming text generation with real-time token display
- Thinking mode support (`<think>...</think>` tags)
- Message analytics (tokens/sec, generation time)
- Conversation history with Markdown rendering
- Model selection bottom sheet integration

**Key SDK APIs:**
- `RunAnywhere.generateStream()` — Streaming generation
- `RunAnywhere.generate()` — Non-streaming generation
- `RunAnywhere.currentLLMModel()` — Get loaded model info

### 2. Speech-to-Text Screen (`speech_to_text_view.dart`)

**What it demonstrates:**
- Batch mode: record full audio, then transcribe
- Live mode: real-time streaming transcription (when supported)
- Audio level visualization
- Mode selection (batch vs. live)

**Key SDK APIs:**
- `RunAnywhere.loadSTTModel()` — Load Whisper model
- `RunAnywhere.transcribe()` — Batch transcription
- `RunAnywhere.isSTTModelLoaded` — Check model status

### 3. Text-to-Speech Screen (`text_to_speech_view.dart`)

**What it demonstrates:**
- Neural voice synthesis with Piper TTS
- Speed and pitch controls with sliders
- Audio playback with progress indicator
- Audio metadata display (duration, sample rate, size)

**Key SDK APIs:**
- `RunAnywhere.loadTTSVoice()` — Load TTS model
- `RunAnywhere.synthesize()` — Generate speech audio
- `RunAnywhere.isTTSVoiceLoaded` — Check voice status

### 4. Voice Assistant Screen (`voice_assistant_view.dart`)

**What it demonstrates:**
- Complete voice AI pipeline (STT → LLM → TTS)
- Model configuration for all 3 components
- Audio level visualization during recording
- Conversation turn management
- Session state machine (connecting, listening, processing, speaking)

**Key SDK APIs:**
- `RunAnywhere.startVoiceSession()` — Start voice session
- `RunAnywhere.isVoiceAgentReady` — Check all components loaded
- `VoiceSessionEvent` — Session event stream

### 5. Settings Screen (`combined_settings_view.dart`)

**What it demonstrates:**
- Storage usage overview (total, available, model storage)
- Downloaded model list with details
- Model deletion with confirmation dialog
- Analytics logging toggle

**Key SDK APIs:**
- `RunAnywhere.getStorageInfo()` — Get storage details
- `RunAnywhere.getDownloadedModelsWithInfo()` — List models
- `RunAnywhere.deleteStoredModel()` — Remove model
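
These storage APIs compose into a simple list-and-delete flow. The sketch below is illustrative only: the field names on the returned objects (`availableBytes`, `name`, `id`) are assumptions, not confirmed by this README.

```dart
// Illustrative sketch: field names (availableBytes, name, id) are assumed.
final storage = await RunAnywhere.getStorageInfo();
debugPrint('Available: ${storage.availableBytes} bytes');

final models = await RunAnywhere.getDownloadedModelsWithInfo();
for (final model in models) {
  debugPrint('Downloaded: ${model.name}');
}

// After the user confirms in a dialog:
await RunAnywhere.deleteStoredModel(models.first.id);
```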

---

## Supported Models

### LLM Models (LlamaCpp/GGUF)

| Model | Size | Memory | Description |
|-------|------|--------|-------------|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| LFM2 350M Q8_0 | ~350MB | 400MB | Higher quality version |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |

### STT Models (ONNX/Whisper)

| Model | Size | Description |
|-------|------|-------------|
| Sherpa Whisper Tiny (EN) | ~75MB | Fast English transcription |
| Sherpa Whisper Small (EN) | ~250MB | Higher accuracy |

### TTS Models (ONNX/Piper)

| Model | Size | Description |
|-------|------|-------------|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |

---

## Testing

### Run Tests

```bash
# Run all tests
flutter test

# Run with coverage
flutter test --coverage

# Run specific test file
flutter test test/widget_test.dart
```

### Run Lint & Analysis

```bash
# Analyze code quality
flutter analyze

# Format code
dart format lib/ test/

# Fix issues automatically
dart fix --apply
```

---

## Debugging

### Enable Verbose Logging

The app uses `debugPrint()` extensively. Filter logs by:

```bash
# Flutter logs
flutter logs | grep -E "RunAnywhere|SDK"
```

### Common Debug Messages

| Log Prefix | Description |
|------------|-------------|
| `SDK` | SDK initialization |
| `SUCCESS` | Successful operations |
| `ERROR` | Error conditions |
| `MODULE` | Module registration |
| `LOADING` | Loading/processing |
| `AUDIO` | Audio operations |
| `RECORDING` | Recording operations |

### Memory Profiling

1. Run the app in profile mode: `flutter run --profile`
2. Open DevTools: press `p` in the terminal
3. Navigate to the Memory tab
4. Expected usage: ~300MB-2GB depending on model size

---

## Configuration

### Environment Setup

The SDK automatically detects the environment:

```dart
if (kDebugMode) {
  // Development mode (default)
  await RunAnywhere.initialize();
} else {
  // Production mode
  await RunAnywhere.initialize(
    apiKey: 'your-api-key',
    baseURL: 'https://api.runanywhere.ai',
    environment: SDKEnvironment.production,
  );
}
```

### Preference Keys

User preferences are stored via `SharedPreferences`:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `useStreaming` | bool | `true` | Enable streaming generation |
| `defaultTemperature` | double | `0.7` | LLM temperature |
| `defaultMaxTokens` | int | `500` | Max tokens per generation |
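
A small helper can read these keys with their documented defaults. This is a sketch using the `shared_preferences` package; the helper name and record return type are ours, not part of the app.

```dart
import 'package:shared_preferences/shared_preferences.dart';

/// Sketch: reads the generation preferences listed above, falling back to
/// the documented defaults when a key has not been set yet.
Future<({bool useStreaming, double temperature, int maxTokens})>
    loadGenerationPrefs() async {
  final prefs = await SharedPreferences.getInstance();
  return (
    useStreaming: prefs.getBool('useStreaming') ?? true,
    temperature: prefs.getDouble('defaultTemperature') ?? 0.7,
    maxTokens: prefs.getInt('defaultMaxTokens') ?? 500,
  );
}
```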

---

## Known Limitations

- **ARM64 Recommended** — Native libraries optimized for arm64 (x86 emulators may be slow)
- **Memory Usage** — Large models (7B+) require devices with 6GB+ RAM
- **First Load** — Initial model loading takes 1-3 seconds (cached afterward)
- **Live STT** — Requires WhisperKit-compatible models (limited in ONNX)
- **Platform Channels** — Some SDK features use FFI/platform channels

---

## Contributing

We welcome contributions! See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for guidelines.

### Development Setup

```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/flutter/RunAnywhereAI

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
flutter pub get
flutter analyze
flutter test

# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature

# Open a pull request
```

---

## License

This project is licensed under the Apache License 2.0 - see [LICENSE](../../../LICENSE) for details.

---

## Support

- **Discord**: [Join our community](https://discord.gg/N359FBbDVd)
- **GitHub Issues**: [Report bugs](https://github.com/RunanywhereAI/runanywhere-sdks/issues)
- **Email**: san@runanywhere.ai
- **Twitter**: [@RunanywhereAI](https://twitter.com/RunanywhereAI)

---

## Related Documentation

- [RunAnywhere Flutter SDK](../../../sdk/runanywhere-flutter/README.md) — Full SDK documentation
- [iOS Example App](../../ios/RunAnywhereAI/README.md) — iOS counterpart
- [Android Example App](../../android/RunAnywhereAI/README.md) — Android counterpart
- [React Native Example](../../react-native/RunAnywhereAI/README.md) — React Native option
- [Main README](../../../README.md) — Project overview

For help getting started with Flutter development, view the [online documentation](https://docs.flutter.dev/), which offers tutorials, samples, guidance on mobile development, and a full API reference.
Restore RunAnywhereAI-specific README content (current text is too generic).
The new README no longer documents how to run or validate this example app’s SDK-dependent features. For this PR’s scope, that’s a major usability gap for contributors and evaluators.
At minimum, add sections for:
- prerequisites + platform constraints (including the Android `arm64-v8a` note),
- initialization/model download flow,
- feature map (Chat, Vision, STT, TTS, Voice, RAG, Structured Output, Models),
- navigation overview and test checklist from the PR.
Suggested README direction

```diff
-# runanywhere_ai
-
-A new Flutter project.
+## RunAnywhereAI Flutter Example
+
+Feature-complete Flutter example using Riverpod + GoRouter + Material 3.
+
+### Prerequisites
+- Flutter SDK: <version>
+- Supported Android ABI: arm64-v8a
+- Device permissions: camera, microphone, storage (as applicable)
+
+### Run
+1. `flutter pub get`
+2. `flutter run`
+3. Wait for intro initialization steps to complete.
+
+### Core Flows to Validate
+- SDK init + models list
+- Model download progress (Downloading → Extracting → Verifying)
+- Chat streaming
+- Vision capture/stream
+- More tab: STT, TTS, Voice, RAG
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/README.md` around lines 1 - 17, Replace the
generic Flutter README with RunAnywhereAI-specific documentation: add a
"Prerequisites" section that lists platform constraints (explicitly call out
Android arm64-v8a and minimum OS/Flutter versions), an "Initialization & Model
Download" section that explains the app startup flow and where/when models are
downloaded/initialized (mention any methods or classes handling this, e.g., app
initialization/bootstrapping code and download handlers), a "Feature Map"
enumerating Chat, Vision, STT, TTS, Voice, RAG, Structured Output, and Models
with brief usage notes and required permissions, and a "Navigation Overview &
Test Checklist" that maps screens/routes to features and provides steps to
validate each feature manually; ensure the README references RunAnywhereAI by
name and includes commands to run the app and verify SDK-dependent features.
…calling

- Add reusable model selection sheet with download/load per category
- Wire model selection into Chat, STT, TTS, Vision, Voice Assistant screens
- Add model delete support with confirmation dialog
- Register ONNX backend (was missing — STT/TTS/Embedding couldn't load)
- Fix STT recording: use 16kHz mono PCM (was defaulting to 44.1kHz)
- Port Swift resolveONNXModelPath to Dart SDK (fixes -422 load error)
- Rewrite RAG with proper SDK pipeline (create, ingest, query)
- Rewrite tool calling screen with Riverpod + demo tools
- Fix Riverpod uninitialized provider crashes (Future.microtask)
- Fix .gitignore tools/ rule catching nested dirs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 8
♻️ Duplicate comments (1)
examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart (1)
**112-122**: ⚠️ Potential issue | 🟠 Major

Failed downloads silently disappear without surfacing an error.

When `progress.state.isFailed`, the code breaks and removes the download entry without setting `errorMessage`. Users see the download disappear with no feedback.

🐛 Suggested fix

```diff
-      if (progress.state.isCompleted || progress.state.isFailed) {
-        break;
-      }
+      if (progress.state.isFailed) {
+        throw Exception('Download failed for ${model.name}');
+      }
+      if (progress.state.isCompleted) {
+        break;
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart` around lines 112 - 122, The loop exits when progress.state.isFailed but the code immediately removes the download entry and refreshes models, causing failures to vanish; update the logic in models_controller.dart so that when progress.state.isFailed you set or propagate a human-readable error into the ModelDownloadInfo (e.g., populate errorMessage on the entry for model.id) before updating state (using state.copyWith(downloads: ...)), or alternatively retain the failed entry instead of removing it, then call _loadModels(); ensure references to progress.state.isFailed, ModelDownloadInfo, state.copyWith(downloads: ...), model.id and _loadModels() are used to locate and apply the change.
🧹 Nitpick comments (6)
examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart (1)
**11-48**: Add a re-entrancy guard only if `initializeSDK` can be called multiple times.

Current code doesn't show re-entry paths — `_initialize()` runs once per IntroController and navigates away on completion. However, if the provider is invalidated externally, `build()` could run again. Since model registration (via `LlamaCpp.addModel`, `Genie.addModel`, `RunAnywhere.registerModel`) is not documented as idempotent, adding a guard is defensive but optional.

Backend registrations (`LlamaCpp.register()`, `Onnx.register()`) are already idempotent and safe to call multiple times.

Consider adding the guard only if the intro flow can be re-triggered (e.g., via screen pop/push, provider invalidation, or retry on error). Otherwise, the current implementation is sufficient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart` around lines 11 - 48, The initializeSDK function may be called multiple times if the provider is rebuilt, so add a simple re-entrancy guard: introduce a private boolean (e.g., _isInitializing or _initialized) checked at the start of initializeSDK (return immediately if already initializing/initialized), set _isInitializing = true when starting, and set _isInitializing = false on error or set _initialized = true on successful completion; reference initializeSDK and the model-registration helpers (_registerLlamaCppModels, _registerGenieModels, _registerVLMModels, _registerSTTModels, _registerTTSModels, _registerEmbeddingModels) so only the top-level initializer is guarded and ensure the flag is cleared in catch/finally to avoid deadlocks.examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart (1)
**28-51**: Consider debouncing scroll updates during rapid streaming.

`ref.listen` on `streamingContent` triggers `_scrollToBottom()` for every token, and each call schedules a 50ms delayed animation. During fast streaming, this queues many overlapping scroll requests, potentially causing performance issues or janky animations.

A simple debounce or throttle would improve this:

Proposed fix using a debounce flag

```diff
 class _ChatScreenState extends ConsumerState<ChatScreen> {
   final _scrollController = ScrollController();
+  bool _scrollPending = false;

   @override
   void dispose() {
     _scrollController.dispose();
     super.dispose();
   }

   void _scrollToBottom() {
     if (!_scrollController.hasClients) return;
+    if (_scrollPending) return;
+    _scrollPending = true;
     Future.delayed(const Duration(milliseconds: 50), () {
+      _scrollPending = false;
       if (_scrollController.hasClients) {
         _scrollController.animateTo(
           _scrollController.position.maxScrollExtent,
           duration: const Duration(milliseconds: 200),
           curve: Curves.easeOut,
         );
       }
     });
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart` around lines 28 - 51, The current ref.listen calls (especially the one observing chatControllerProvider.select((s) => s.streamingContent)) invoke _scrollToBottom() for every streaming token which schedules overlapping delayed animations; add a debounce mechanism (e.g. a private bool _isScrollScheduled or a Timer _scrollDebounce in the widget/state) and modify _scrollToBottom and the ref.listen callbacks to check/set the debounce before scheduling a Future.delayed, and clear/reset the flag or cancel the Timer after the animateTo completes so rapid streaming events coalesce into a single scroll action.examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart (1)
**144-147**: Consider handling partial streaming content on cancellation.

When `cancelGeneration()` is called, any tokens already buffered in `streamingContent` are lost since `isGenerating` is set to false without preserving the partial response. Users may want to see what was generated before cancellation.

Proposed enhancement

```diff
 Future<void> cancelGeneration() async {
   await sdk.RunAnywhere.cancelGeneration();
-  state = state.copyWith(isGenerating: false);
+  // Preserve partial content if user wants to see what was generated
+  if (state.streamingContent.isNotEmpty) {
+    final partialMessage = ChatMessage(
+      role: MessageRole.assistant,
+      content: state.streamingContent,
+      // Optionally mark as cancelled
+    );
+    state = state.copyWith(
+      messages: [...state.messages, partialMessage],
+      isGenerating: false,
+      streamingContent: '',
+    );
+  } else {
+    state = state.copyWith(isGenerating: false);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart` around lines 144 - 147, cancelGeneration() currently calls sdk.RunAnywhere.cancelGeneration() and flips isGenerating to false, which discards any buffered tokens in streamingContent; preserve and surface the partial streaming response before stopping. Update cancelGeneration() to capture the current streaming buffer (e.g., state.streamingContent or whatever field holds streamed tokens), append or commit that partial content into the chat message list or a dedicated partialResponse field on state, then call sdk.RunAnywhere.cancelGeneration() and finally set state = state.copyWith(isGenerating: false, streamingContent: '' or cleared) so users see the partial output; reference the cancelGeneration() method and the state.streamingContent/state.copyWith usages when making this change.examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart (1)
**13-32**: Unused `ref` parameter in `showModelSelectionSheet`.

The `WidgetRef ref` parameter is declared but never used within the function body. Consider removing it to avoid misleading callers.

Proposed fix

```diff
-Future<ModelInfo?> showModelSelectionSheet(
-  BuildContext context,
-  WidgetRef ref, {
-  required ModelSelectionContext selectionContext,
-}) async {
+Future<ModelInfo?> showModelSelectionSheet(
+  BuildContext context, {
+  required ModelSelectionContext selectionContext,
+}) async {
```

Note: this will require updating call sites in `chat_screen.dart` and `vision_screen.dart` to remove the `ref` argument.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart` around lines 13 - 32, Remove the unused WidgetRef parameter from the showModelSelectionSheet signature and all its usages: change Future<ModelInfo?> showModelSelectionSheet(BuildContext context, WidgetRef ref, { required ModelSelectionContext selectionContext, }) to omit the ref parameter, update any callers that currently pass a ref (e.g., places in chat_screen.dart and vision_screen.dart) to call showModelSelectionSheet(context, selectionContext: ...), and remove any now-unused WidgetRef imports at the top of model_selection_sheet.dart.examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart (1)
**77-91**: Consider retrieving the theme from context for consistency.

`_SectionHeader` takes `theme` as a parameter while `_FeatureTile` retrieves it from `Theme.of(context)`. For consistency and to reduce prop drilling, consider using `Theme.of(context)` in both widgets.

♻️ Optional refactor

```diff
 class _SectionHeader extends StatelessWidget {
-  const _SectionHeader({required this.title, required this.theme});
+  const _SectionHeader({required this.title});

   final String title;
-  final ThemeData theme;

   @override
   Widget build(BuildContext context) {
+    final theme = Theme.of(context);
     return Text(
       title,
       style: AppTypography.labelLarge.copyWith(
         color: theme.colorScheme.onSurfaceVariant,
       ),
     );
   }
 }
```

Then update call sites:

```diff
-            _SectionHeader(title: 'Voice & Audio', theme: theme),
+            const _SectionHeader(title: 'Voice & Audio'),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart` around lines 77 - 91, _SectionHeader currently requires a ThemeData prop while _FeatureTile calls Theme.of(context) — remove the theme parameter from _SectionHeader and fetch the theme inside its build method via Theme.of(context); update the constructor signature (remove required this.theme) and any call sites passing theme to instead call _SectionHeader(title: ...); ensure you keep the title field and use theme.colorScheme.onSurfaceVariant exactly as before when building the Text style.examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart (1)
**37-45**: Consider using `ref.onDispose` or lifecycle hooks instead of `Future.microtask` in `build()`.

Using `Future.microtask` inside `build()` can cause issues if the provider is invalidated and rebuilds multiple times, leading to duplicate tool registrations (though `clearTools()` mitigates this). A cleaner pattern would be to use Riverpod's lifecycle hooks.

♻️ Suggested improvement

```diff
 @override
 _ToolsState build() {
-  Future.microtask(() {
-    _registerDemoTools();
-    _syncModel();
-  });
+  // Use ref.listenSelf or schedule initialization once
+  Future.microtask(_initialize);
   return const _ToolsState();
 }
+
+void _initialize() {
+  _registerDemoTools();
+  _syncModel();
+}
```

Alternatively, consider moving tool registration to `initState` in the widget if it should only happen once per screen visit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart` around lines 37 - 45, Replace the Future.microtask workaround in _ToolsController.build(): add a private boolean _initialized field to the controller, call _registerDemoTools() and _syncModel() directly from build() only when !_initialized and then set _initialized = true, and register cleanup with ref.onDispose(() => clearTools()) so tools are cleared on disposal; alternatively, if registration should be tied to the widget lifecycle, move the registration/sync calls into the Widget's initState instead of doing work inside _ToolsController.build().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Line 387: The .gitignore currently has a duplicate entry '/tools/' which
narrows scope to repo-root only and also duplicates an existing tools/ pattern;
remove the duplicate '/tools/' line or change it back to the recursive 'tools/'
pattern so nested tools/ directories remain ignored, ensuring you keep the
single intended entry (either a single 'tools/' for recursive matching or a
single '/tools/' if you really want root-only behavior).
In `@examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart`:
- Around line 85-109: The _downloadModel method currently removes failed
downloads silently; update it to surface failures to the user by (1) when
progress.state.isFailed is detected inside the download loop, set
_downloads[model.id] to a failure state (e.g., progress unchanged, stage from
_stageLabel or "Failed") and show a SnackBar with an error message instead of
just breaking; (2) in the on Exception catch block, show a SnackBar describing
the download failure before removing the entry; ensure you still respect mounted
and call setState around _downloads updates and use
ScaffoldMessenger.of(context).showSnackBar to display the message, then call
_loadModels() only on successful completion as currently done.
- Around line 438-446: The _formatBytes function is using decimal (1000) units
but should follow the SDK's binary (1024) convention; update the thresholds and
divisors in _formatBytes to use 1024-based powers (use 1024, 1024*1024,
1024*1024*1024 or their numeric equivalents 1024, 1048576, 1073741824) while
keeping the existing toStringAsFixed formatting (1 decimal for GB, 0 for MB/KB)
so displayed sizes match the SDK's behavior.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart`:
- Around line 10-15: The fire-and-forget call in ChatController.build that
invokes _syncModelState via Future.microtask currently has no error handling;
change the microtask to an async wrapper that awaits _syncModelState and catches
exceptions (catch (e, st)) and logs them (use debugPrint or your logger) to
avoid unobserved errors; also add import 'package:flutter/foundation.dart' if
using debugPrint. Ensure you reference ChatController.build and the
_syncModelState function when applying the change.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart`:
- Around line 189-194: The close IconButton currently uses padding:
EdgeInsets.zero and constraints: const BoxConstraints(), producing a tiny 16x16
touch target; update the IconButton (or its surrounding widget) to meet
accessibility touch-target minimums by applying larger constraints or padding
(e.g., ensure a minimum size of 44x44 via BoxConstraints(minWidth: 44,
minHeight: 44) or add sufficient EdgeInsets) while preserving the icon size and
keeping onDismiss as the handler so the tappable area is accessible.
In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart`:
- Around line 47-53: The _syncModel method replaces the entire _ToolsState
causing loss of fields like errorMessage, toolLog, response, and isGenerating;
add a copyWith method to the _ToolsState class that accepts optional parameters
for modelName, tools, errorMessage, toolLog, response, and isGenerating and
returns a new _ToolsState preserving unspecified fields, then change _syncModel
to call state = state.copyWith(modelName: model?.name) (or update tools when
needed) so existing state fields are preserved.
- Around line 166-183: The HTTP calls in tools_screen.dart (the two http.get
calls that produce geoRes and wxRes within the try block) lack timeouts and can
hang; modify those http.get calls to apply a timeout (e.g., use .timeout with a
reasonable Duration) and handle TimeoutException in the same try/catch so you
return an error value (same shape as existing returns, e.g., {'error':
StringToolValue('...')}) when a timeout occurs; ensure both the geocoding
request (geoUrl -> geoRes) and the weather request (wxUrl -> wxRes) are updated
and error handling remains consistent with existing logic.
- Around line 214-239: The _eval method mishandles unary negatives and
mixed-operator precedence and can throw on inputs like "-5" or "-3+2"; update
_eval to either (A) replace the hand-rolled logic with a proper expression
parser (e.g., use an existing library or implement a shunting-yard/infix-to-RPN
evaluator) that supports unary minus and operator precedence, or (B) add input
validation and defensive handling: detect and correctly parse a leading '-' as a
unary operator before splitting, reject or short-circuit mixed-operator
expressions by returning a controlled error instead of parsing them, and wrap
the body of _eval (and the caller _calculate) in try-catch to return a
descriptive error rather than allowing double.parse to throw. Ensure references
to _eval (and _calculate) are updated to handle and surface the error result.
---
Duplicate comments:
In `@examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart`:
- Around line 112-122: The loop exits when progress.state.isFailed but the code
immediately removes the download entry and refreshes models, causing failures to
vanish; update the logic in models_controller.dart so that when
progress.state.isFailed you set or propagate a human-readable error into the
ModelDownloadInfo (e.g., populate errorMessage on the entry for model.id) before
updating state (using state.copyWith(downloads: ...)), or alternatively retain
the failed entry instead of removing it, then call _loadModels(); ensure
references to progress.state.isFailed, ModelDownloadInfo,
state.copyWith(downloads: ...), model.id and _loadModels() are used to locate
and apply the change.
---
Nitpick comments:
In `@examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart`:
- Around line 11-48: The initializeSDK function may be called multiple times if
the provider is rebuilt, so add a simple re-entrancy guard: introduce a private
boolean (e.g., _isInitializing or _initialized) checked at the start of
initializeSDK (return immediately if already initializing/initialized), set
_isInitializing = true when starting, and set _isInitializing = false on error
or set _initialized = true on successful completion; reference initializeSDK and
the model-registration helpers (_registerLlamaCppModels, _registerGenieModels,
_registerVLMModels, _registerSTTModels, _registerTTSModels,
_registerEmbeddingModels) so only the top-level initializer is guarded and
ensure the flag is cleared in catch/finally to avoid deadlocks.
In `@examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart`:
- Around line 13-32: Remove the unused WidgetRef parameter from the
showModelSelectionSheet signature and all its usages: change Future<ModelInfo?>
showModelSelectionSheet(BuildContext context, WidgetRef ref, { required
ModelSelectionContext selectionContext, }) to omit the ref parameter, update any
callers that currently pass a ref (e.g., places in chat_screen.dart and
vision_screen.dart) to call showModelSelectionSheet(context, selectionContext:
...), and remove any now-unused WidgetRef imports at the top of
model_selection_sheet.dart.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart`:
- Around line 144-147: cancelGeneration() currently calls
sdk.RunAnywhere.cancelGeneration() and flips isGenerating to false, which
discards any buffered tokens in streamingContent; preserve and surface the
partial streaming response before stopping. Update cancelGeneration() to capture
the current streaming buffer (e.g., state.streamingContent or whatever field
holds streamed tokens), append or commit that partial content into the chat
message list or a dedicated partialResponse field on state, then call
sdk.RunAnywhere.cancelGeneration() and finally set state =
state.copyWith(isGenerating: false, streamingContent: '' or cleared) so users
see the partial output; reference the cancelGeneration() method and the
state.streamingContent/state.copyWith usages when making this change.
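One way that could look, assuming state fields named `messages` and `streamingContent` and a `ChatMessage.assistant` constructor (all hypothetical names here, not confirmed against the app's code):

```dart
Future<void> cancelGeneration() async {
  // Commit whatever was streamed so far before stopping generation.
  final partial = state.streamingContent;
  if (partial.isNotEmpty) {
    state = state.copyWith(
      messages: [...state.messages, ChatMessage.assistant(partial)],
    );
  }
  await sdk.RunAnywhere.cancelGeneration();
  state = state.copyWith(isGenerating: false, streamingContent: '');
}
```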
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart`:
- Around line 28-51: The current ref.listen calls (especially the one observing
chatControllerProvider.select((s) => s.streamingContent)) invoke
_scrollToBottom() for every streaming token which schedules overlapping delayed
animations; add a debounce mechanism (e.g. a private bool _isScrollScheduled or
a Timer _scrollDebounce in the widget/state) and modify _scrollToBottom and the
ref.listen callbacks to check/set the debounce before scheduling a
Future.delayed, and clear/reset the flag or cancel the Timer after the animateTo
completes so rapid streaming events coalesce into a single scroll action.
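A Timer-based variant of the suggested debounce, assuming a `_scrollController` field on the widget state (the durations are arbitrary choices, not values from the app):

```dart
Timer? _scrollDebounce; // coalesces rapid streaming-token events

void _scrollToBottom() {
  _scrollDebounce?.cancel();
  _scrollDebounce = Timer(const Duration(milliseconds: 80), () {
    if (!_scrollController.hasClients) return;
    _scrollController.animateTo(
      _scrollController.position.maxScrollExtent,
      duration: const Duration(milliseconds: 150),
      curve: Curves.easeOut,
    );
  });
}

@override
void dispose() {
  _scrollDebounce?.cancel(); // avoid firing after the widget is gone
  super.dispose();
}
```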
In `@examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart`:
- Around line 77-91: _SectionHeader currently requires a ThemeData prop while
_FeatureTile calls Theme.of(context) — remove the theme parameter from
_SectionHeader and fetch the theme inside its build method via
Theme.of(context); update the constructor signature (remove required this.theme)
and any call sites passing theme to instead call _SectionHeader(title: ...);
ensure you keep the title field and use theme.colorScheme.onSurfaceVariant
exactly as before when building the Text style.
In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart`:
- Around line 37-45: Replace the Future.microtask workaround in
_ToolsController.build(): add a private boolean _initialized field to the
controller, call _registerDemoTools() and _syncModel() directly from build()
only when !_initialized and then set _initialized = true, and register cleanup
with ref.onDispose(() => clearTools()) so tools are cleared on disposal;
alternatively, if registration should be tied to the widget lifecycle, move the
registration/sync calls into the Widget's initState instead of doing work inside
_ToolsController.build().
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5639bfa5-f09e-4c59-837a-ac003a70b884
📒 Files selected for processing (24)
- .gitignore
- examples/flutter/RunAnywhereAI/lib/core/providers/sdk_provider.dart
- examples/flutter/RunAnywhereAI/lib/core/types/model_selection_context.dart
- examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/model_status_banner.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/models/models_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/more/more_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/stt_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/stt_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/tts_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/tts_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/voice_assistant_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/voice/voice_assistant_screen.dart
- examples/flutter/RunAnywhereAI/lib/router.dart
- examples/flutter/RunAnywhereAI/pubspec.yaml
- sdk/runanywhere-flutter/packages/runanywhere/lib/native/dart_bridge_model_paths.dart
✅ Files skipped from review due to trivial changes (2)
- examples/flutter/RunAnywhereAI/lib/core/types/model_selection_context.dart
- examples/flutter/RunAnywhereAI/lib/features/chat/widgets/model_status_banner.dart
🚧 Files skipped from review as they are similar to previous changes (3)
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_screen.dart
- examples/flutter/RunAnywhereAI/lib/features/rag/rag_controller.dart
- examples/flutter/RunAnywhereAI/lib/features/vision/vision_controller.dart
```gitignore
# React Native - Pre-built xcframeworks (build artifacts)
sdk/runanywhere-react-native/packages/*/ios/xcframeworks/
tools/
/tools/
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Find all tools directories in the repository to verify ignore coverage.
# Search for tools directories, excluding common build artifacts
fd -t d '^tools$' --exclude node_modules --exclude build --exclude .dart_tool --exclude Pods
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 231

🏁 Script executed:

```bash
#!/bin/bash
# Check context around lines 387 and 399 in .gitignore
sed -n '380,410p' .gitignore | cat -n
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 612

🏁 Script executed:

```bash
#!/bin/bash
# Search for all tools patterns in .gitignore
rg -n 'tools' .gitignore
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 260
Remove duplicate /tools/ entry; nested tools/ directories will no longer be ignored.
Line 387 duplicates the /tools/ pattern already present on line 399. More critically, changing from tools/ to /tools/ narrows the pattern scope to match only the repository root, which breaks coverage for nested tools/ directories. A search confirms three nested tools/ directories exist in the codebase:
- Playground/android-use-agent/app/src/main/java/com/runanywhere/agent/tools/
- examples/flutter/RunAnywhereAI/lib/features/tools/
- sdk/runanywhere-commons/tools/
Either revert line 387 to tools/ to maintain recursive matching, or remove it entirely and rely on line 399 if repository-root-only scope is intended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore at line 387, The .gitignore currently has a duplicate entry
'/tools/' which narrows scope to repo-root only and also duplicates an existing
tools/ pattern; remove the duplicate '/tools/' line or change it back to the
recursive 'tools/' pattern so nested tools/ directories remain ignored, ensuring
you keep the single intended entry (either a single 'tools/' for recursive
matching or a single '/tools/' if you really want root-only behavior).
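For reference, the difference between the two patterns comes from gitignore's anchoring rule: a pattern containing a leading slash matches only relative to the repository root, while an unanchored pattern matches at any depth (the trailing slash restricts either form to directories):

```gitignore
tools/     # matches tools/ directories anywhere in the tree
/tools/    # matches only the tools/ directory at the repository root
```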
```dart
Future<void> _downloadModel(ModelInfo model) async {
  setState(() {
    _downloads[model.id] = const _DownloadState(progress: 0, stage: 'Queued');
  });

  try {
    await for (final progress in sdk.RunAnywhere.downloadModel(model.id)) {
      if (!mounted) return;
      setState(() {
        _downloads[model.id] = _DownloadState(
          progress: progress.overallProgress,
          stage: _stageLabel(progress.stage),
        );
      });
      if (progress.state.isCompleted || progress.state.isFailed) break;
    }

    if (!mounted) return;
    setState(() => _downloads.remove(model.id));
    await _loadModels();
  } on Exception {
    if (!mounted) return;
    setState(() => _downloads.remove(model.id));
  }
}
```
Download failures are handled silently without user feedback.
When a download fails (either progress.state.isFailed at line 99 or an exception at line 105), the code removes the download state but provides no indication to the user. Failed downloads simply disappear from the UI.
Consider showing a SnackBar on failure (similar to the load failure handling at line 137-139).
Proposed fix
if (progress.state.isCompleted || progress.state.isFailed) break;
}
if (!mounted) return;
- setState(() => _downloads.remove(model.id));
- await _loadModels();
- } on Exception {
+ final failed = _downloads[model.id]?.stage == 'Failed';
+ setState(() => _downloads.remove(model.id));
+ if (failed) {
+ ScaffoldMessenger.maybeOf(context)?.showSnackBar(
+ SnackBar(content: Text('Failed to download ${model.name}')),
+ );
+ } else {
+ await _loadModels();
+ }
+ } on Exception catch (e) {
if (!mounted) return;
setState(() => _downloads.remove(model.id));
+ ScaffoldMessenger.maybeOf(context)?.showSnackBar(
+ SnackBar(content: Text('Download error: $e')),
+ );
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart`
around lines 85 - 109, The _downloadModel method currently removes failed
downloads silently; update it to surface failures to the user by (1) when
progress.state.isFailed is detected inside the download loop, set
_downloads[model.id] to a failure state (e.g., progress unchanged, stage from
_stageLabel or "Failed") and show a SnackBar with an error message instead of
just breaking; (2) in the on Exception catch block, show a SnackBar describing
the download failure before removing the entry; ensure you still respect mounted
and call setState around _downloads updates and use
ScaffoldMessenger.of(context).showSnackBar to display the message, then call
_loadModels() only on successful completion as currently done.
```dart
String _formatBytes(int bytes) {
  if (bytes >= 1000000000) {
    return '${(bytes / 1000000000).toStringAsFixed(1)} GB';
  }
  if (bytes >= 1000000) {
    return '${(bytes / 1000000).toStringAsFixed(0)} MB';
  }
  return '${(bytes / 1000).toStringAsFixed(0)} KB';
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for how downloadSize is calculated/reported in the SDK
rg -n "downloadSize" --type dart -C 3
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 19152

🏁 Script executed:

```bash
#!/bin/bash
# Search for memoryRequirement definition and how it's calculated
rg -n "memoryRequirement" --type dart -C 2 | head -100
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 9207

🏁 Script executed:

```bash
#!/bin/bash
# Search for any documentation or comments about units (binary vs decimal)
rg -n "(1024|binary|GiB|MiB|KiB)" --type dart -C 2 | head -50
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 5918

🏁 Script executed:

```bash
#!/bin/bash
# Look for the definition and usage of downloadSize in model_types to understand the intent
sed -n '510,660p' sdk/runanywhere-flutter/packages/runanywhere/lib/core/types/model_types.dart
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 5132
Align byte formatting with SDK's binary unit convention (1024).
The SDK's own _formatBytes function (in foundation/error_types/sdk_error.dart) uses binary units (1024), but this method uses SI units (1000), creating a ~2–7% display discrepancy. Standardize to match the SDK's established pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/core/widgets/model_selection_sheet.dart`
around lines 438 - 446, The _formatBytes function is using decimal (1000) units
but should follow the SDK's binary (1024) convention; update the thresholds and
divisors in _formatBytes to use 1024-based powers (use 1024, 1024*1024,
1024*1024*1024 or their numeric equivalents 1024, 1048576, 1073741824) while
keeping the existing toStringAsFixed formatting (1 decimal for GB, 0 for MB/KB)
so displayed sizes match the SDK's behavior.
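A sketch of the 1024-based version, keeping the original precision choices (labels stay GB/MB/KB as in the existing code, even though the divisors become binary):

```dart
String _formatBytes(int bytes) {
  const kib = 1024;
  const mib = kib * 1024;
  const gib = mib * 1024;
  if (bytes >= gib) {
    return '${(bytes / gib).toStringAsFixed(1)} GB'; // 1 decimal for GB
  }
  if (bytes >= mib) {
    return '${(bytes / mib).toStringAsFixed(0)} MB';
  }
  return '${(bytes / kib).toStringAsFixed(0)} KB';
}
```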
```dart
class ChatController extends Notifier<ChatState> {
  @override
  ChatState build() {
    Future.microtask(_syncModelState);
    return const ChatState();
  }
```
Unhandled errors in fire-and-forget _syncModelState call.
Future.microtask(_syncModelState) runs without error handling. If sdk.RunAnywhere.currentLLMModel() throws, the error is unobserved and could cause issues in debug mode or crash in release with Zone.current.handleUncaughtError.
Proposed fix
```diff
 @override
 ChatState build() {
-  Future.microtask(_syncModelState);
+  Future.microtask(() async {
+    try {
+      await _syncModelState();
+    } on Exception catch (e) {
+      // Silently ignore sync errors on startup; model state will be null
+      debugPrint('[ChatController] Failed to sync model state: $e');
+    }
+  });
   return const ChatState();
 }
```

Note: You'll need to add import 'package:flutter/foundation.dart'; for debugPrint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_controller.dart` around
lines 10 - 15, The fire-and-forget call in ChatController.build that invokes
_syncModelState via Future.microtask currently has no error handling; change the
microtask to an async wrapper that awaits _syncModelState and catches exceptions
(catch (e, st)) and logs them (use debugPrint or your logger) to avoid
unobserved errors; also add import 'package:flutter/foundation.dart' if using
debugPrint. Ensure you reference ChatController.build and the _syncModelState
function when applying the change.
```dart
IconButton(
  icon: const Icon(Icons.close, size: 16),
  onPressed: onDismiss,
  padding: EdgeInsets.zero,
  constraints: const BoxConstraints(),
),
```
Small touch target for error dismiss button may impact accessibility.
The combination of padding: EdgeInsets.zero and constraints: const BoxConstraints() creates a minimal 16x16 touch target. WCAG guidelines recommend at least 44x44 points for touch targets.
Proposed fix
IconButton(
icon: const Icon(Icons.close, size: 16),
onPressed: onDismiss,
- padding: EdgeInsets.zero,
- constraints: const BoxConstraints(),
+ padding: const EdgeInsets.all(8),
+ constraints: const BoxConstraints(minWidth: 44, minHeight: 44),
+ visualDensity: VisualDensity.compact,
),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| IconButton( | |
| icon: const Icon(Icons.close, size: 16), | |
| onPressed: onDismiss, | |
| padding: EdgeInsets.zero, | |
| constraints: const BoxConstraints(), | |
| ), | |
| IconButton( | |
| icon: const Icon(Icons.close, size: 16), | |
| onPressed: onDismiss, | |
| padding: const EdgeInsets.all(8), | |
| constraints: const BoxConstraints(minWidth: 44, minHeight: 44), | |
| visualDensity: VisualDensity.compact, | |
| ), |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/chat/chat_screen.dart` around
lines 189 - 194, The close IconButton currently uses padding: EdgeInsets.zero
and constraints: const BoxConstraints(), producing a tiny 16x16 touch target;
update the IconButton (or its surrounding widget) to meet accessibility
touch-target minimums by applying larger constraints or padding (e.g., ensure a
minimum size of 44x44 via BoxConstraints(minWidth: 44, minHeight: 44) or add
sufficient EdgeInsets) while preserving the icon size and keeping onDismiss as
the handler so the tappable area is accessible.
```dart
Future<void> _syncModel() async {
  final model = await sdk.RunAnywhere.currentLLMModel();
  state = _ToolsState(
    modelName: model?.name,
    tools: state.tools,
  );
}
```
State updates lose previous field values.
_syncModel creates a new _ToolsState with only modelName and tools, losing errorMessage, toolLog, response, and isGenerating. The _ToolsState class lacks a copyWith method, causing state data loss in multiple places throughout the controller.
🐛 Proposed fix: Add copyWith to _ToolsState
class _ToolsState {
const _ToolsState({
this.isGenerating = false,
this.toolLog = '',
this.response = '',
this.modelName,
this.errorMessage,
this.tools = const [],
});
final bool isGenerating;
final String toolLog;
final String response;
final String? modelName;
final String? errorMessage;
final List<ToolDefinition> tools;
+
+ _ToolsState copyWith({
+ bool? isGenerating,
+ String? toolLog,
+ String? response,
+ String? modelName,
+ String? errorMessage,
+ List<ToolDefinition>? tools,
+ bool clearError = false,
+ }) {
+ return _ToolsState(
+ isGenerating: isGenerating ?? this.isGenerating,
+ toolLog: toolLog ?? this.toolLog,
+ response: response ?? this.response,
+ modelName: modelName ?? this.modelName,
+ errorMessage: clearError ? null : (errorMessage ?? this.errorMessage),
+ tools: tools ?? this.tools,
+ );
+ }
}
Then update _syncModel:
Future<void> _syncModel() async {
final model = await sdk.RunAnywhere.currentLLMModel();
- state = _ToolsState(
- modelName: model?.name,
- tools: state.tools,
- );
+ state = state.copyWith(modelName: model?.name);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart` around
lines 47 - 53, The _syncModel method replaces the entire _ToolsState causing
loss of fields like errorMessage, toolLog, response, and isGenerating; add a
copyWith method to the _ToolsState class that accepts optional parameters for
modelName, tools, errorMessage, toolLog, response, and isGenerating and returns
a new _ToolsState preserving unspecified fields, then change _syncModel to call
state = state.copyWith(modelName: model?.name) (or update tools when needed) so
existing state fields are preserved.
```dart
try {
  final geoUrl = Uri.parse(
      'https://geocoding-api.open-meteo.com/v1/search?name=${Uri.encodeComponent(location)}&count=1&language=en&format=json');
  final geoRes = await http.get(geoUrl);
  final geoData = jsonDecode(geoRes.body) as Map<String, dynamic>;
  final results = geoData['results'] as List?;
  if (results == null || results.isEmpty) {
    return {'error': StringToolValue('Location not found: $location')};
  }

  final first = results[0] as Map<String, dynamic>;
  final lat = first['latitude'] as num;
  final lon = first['longitude'] as num;

  final wxUrl = Uri.parse(
      'https://api.open-meteo.com/v1/forecast?latitude=$lat&longitude=$lon&current=temperature_2m,relative_humidity_2m,weather_code,wind_speed_10m&temperature_unit=fahrenheit&wind_speed_unit=mph');
  final wxRes = await http.get(wxUrl);
  final wxData = jsonDecode(wxRes.body) as Map<String, dynamic>;
```
HTTP requests lack timeout.
The http.get calls have no timeout, which could cause indefinite hangs on slow or unresponsive networks. Consider adding a timeout for better UX.
🛡️ Suggested fix
- final geoRes = await http.get(geoUrl);
+ final geoRes = await http.get(geoUrl).timeout(
+ const Duration(seconds: 10),
+ onTimeout: () => throw Exception('Request timed out'),
+ );
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```dart
try {
  final geoUrl = Uri.parse(
      'https://geocoding-api.open-meteo.com/v1/search?name=${Uri.encodeComponent(location)}&count=1&language=en&format=json');
  final geoRes = await http.get(geoUrl).timeout(
    const Duration(seconds: 10),
    onTimeout: () => throw Exception('Request timed out'),
  );
  final geoData = jsonDecode(geoRes.body) as Map<String, dynamic>;
  final results = geoData['results'] as List?;
  if (results == null || results.isEmpty) {
    return {'error': StringToolValue('Location not found: $location')};
  }

  final first = results[0] as Map<String, dynamic>;
  final lat = first['latitude'] as num;
  final lon = first['longitude'] as num;

  final wxUrl = Uri.parse(
      'https://api.open-meteo.com/v1/forecast?latitude=$lat&longitude=$lon&current=temperature_2m,relative_humidity_2m,weather_code,wind_speed_10m&temperature_unit=fahrenheit&wind_speed_unit=mph');
  final wxRes = await http.get(wxUrl);
  final wxData = jsonDecode(wxRes.body) as Map<String, dynamic>;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart` around
lines 166 - 183, The HTTP calls in tools_screen.dart (the two http.get calls
that produce geoRes and wxRes within the try block) lack timeouts and can hang;
modify those http.get calls to apply a timeout (e.g., use .timeout with a
reasonable Duration) and handle TimeoutException in the same try/catch so you
return an error value (same shape as existing returns, e.g., {'error':
StringToolValue('...')}) when a timeout occurs; ensure both the geocoding
request (geoUrl -> geoRes) and the weather request (wxUrl -> wxRes) are updated
and error handling remains consistent with existing logic.
```dart
double _eval(String expr) {
  final e = expr.replaceAll(' ', '');
  if (e.contains('+')) {
    return e.split('+').map(double.parse).reduce((a, b) => a + b);
  } else if (e.contains('-')) {
    final parts = e.split('-');
    var r = double.parse(parts[0]);
    for (var i = 1; i < parts.length; i++) {
      r -= double.parse(parts[i]);
    }
    return r;
  } else if (e.contains('*')) {
    return e.split('*').map(double.parse).reduce((a, b) => a * b);
  } else if (e.contains('/')) {
    final parts = e.split('/');
    var r = double.parse(parts[0]);
    for (var i = 1; i < parts.length; i++) {
      r /= double.parse(parts[i]);
    }
    return r;
  } else if (e.contains('^')) {
    final parts = e.split('^');
    return math.pow(double.parse(parts[0]), double.parse(parts[1])).toDouble();
  }
  return double.parse(e);
}
```
Expression evaluator has edge-case crashes and no operator precedence.
The _eval method will fail on:
- Negative numbers: "-5" → split('-') produces ['', '5'], and double.parse('') throws
- Leading negative: "-3+2" → same issue
- Mixed operators: "2+3*4" → split('+') yields '3*4', which double.parse cannot parse (there is no operator-precedence handling)

For a demo, this may be acceptable, but consider returning an error for unsupported expressions rather than throwing.
🛡️ Optional defensive improvement
double _eval(String expr) {
final e = expr.replaceAll(' ', '');
+ // Simple validation - only allow digits, operators, and decimal points
+ if (!RegExp(r'^[\d+\-*/^.]+$').hasMatch(e)) {
+ throw FormatException('Unsupported expression format');
+ }
  if (e.contains('+')) {
Or wrap the entire method in a try-catch and return a more descriptive error in _calculate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/flutter/RunAnywhereAI/lib/features/tools/tools_screen.dart` around
lines 214 - 239, The _eval method mishandles unary negatives and mixed-operator
precedence and can throw on inputs like "-5" or "-3+2"; update _eval to either
(A) replace the hand-rolled logic with a proper expression parser (e.g., use an
existing library or implement a shunting-yard/infix-to-RPN evaluator) that
supports unary minus and operator precedence, or (B) add input validation and
defensive handling: detect and correctly parse a leading '-' as a unary operator
before splitting, reject or short-circuit mixed-operator expressions by
returning a controlled error instead of parsing them, and wrap the body of _eval
(and the caller _calculate) in try-catch to return a descriptive error rather
than allowing double.parse to throw. Ensure references to _eval (and _calculate)
are updated to handle and surface the error result.
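If option (A) is taken without pulling in a library, a small recursive-descent parser covers unary minus and operator precedence in a few dozen lines. This is a hedged sketch, not the app's code (`evalExpr` is a hypothetical name; it still throws FormatException on malformed input, which the caller would catch):

```dart
import 'dart:math' as math;

// Grammar: sum  -> term (('+'|'-') term)*
//          term -> power (('*'|'/') power)*
//          power -> primary ('^' power)?   (right-associative)
//          primary -> '-' primary | number
double evalExpr(String expr) {
  final s = expr.replaceAll(' ', '');
  var pos = 0;

  double parsePrimary() {
    if (pos < s.length && s[pos] == '-') {
      pos++;
      return -parsePrimary(); // unary minus, e.g. "-5" or "-3+2"
    }
    final start = pos;
    while (pos < s.length && RegExp(r'[0-9.]').hasMatch(s[pos])) {
      pos++;
    }
    return double.parse(s.substring(start, pos)); // throws on bad input
  }

  double parsePower() {
    final base = parsePrimary();
    if (pos < s.length && s[pos] == '^') {
      pos++;
      return math.pow(base, parsePower()).toDouble();
    }
    return base;
  }

  double parseTerm() {
    var value = parsePower();
    while (pos < s.length && (s[pos] == '*' || s[pos] == '/')) {
      final op = s[pos++];
      final rhs = parsePower();
      value = op == '*' ? value * rhs : value / rhs;
    }
    return value;
  }

  double parseSum() {
    var value = parseTerm();
    while (pos < s.length && (s[pos] == '+' || s[pos] == '-')) {
      final op = s[pos++];
      final rhs = parseTerm();
      value = op == '+' ? value + rhs : value - rhs;
    }
    return value;
  }

  return parseSum();
}
```

Unlike the split-based version, this yields 14 for '2+3*4' and -1 for '-3+2'.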
…rget)
- Add Podfile with static linkage for RACommons xcframeworks
- Add camera, microphone, speech, photo library permission descriptions
- Bump iOS deployment target from 13.0 to 14.0 (required by SDK podspecs)
- Fix display name to "RunAnywhere AI"
- Remove unused Scene manifest from Info.plist

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Copy app icons from Android example to Flutter (all densities + adaptive)
- Add app icon to Flutter assets and use it on the intro screen
- Add missing ic_launcher_background color resource
- Restore missing binary_config.gradle for runanywhere SDK
- Fix duplicate librac_commons.so by adding pickFirsts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (2)
examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart (2)
186-199: Clamp progress before binding to LinearProgressIndicator

LinearProgressIndicator.value expects a 0..1 range. Clamping here avoids accidental out-of-range values from future controller changes.

Proposed change

```diff
 ClipRRect(
   borderRadius: BorderRadius.circular(AppSpacing.radiusFull),
   child: TweenAnimationBuilder<double>(
-    tween: Tween(begin: 0, end: progress),
+    tween: Tween(begin: 0, end: progress.clamp(0.0, 1.0).toDouble()),
     duration: const Duration(milliseconds: 400),
     curve: Curves.easeInOut,
     builder: (context, value, _) => LinearProgressIndicator(
-      value: value,
+      value: value.clamp(0.0, 1.0).toDouble(),
       minHeight: 6,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart` around lines 186 - 199, Clamp the animated progress before passing it to LinearProgressIndicator to ensure it stays in the 0..1 range; inside the TweenAnimationBuilder's builder (where the parameter is named value and the widget is LinearProgressIndicator) wrap or replace value with a clamped value (e.g., value.clamp(0.0, 1.0)) for the LinearProgressIndicator.value and keep other properties the same so out-of-range progress from the controller cannot break rendering.
64-68: Guard completion transition before navigating

This currently navigates on any state where isComplete == true. Adding an edge guard (false -> true) makes routing resilient if future state updates occur after completion.

Proposed change

```diff
-    ref.listen(introControllerProvider, (prev, next) {
-      if (next.isComplete) {
-        context.go('/home/chat');
-      }
-    });
+    ref.listen(introControllerProvider, (prev, next) {
+      final wasComplete = prev?.isComplete ?? false;
+      if (!wasComplete && next.isComplete) {
+        context.go('/home/chat');
+      }
+    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart` around lines 64 - 68, The listener on introControllerProvider currently navigates whenever next.isComplete is true; change the guard inside the ref.listen callback so it only triggers the transition on the edge from incomplete to complete (i.e., when prev is null or prev.isComplete == false and next.isComplete == true) before calling context.go('/home/chat'), using the existing ref.listen, introControllerProvider, isComplete, and context.go symbols to locate the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart`:
- Around line 186-199: Clamp the animated progress before passing it to
LinearProgressIndicator to ensure it stays in the 0..1 range; inside the
TweenAnimationBuilder's builder (where the parameter is named value and the
widget is LinearProgressIndicator) wrap or replace value with a clamped value
(e.g., value.clamp(0.0, 1.0)) for the LinearProgressIndicator.value and keep
other properties the same so out-of-range progress from the controller cannot
break rendering.
- Around line 64-68: The listener on introControllerProvider currently navigates
whenever next.isComplete is true; change the guard inside the ref.listen
callback so it only triggers the transition on the edge from incomplete to
complete (i.e., when prev is null or prev.isComplete == false and
next.isComplete == true) before calling context.go('/home/chat'), using the
existing ref.listen, introControllerProvider, isComplete, and context.go symbols
to locate the code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: daa91ff7-65dc-4ac5-8e58-b4e49431eebf
⛔ Files ignored due to path filters (19)
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-hdpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-hdpi/ic_launcher_foreground.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-hdpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-ldpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-ldpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-mdpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-mdpi/ic_launcher_foreground.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-mdpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xhdpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xhdpi/ic_launcher_foreground.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xhdpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxhdpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_foreground.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxhdpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_foreground.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-xxxhdpi/ic_launcher_round.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/assets/app_icon.png is excluded by !**/*.png
- examples/flutter/RunAnywhereAI/assets/app_icon_foreground.png is excluded by !**/*.png
📒 Files selected for processing (6)
- `examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml`
- `examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml`
- `examples/flutter/RunAnywhereAI/android/app/src/main/res/values/colors.xml`
- `examples/flutter/RunAnywhereAI/lib/features/intro/intro_screen.dart`
- `examples/flutter/RunAnywhereAI/pubspec.yaml`
- `sdk/runanywhere-flutter/packages/runanywhere/android/build.gradle`
✅ Files skipped from review due to trivial changes (1)
- examples/flutter/RunAnywhereAI/android/app/src/main/res/values/colors.xml
🚧 Files skipped from review as they are similar to previous changes (2)
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher.xml
- examples/flutter/RunAnywhereAI/android/app/src/main/res/mipmap-anydpi-v26/ic_launcher_round.xml
… icons

- Collapse 7 tabs to 4 (Chat, Vision, More, Settings); move STT/TTS/Voice/RAG/Models under More stack
- Replace iOS-blue theme with orange + B&W palette; add ThemeProvider + useTheme() with system appearance listener and light/dark variants
- Swap react-native-vector-icons/Ionicons for lucide-react-native via central AppIcon wrapper
- Convert ConversationList + ChatAnalytics from state modals to native-stack modal screens
- Add common primitives (ScreenHeader, SectionHeader, FeatureTile) used by new MoreScreen hub
- Extract ModelsScreen (FlashList catalog with download/delete/progress) from SettingsScreen
- Refactor ChatScreen (FlashList, memoized bubbles) and VLMScreen to useTheme()
- Rewrite RAGScreen UI: setup slots, themed message bubbles, send-icon input bar
- Android edge-to-edge: transparent status/nav bars, WindowCompat decorFitsSystemWindows=false, tab bar respects useSafeAreaInsets
- Light slop cleanup across all screens: remove top docstrings, _unused stubs, stray console.logs, 6k+ non-breaking spaces in Settings
- Migrate all screens from react-native SafeAreaView to react-native-safe-area-context
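The central `AppIcon` wrapper mentioned above is what makes the icon-library swap a one-file change. A minimal sketch of the name-to-icon lookup with a fallback — the registry names and the fallback choice here are illustrative, not the app's actual table, and the icon type is kept generic so the logic stays independent of `lucide-react-native`:

```typescript
// Generic name -> icon registry with a fallback, in the spirit of the
// central AppIcon wrapper: screens ask by name, and an unregistered name
// resolves to the fallback instead of crashing the render.
type IconRegistry<T> = {
  icons: Record<string, T>;
  fallback: T;
};

function resolveIcon<T>(registry: IconRegistry<T>, name: string): T {
  return Object.prototype.hasOwnProperty.call(registry.icons, name)
    ? registry.icons[name]
    : registry.fallback;
}

// Illustrative registry keyed by Lucide-style names (placeholder entries).
const registry: IconRegistry<string> = {
  icons: { send: 'Send', chat: 'MessageCircle', settings: 'Settings' },
  fallback: 'HelpCircle',
};
```

In the real wrapper the record values would be Lucide components rather than strings; the lookup rule is the same.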
Summary
Ships rewrites of both the Flutter and React Native example apps so they share the same structure, design system, and feature coverage on-device.
Both apps now share the same mental model:
Chat · Vision · More · Settings, orange primary + black/white neutral palette, light + dark themes, Lucide/Tabler-style stroke icons, and the full SDK surface reachable from one hub.

What changed — Flutter
- State management moved to Riverpod `Notifier` providers, replacing the old `setState` calls and singletons.
- Navigation is a GoRouter `ShellRoute` with 4 bottom tabs.
- Design system: `AppColors`, `AppSpacing`, `AppTypography` tokens on a seeded M3 `ColorScheme`.
- Model downloads surface real `Downloading → Extracting → Verifying` stages.
- Removed dead code: `PlatformChannelHandler`, unused fields, duplicate tool implementations.

What changed — React Native
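The React Native theming below rests on one rule: a persisted user override wins, otherwise the system appearance applies. A sketch under that assumption — hex values other than the orange `#E8620A` named in this PR are placeholders, and the token tables are trimmed to two fields:

```typescript
// Scheme resolution assumed behind ThemeProvider/useTheme(): a persisted
// override takes precedence; null means "follow the system appearance".
type Scheme = 'light' | 'dark';

function resolveScheme(system: Scheme, override: Scheme | null): Scheme {
  return override ?? system;
}

// Placeholder token tables; only the #E8620A primary comes from the PR.
const palette: Record<Scheme, { primary: string; background: string }> = {
  light: { primary: '#E8620A', background: '#FFFFFF' },
  dark: { primary: '#E8620A', background: '#111111' },
};

function colorsFor(system: Scheme, override: Scheme | null) {
  return palette[resolveScheme(system, override)];
}
```

The system-appearance listener only needs to re-run `colorsFor` with the new `system` value; a stored override keeps the user's choice stable across appearance changes.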
- `TabNavigator` now has 4 tabs: Chat · Vision · More · Settings.
- A new `MoreStackNavigator` hosts STT, TTS, Voice Assistant, RAG, Models.
- `RootNavigator` (native-stack) with `ConversationList` and `ChatAnalytics` promoted from ad-hoc `<Modal>` to proper modal stack screens.
- `ThemeProvider` + `useTheme()` hook with system-appearance listener and persisted override.
- Orange (`#E8620A`) primary + B&W neutrals; semantic `userBubble`/`assistantBubble`/`surface`/`border` tokens.
- `lightColors` and `darkColors` exported; legacy `Colors.*` keys preserved so every existing consumer still compiles.
- Dropped `react-native-vector-icons` (Ionicons). Adopted `lucide-react-native` via a single `AppIcon` wrapper (48 named icons with a sensible fallback).
- The send button no longer renders the `AlertCircle` fallback — proper `Send` icon now.
- `MoreScreen` — Flutter-style hub with `SectionHeader` + `FeatureTile` cards.
- `ModelsScreen` — `@shopify/flash-list`-backed catalog with per-row download, progress bar, delete, pull-to-refresh; extracted from the monolithic `SettingsScreen`.
- `ChatScreen` — refactored to `FlashList`, memoized `MessageBubble`, navigation-based conversation/analytics modals, orange user bubble, neutral assistant.
- `VLMScreen` — full `useTheme()` pass, Lucide icons, cleaner control bar, themed live-streaming badge.
- `RAGScreen` — redesigned setup flow: three stacked `SetupSlot` tiles (Embedding Model → LLM → Document) that highlight orange as each step becomes ready; themed message bubbles, rounded-pill input bar with circular `Send` button.
- `SettingsScreen` — migrated to `safe-area-context`; fixed 6033 non-breaking-space characters left by an earlier cleanup pass (the source of runtime "Text strings must be rendered within a `<Text>` component" warnings).
- Android edge-to-edge, `styles.xml`: transparent `statusBarColor` + `navigationBarColor`, `windowDrawsSystemBarBackgrounds`, `shortEdges` cutout mode.
- `MainActivity.kt`: `WindowCompat.setDecorFitsSystemWindows(window, false)`.
- `<StatusBar translucent>` in `App.tsx`.
- `TabNavigator` height is now `58 + insets.bottom` via `useSafeAreaInsets()` instead of the old hard-coded iOS `84/26` vs Android `62/8` branches.
- Migrated all screens from `react-native`'s `SafeAreaView` to `react-native-safe-area-context`.
- New common primitives: `ScreenHeader`, `SectionHeader`, `FeatureTile`, themed `ModelStatusBanner`/`ModelRequiredOverlay`/`LoadingOverlay`.
- Added dependencies: `lucide-react-native`, `react-native-svg`, `@shopify/flash-list`.
- Removed: `react-native-vector-icons`, `@types/react-native-vector-icons`, `@tabler/icons-react-native` (Tabler's root re-exports ~6000 icons and blew up Metro to 7k+ modules; Lucide drops the bundle to ~1150 modules).
- Removed `react-native-keyboard-controller` (required a `react-native-reanimated` peer; not needed for MVP).
- Cleaned up `react-native.config.js`.
- Removed `_unused` stubs (`_STT_MODEL_IDS`, `_getRelevantCategories`, `_availableModels`, `_isRefreshing`) and stray `console.log` calls across all screens.

Test plan — Flutter
Test plan — React Native (Android)
- `cd sdk/runanywhere-react-native && ./scripts/build-react-native.sh --setup --android --backends=llamacpp,onnx` (first-time native build)
- `cd examples/react-native/RunAnywhereAI && npm install`
- `npx react-native run-android`
- Verify the chat input bar shows the `Send` icon; download a SmolLM2 model, load, stream a reply

Test plan — React Native (iOS)
- `./scripts/build-react-native.sh --setup --ios --backends=llamacpp,onnx` on a macOS machine
- `cd examples/react-native/RunAnywhereAI && npm install && cd ios && pod install && cd ..`
- `npx react-native run-ios`
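One detail worth verifying on both platforms during the test runs above is the inset-aware tab bar. A minimal sketch of the height rule — the 58 base and the old `84/26` vs `62/8` values come from this PR, and the inset shape mirrors what `useSafeAreaInsets()` returns from `react-native-safe-area-context`:

```typescript
// Inset-aware tab bar height: a fixed content height plus the device's
// real bottom safe-area inset, reported by useSafeAreaInsets() at runtime.
interface EdgeInsets {
  top: number;
  right: number;
  bottom: number;
  left: number;
}

const TAB_BAR_BASE = 58;

function tabBarHeight(insets: EdgeInsets): number {
  // Replaces the old hard-coded branches (iOS 84 total with 26 bottom
  // padding vs Android 62 with 8) with one rule driven by the inset.
  return TAB_BAR_BASE + insets.bottom;
}
```

On a notched iPhone (`bottom: 34`) this yields 92; on a gesture-nav Android device with a zero bottom inset it stays at 58, so no per-platform branching is needed.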