Commit 8886699
Added LoRA + Fixed Build All Workflow (#389)
* feat(lora): add LoRA adapter support across SDK + demo app
Implement LoRA (Low-Rank Adaptation) adapter hot-swapping for the llama.cpp
backend across all 6 SDK layers (C++ -> C API -> Component -> JNI ->
Kotlin Bridge -> Kotlin Public API).
- Add load/remove/clear/query LoRA adapter operations
- Use vtable dispatch in the component layer to decouple librac_commons
from librac_backend_llamacpp, fixing linker errors (vtable sketch after
this list)
- Add LoRA vtable entries to rac_llm_service_ops_t
- Fix the AttachCurrentThread cast for the Android NDK C++ JNI build
(snippet after this list)
- Add RunAnyWhereLora Android demo app with Material 3 Q&A UI
- Add comprehensive implementation docs with C/C++ API reference
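A rough sketch of the vtable idea; the field names and signatures below are
illustrative, not the actual rac_llm_service_ops_t layout:

```c
#include <stdint.h>

/* Illustrative only. librac_commons calls LoRA operations through these
 * function pointers, which librac_backend_llamacpp fills in when it
 * registers the service, so commons never references backend symbols
 * directly and the undefined-symbol linker errors go away. */
typedef struct rac_llm_service_ops {
    /* ...existing ops such as generate/tokenize... */
    int32_t (*lora_load)(void *service, const char *adapter_path, float scale);
    int32_t (*lora_remove)(void *service, const char *adapter_path);
    int32_t (*lora_clear)(void *service);
    int32_t (*lora_get_info)(void *service, char **out_json);
} rac_llm_service_ops_t;
```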
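The AttachCurrentThread fix addresses a real signature difference: the Android
NDK declares the C++ JavaVM::AttachCurrentThread as taking JNIEnv**, while the
desktop JDK headers (and the JNI spec's C interface) take void**. A sketch of
the conditional cast, where g_vm is a placeholder for however the SDK caches
the JavaVM:

```cpp
#include <jni.h>

extern JavaVM *g_vm;   // placeholder for the SDK's cached JavaVM pointer

JNIEnv *attachCurrentThread() {
    JNIEnv *env = nullptr;
#if defined(__ANDROID__)
    // NDK signature: jint AttachCurrentThread(JNIEnv **p_env, void *args)
    g_vm->AttachCurrentThread(&env, nullptr);
#else
    // JDK signature: jint AttachCurrentThread(void **penv, void *args)
    g_vm->AttachCurrentThread(reinterpret_cast<void **>(&env), nullptr);
#endif
    return env;
}
```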
* feat(ci): add selectable build targets to Build All workflow + fix Swift concurrency errors
Rewrite build-all-test.yml with 9 boolean checkbox inputs so each build
target can be toggled independently from the GitHub Actions UI (input
sketch after this list):
- C++ Android Backends (arm64-v8a, armeabi-v7a, x86_64 matrix)
- C++ iOS Backends (XCFramework)
- Kotlin SDK (JVM + Android)
- Swift SDK (iOS/macOS)
- Web SDK (TypeScript)
- Flutter SDK (Dart analyze via Melos)
- React Native SDK (TypeScript via Lerna)
- Android Example Apps (RunAnywhereAI + RunAnyWhereLora)
- IntelliJ Plugin
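A minimal sketch of how the checkbox inputs wire up; the input ids, job names,
and steps here are illustrative, not necessarily the ones used in
build-all-test.yml:

```yaml
# Sketch only: input ids, job names, and steps are illustrative.
on:
  workflow_dispatch:
    inputs:
      build_kotlin_sdk:
        description: "Kotlin SDK (JVM + Android)"
        type: boolean
        default: true
      build_swift_sdk:
        description: "Swift SDK (iOS/macOS)"
        type: boolean
        default: true
      # ...one boolean input per build target, nine in total

jobs:
  kotlin-sdk:
    if: ${{ inputs.build_kotlin_sdk }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...Gradle build steps
```

Each boolean input renders as a checkbox in the "Run workflow" dialog, and the
`if:` guard on a job skips it when its box is unchecked.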
Fix two Swift strict-concurrency errors that fail the Swift SDK build:
- LiveTranscriptionSession: add @unchecked Sendable (safe because the class
is @MainActor, so all access is serialized; declaration sketch after this
list)
- RunAnywhere+VisionLanguage: add Sendable conformance to rac_vlm_image_t
so the C struct can cross the Task boundary in the streaming builder;
simplify StreamingCollector to start timing at init
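A minimal sketch of the two conformances, assuming declarations along these
lines (bodies elided):

```swift
// Sketch only: the real declarations live in the Swift SDK sources.
// @unchecked is justified because the class is @MainActor, so all
// mutable state is accessed on the main actor.
@MainActor
public final class LiveTranscriptionSession: @unchecked Sendable {
    // ...
}

// Lets the C struct cross the Task boundary in the streaming builder.
extension rac_vlm_image_t: @unchecked Sendable {}
```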
* fix(swift): resolve strict concurrency errors in LiveTranscriptionSession and VLM streaming
LiveTranscriptionSession.swift:
- Replace [weak self] captures with a strong `let session = self` before
closures to avoid capturing a mutable var in @Sendable/Task contexts (the
class is @MainActor @unchecked Sendable, so the strong reference is safe
and bounded by the stream lifecycle; see the sketch after this list)
- Wrap the deprecated startStreamingTranscription call in an @available
helper to silence the deprecation warning until migration to the
transcribeStream API
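A sketch of the capture pattern; `streamTask`, `segmentStream`, and
`handle(_:)` are made-up members used only to show the shape:

```swift
// Illustrative only. Taking a strong, immutable `let` before the closure
// avoids the captured-var diagnostics; the class is @MainActor
// @unchecked Sendable, and the reference lives no longer than the stream.
let session = self            // instead of [weak self]
streamTask = Task {
    for await segment in session.segmentStream {
        session.handle(segment)
    }
}
```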
RunAnywhere+VisionLanguage.swift:
- Add `let capturedCImage = cImage` before the AsyncThrowingStream closure
so the Task captures an immutable `let` instead of a mutable `var`
(sketch after this list)
- Add `extension rac_vlm_image_t: @unchecked Sendable {}` for the C
struct to cross Task concurrency boundaries safely
- Simplify StreamingCollector to initialize startTime at init instead
of requiring a separate async start() call
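Roughly what the builder change looks like; the stream element type and the
streamTokens helper are placeholders:

```swift
// Illustrative shape only; helper and element type are placeholders.
let capturedCImage = cImage        // immutable copy of the mutable var built above
return AsyncThrowingStream<String, Error> { continuation in
    Task {
        // capturedCImage crosses into the Task; rac_vlm_image_t is
        // @unchecked Sendable via the extension added in this commit
        try await streamTokens(for: capturedCImage, into: continuation)
    }
}
```

And the collector simplification, with the stored property name assumed:

```swift
// StreamingCollector now stamps its start time at init, so callers no
// longer invoke a separate async start().
final class StreamingCollector {
    let startTime = Date()
    // ...token accumulation elided
}
```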
* fix(jni): address CodeRabbit review findings in LoRA JNI functions
- Replace raw -1 returns with RAC_ERROR_INVALID_HANDLE/RAC_ERROR_INVALID_ARGUMENT
to match the codebase's error-handling conventions (see the sketch after
this list)
- Use getCString() helper instead of raw GetStringUTFChars/ReleaseStringUTFChars
- Add missing result logging to racLlmComponentRemoveLora and racLlmComponentClearLora
- Use rac_free() instead of free() in racLlmComponentGetLoraInfo for consistency
- Clarify LoRA adapter memory-ownership comments (adapters are freed
automatically with the model per the llama.cpp b8011 API;
llama_adapter_lora_free is deprecated)
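An illustrative sketch of the review fixes; the entry-point name, the
getCString() helper, and the rac_* signatures are assumptions based on the
commit text, not the actual code:

```cpp
// Sketch only: names and signatures are assumed.
jint removeLoraJni(JNIEnv *env, jlong componentHandle, jstring adapterPath) {
    if (componentHandle == 0) {
        return RAC_ERROR_INVALID_HANDLE;        // previously a raw -1
    }
    // getCString() wraps GetStringUTFChars/ReleaseStringUTFChars
    const std::string path = getCString(env, adapterPath);
    if (path.empty()) {
        return RAC_ERROR_INVALID_ARGUMENT;      // previously a raw -1
    }
    const int result = rac_llm_component_remove_lora(
        reinterpret_cast<void *>(componentHandle), path.c_str());
    LOGD("racLlmComponentRemoveLora: result=%d", result);   // newly added result log
    return result;
}
```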
65 files changed: 3,984 additions, 40 deletions
File tree
- .github/workflows
- .idea
- docs/impl
- examples/android/RunAnyWhereLora
- .idea
- app
- src
- androidTest/java/com/runanywhere/run_anywhere_lora
- main
- java/com/runanywhere/run_anywhere_lora
- ui/theme
- res
- drawable
- mipmap-anydpi-v26
- mipmap-hdpi
- mipmap-mdpi
- mipmap-xhdpi
- mipmap-xxhdpi
- mipmap-xxxhdpi
- values
- xml
- test/java/com/runanywhere/run_anywhere_lora
- gradle/wrapper
- sdk
- runanywhere-commons
- include/rac
- backends
- features/llm
- src
- backends/llamacpp
- features/llm
- jni
- runanywhere-kotlin/src
- commonMain/kotlin/com/runanywhere/sdk/public/extensions
- LLM
- jvmAndroidMain/kotlin/com/runanywhere/sdk
- foundation/bridge/extensions
- native/bridge
- public/extensions
- runanywhere-swift/Sources/RunAnywhere/Public
- Extensions/VLM
- Sessions