Conversation

General Order of Operations
- Building Simulator Firmware (Steps, Notes)
- Checking Memory Usage (Steps, Notes)
- Memory Comparison (Steps, Notes)
You can check out the workflow artifacts in the actions below!
If it's helpful, here are all the dependent workflows:
- Main Action
- Dependencies
One change that would be incredible here is if …

It is, except not the exe. We'd need to add that as an extension. https://adafruit-circuit-python.s3.amazonaws.com/index.html?prefix=bin/native_native_sim/en_US/
@jwcooper Would it be possible to set up IO or Grafana to log library size over time? (I don't know what IO's history policy is.)

@tannewt IO will store data for 30 or 60 days depending on usage. I'm not sure what kind of timeline would be required for this. We have an internal Grafana, but I'm not sure if we can open it up for this. I only skimmed, but could you just log to a file locally, and periodically generate charts with Python?
@tannewt Okay, that makes sense. Happy to help with that effort as far as CI work goes.
@jwcooper I could set up a central GitHub repository, or a file in the bundle repo for this, but that might make historical graphing slightly convoluted. I have a couple of ideas bouncing around my head in this same vein, though, that I think might work, at least in the immediate term: for example, keeping a SQLite database in another repo updated via CI, plus some Python scripts for generating charts like you say.
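To make the SQLite-database-updated-via-CI idea above concrete, here is a minimal, hypothetical sketch of the logging side. The table layout, function names, and the `adafruit_xyz` example are all invented for illustration, not part of any existing tooling:

```python
import sqlite3
from datetime import datetime, timezone


def record_size(db_path, library, commit_sha, size_bytes):
    """Append one memory-usage measurement to the history database."""
    con = sqlite3.connect(db_path)
    try:
        con.execute(
            "CREATE TABLE IF NOT EXISTS sizes ("
            "  library TEXT, commit_sha TEXT,"
            "  measured_at TEXT, size_bytes INTEGER)"
        )
        con.execute(
            "INSERT INTO sizes VALUES (?, ?, ?, ?)",
            (library, commit_sha,
             datetime.now(timezone.utc).isoformat(), size_bytes),
        )
        con.commit()
    finally:
        con.close()


def history(db_path, library):
    """Return (measured_at, size_bytes) rows for one library, oldest first."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT measured_at, size_bytes FROM sizes "
            "WHERE library = ? ORDER BY measured_at",
            (library,),
        ).fetchall()
    finally:
        con.close()
```

A CI job could call `record_size` after each merge, and a separate charting script could feed `history` into matplotlib.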
I'd love to have all historical data for library size.
I definitely wouldn't open an internal one up.
How costly are small files on S3? We already upload artifacts there. We could have a one-off batch process merge them together from time to time.
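The one-off batch-merge idea above could be sketched roughly like this, assuming each CI run uploads one small JSON record per commit. The record layout and file names here are invented for illustration (a real version would read from and write back to S3, e.g. via boto3, rather than a local directory):

```python
import json
from pathlib import Path


def merge_size_records(record_dir, merged_path):
    """Fold many small per-run JSON records into one combined file.

    Each input record is assumed to look like:
        {"commit": "abc123", "sizes": {"adafruit_xyz": 4096}}
    The merged file maps commit -> sizes, so repeated batch runs
    can re-merge idempotently.
    """
    merged = {}
    merged_file = Path(merged_path)
    if merged_file.exists():
        merged = json.loads(merged_file.read_text())
    for record in sorted(Path(record_dir).glob("*.json")):
        data = json.loads(record.read_text())
        merged[data["commit"]] = data["sizes"]
    merged_file.write_text(json.dumps(merged, indent=2, sort_keys=True))
    return merged
```

Because the merged file keys by commit, re-running the batch over already-merged records just overwrites identical entries.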
Oh! I forgot to mention - because I own the actions, each repository can configure the check via a table in its pyproject.toml:

```toml
[tool.simulate-circuitpyton.circuitpython-memory-check]
skip-action = false
ignore-libraries = [
    "adafruit_xyz.some_submodule",
    "adafruit_xyz.other_submodule",
]
```

The action is then free to dynamically react to each repository when the action runs.
Pretty negligible costs for something like this, I'd think. It would work pretty well, especially since there is already infrastructure set up for this.
Force-pushed ff96e73 to be94951
Note to self (and others): I made some small modifications to the underlying actions, so I force-pushed the same commit to re-run the actions as another check.

@tekktrik what do you think about uploading results to S3? (next to the CP builds)
@tannewt that would be amazing - I can create a new action called something like …

I can PR the work needed for this within the coming days to the CircuitPython repo.
Sounds good. I think you'll need to change the toml file to list the exe extension to get it uploaded.

Alright, I'll get started on using the cached versions of the executable. Using "latest" defaults to the latest released tag, so I'll need to wait for 10.1.5 to be cut before there's an available firmware. :)
Here's my prototype for an import memory usage analysis GitHub Actions workflow.
I'm trying to put all my thoughts down on here, so apologies for the wall of text that follows:
Important Considerations Upfront
Dependency Risk
Perhaps technically the most "impactful" consideration is that I currently manage this action and the few critical ones it depends on. I am happy to continue owning them or to transfer them over to the adafruit organization. I intend to support this like I do tekktrik/issue-labeled-ping, but I also recognize that it is technically a dependency risk, and it would take a patch after the fact to convert the libraries to an Adafruit fork at that time. My vote would be to continue owning and supporting them personally for ease of development and simplicity in initial deployment, and if there comes a time to fork or transfer them, then that's all good.
Artifacts vs. Comments
This action does not post a comment to the PR but rather uses workflow artifacts. This is done because PRs from public forks do not have write access permissions and therefore the workflow cannot post a comment. Instead, important files are uploaded to the workflow as artifacts (and in many cases are reused by a dependent workflow anyway). The downside is that it is up to reviewers to check the artifacts if they are concerned about memory, but the upside is that it will not clutter the discussion. If comment posting is desired/required, the way I would recommend would be to have a secondary, dependent workflow in the repositories that has elevated permissions to post the results of the analysis. This separates the running code and the posting of a simple text comment into appropriate levels of permissions, reducing the likelihood of malicious activity.
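As a rough illustration of the two-workflow pattern described above (not the actual implementation), the privileged side could use GitHub's workflow_run trigger. The workflow and artifact names below are hypothetical:

```yaml
# Hypothetical follow-up workflow living in the base repository.
# It runs with write access only after the untrusted memory-check
# workflow completes, and touches nothing but a text comment.
name: Post memory check results
on:
  workflow_run:
    workflows: ["Run memory check"]  # name of the unprivileged workflow
    types: [completed]

permissions:
  pull-requests: write

jobs:
  comment:
    runs-on: ubuntu-latest
    if: github.event.workflow_run.event == 'pull_request'
    steps:
      - name: Download analysis artifact from the triggering run
        uses: actions/download-artifact@v4
        with:
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ github.token }}
          name: memory-check-results
      # A small script (not shown) would then read the artifact and
      # post the comment via the GitHub API.
```

Keeping the privileged workflow this small is what reduces the blast radius: the fork's code never executes in a context that can write to the PR.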
New Workflow File
Because this action only looks at the difference in memory used between some point and the latest commit on the main branch, it only makes sense for it to be triggered on the pull_request event. This means it requires its own workflow file, rather than being added to the typical build workflow (which runs on both pull_request and push).
Artifacts are Temporary
Artifacts are only stored for 90 days. Logs are not indefinite either, but it does mean that results may be lost if a PR is stale. I personally didn't see this as a major blocker, but worth mentioning. Re-pushing the commit will re-trigger a workflow anyway.
Pinning Action Versions
Similar to the other workflows, actions here are pinned to main so that they receive updates automatically. Workflows used in tekktrik/circuitpython-memory-check are, however, pinned to specific versions to ensure they all work together as intended. This matches how the other composite actions work.
Dummy Changes
I added dummy changes to the actual library as part of this PR in order to trigger some example changes.
I'll add another comment next describing how the workflow actually works.