
Commit 496c8d5

Merge pull request #694 from LLNL/jan-news
January news
2 parents: d7cb8c2 + 5f46261

7 files changed

Lines changed: 28 additions & 3 deletions

_posts/2018-11-26-unify.md

Lines changed: 0 additions & 1 deletion

@@ -11,5 +11,4 @@ Like much of LLNL’s HPC performance improvement software, Unify is open source
 
 - [Unify on GitHub](https://github.com/LLNL/UnifyFS)
 - [Unify Docs](https://unifyfs.readthedocs.io/en/latest/)
-- [Exascale Computing Project](https://exascale.llnl.gov/)
 - [CASC Newsletter, Volume 4](https://computing.llnl.gov/casc/newsletter/vol-4#exascale)

_posts/2019-05-03-dcd-article.md

Lines changed: 1 addition & 1 deletion

@@ -5,6 +5,6 @@ categories: story
 
 Artificial intelligence tools are revolutionizing scientific research and changing the needs of high performance computing. In a [Data Center Dynamics article](https://www.datacenterdynamics.com/analysis/how-machine-learning-could-change-science/), LLNL's Fred Streitz and [Brian Van Essen](https://github.com/bvanessen) discuss the future of scientific computing, highlighting the Exascale Computing Project (ECP) and the Livermore Big Artificial Neural Network (LBANN).
 
-The [ECP](https://exascale.llnl.gov/) is a multi-institutional Department of Energy collaboration aimed at achieving exascale computing capability. Many open source software projects, from LLNL and elsewhere, are crucial components of the ECP ecosystem.
+The ECP is a multi-institutional Department of Energy collaboration aimed at achieving exascale computing capability. Many open source software projects, from LLNL and elsewhere, are crucial components of the ECP ecosystem.
 
 [LBANN](https://github.com/LLNL/lbann) is an open source deep learning toolkit developed at the Lab. It provides model-parallel acceleration through domain decomposition to optimize for strong scaling of network training.
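The idea of model parallelism through domain decomposition can be illustrated with a toy NumPy sketch. This is purely illustrative and not LBANN's API: a layer's weight matrix is partitioned column-wise across hypothetical workers, each computes its slice of the output, and the slices are concatenated to reproduce the full result.

```python
import numpy as np

def split_columns(W, n_workers):
    """Partition a weight matrix column-wise, one block per worker."""
    return np.array_split(W, n_workers, axis=1)

def model_parallel_forward(x, W, n_workers=4):
    """Each 'worker' computes x @ W_block for its block of columns;
    concatenating the partial outputs reproduces the full x @ W.
    (In a real model-parallel framework, each block lives on a
    separate device and the partials are gathered over the network.)"""
    blocks = split_columns(W, n_workers)
    partial = [x @ Wb for Wb in blocks]
    return np.concatenate(partial, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 32))
assert np.allclose(model_parallel_forward(x, W), x @ W)
```

Because each output column depends only on its own column of W, the decomposition is exact, and larger worker counts shrink the per-device memory footprint, which is what enables strong scaling of training.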

_posts/2024-11-12-dyna3d.md

Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@ title: "Podcast: Big Ideas Labs Looks at DYNA3D"
 categories: multimedia
 ---
 
-An iconic LLNL computer code that has saved the automobile industry billions of dollars is the focus for [the newest episode](https://www.llnl.gov/article/52026/big-ideas-lab-looks-how-dyna3d-has-served-workhorse-american-industry-nearly-50-years) of the Big Ideas Lab Podcast. Nearly 50 years ago in 1976, a then-LLNL mechanical engineer named John Hallquist wrote a small, 5,000-line program known as DYNA3D to help supercomputers analyze the structures of bombs dropped from the B-1 aircraft. The code modeled stress traveling through structures. Automakers use it in crash simulations. Beer manufacturers have run the code to design cans. Surgeons have used it to understand how fluid flows through the heart. Jet engine manufacturers have utilized it to certify modifications to engines and to model bird strikes. DYNA3D is a unique story about open source software and entrepreneurship.
+An iconic LLNL computer code that has saved the automobile industry billions of dollars is the focus for [the newest episode](https://www.llnl.gov/article/52026/big-ideas-lab-looks-how-dyna3d-has-served-workhorse-american-industry-nearly-50-years) of the Big Ideas Lab Podcast. Nearly 50 years ago in 1976, a then-LLNL mechanical engineer named John Hallquist wrote a small, 5,000-line program known as DYNA3D to help supercomputers analyze the structures of bombs dropped from the B-1 aircraft. The code modeled stress traveling through structures. Automakers use it in crash simulations. Beer manufacturers have run the code to design cans. Surgeons have used it to understand how fluid flows through the heart. Jet engine manufacturers have utilized it to certify modifications to engines and to model bird strikes. DYNA3D is a unique story about open source software and entrepreneurship.
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+---
+title: "New Repo: CompilerGPT"
+categories: new-repo
+---
+
+[CompilerGPT](https://github.com/LLNL/CompilerGPT) is a framework that submits compiler optimization reports (e.g., from Clang) together with the source code to an LLM. The LLM is prompted to prioritize the findings in the optimization reports and then to modify the code accordingly. An automated test harness validates the changes and reports back to the LLM any errors that were introduced into the code base.
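The report-prompt-validate-feedback loop described above can be sketched in a few lines. The helper names (`llm`, `run_tests`) are placeholders, not CompilerGPT's actual interfaces; this only illustrates the control flow of such a loop.

```python
def optimize_with_llm(source, opt_report, llm, run_tests, max_rounds=5):
    """Iteratively ask an LLM to rewrite `source` guided by a compiler
    optimization report, keeping only revisions that pass the test harness.

    `llm` is any callable mapping a prompt string to revised source;
    `run_tests` returns (ok, error_log). Both are hypothetical stand-ins.
    """
    feedback = ""
    for _ in range(max_rounds):
        prompt = (
            "Prioritize the findings in this optimization report and "
            f"revise the code accordingly.\nReport:\n{opt_report}\n"
            f"Code:\n{source}\n{feedback}"
        )
        candidate = llm(prompt)
        ok, errors = run_tests(candidate)
        if ok:
            source = candidate  # accept only validated changes
            feedback = ""
        else:
            # feed harness errors back into the next prompt
            feedback = f"Previous attempt failed:\n{errors}"
    return source
```

The key design point is that the test harness, not the LLM, is the arbiter: a candidate rewrite is discarded unless it passes validation, and its failure log becomes part of the next prompt.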

_posts/2025-01-10-protlib-new.md

Lines changed: 8 additions & 0 deletions

@@ -0,0 +1,8 @@
+---
+title: "New Repo: protlib-designer"
+categories: new-repo
+---
+
+[protlib-designer](https://github.com/LLNL/protlib-designer) contains a lightweight Python library for designing diverse protein libraries by seeding linear programming with deep mutational scanning data (or any other data that can be represented as a matrix of scores per single-point mutation). The software takes as input the score matrix, where each row corresponds to a mutation and each column corresponds to a different source of scores, and outputs a subset of mutations that maximize the diversity of the library while Pareto-optimizing the scores from the different sources. Related paper: [Antibody Library Design by Seeding Linear Programming with Inverse Folding and Protein Language Models](https://www.biorxiv.org/content/10.1101/2024.11.03.621763v1). Abstract:
+
+> We propose a novel approach for antibody library design that combines deep learning and multi-objective linear programming with diversity constraints. Our method leverages recent advances in sequence and structure-based deep learning for protein engineering to predict the effects of mutations on antibody properties. These predictions are then used to seed a cascade of constrained integer linear programming problems, the solutions of which yield a diverse and high-performing antibody library. Operating in a cold-start setting, our approach creates designs without iterative feedback from wet laboratory experiments or computational simulations. We demonstrate the effectiveness of our method by designing antibody libraries for Trastuzumab in complex with the HER2 receptor, showing that it outperforms existing techniques in overall quality and diversity of the generated libraries.
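The input/output contract of the score matrix described above can be made concrete with a toy sketch. This is a greedy simplification under assumed inputs, not the library's cascaded integer linear programs: it scalarizes the per-source scores and enforces one simple diversity constraint (at most one mutation per sequence position).

```python
import numpy as np

def design_library(scores, positions, k):
    """Greedy stand-in for a diversity-constrained library design.

    scores    : (n_mutations, n_sources) matrix, one row per single-point
                mutation, one column per score source; higher is better.
    positions : per-mutation sequence position (the diversity constraint
                here is: at most one selected mutation per position).
    k         : number of mutations to select.
    """
    scores = np.asarray(scores, dtype=float)
    # Normalize each score source to [0, 1] so no column dominates.
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    norm = (scores - lo) / np.where(hi > lo, hi - lo, 1.0)
    agg = norm.sum(axis=1)  # simple scalarization of the objectives

    chosen, used_positions = [], set()
    for i in np.argsort(-agg):  # best aggregate score first
        if positions[i] not in used_positions:
            chosen.append(int(i))
            used_positions.add(positions[i])
        if len(chosen) == k:
            break
    return chosen
```

A real Pareto-aware formulation would keep the objectives separate and solve constrained integer programs, as the abstract describes; the sketch only shows how a score matrix plus a diversity constraint yields a mutation subset.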

_posts/2025-01-27-pylulesh-new.md

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+---
+title: "New Repo: pylulesh"
+categories: new-repo
+---
+
+[pylulesh](https://github.com/LLNL/pylulesh) is a Python and NumPy port of [LULESH](https://github.com/LLNL/LULESH) 2.0, the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics code.

_posts/2025-01-31-bobgat.md

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+---
+title: "ML-Driven Binary Analysis Pipeline Enhances SQA"
+categories: story
+---
+
+Machine learning (ML) techniques—such as graph neural networks (GNNs) and natural language processing (NLP)—are opening new avenues for automating binary analysis. Leveraging these techniques, computational mathematician Geoff Sanders and former LLNL data scientist Justin Allen explored ways to characterize software behaviors based on similarity to previous threats. Allen built an ML-driven binary analysis pipeline that incorporates large-scale training data and hierarchical embeddings, and the pair presented their paper, [BobGAT: Towards Inferring Software Bill of Behavior with Pre-Trained Graph Attention Networks](https://www.osti.gov/servlets/purl/2475272), at the 2024 IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications. The work was part of a Laboratory Directed Research and Development project focusing on software assurance capabilities. Two complementary open source tools are key to this pipeline. Developed for this research, [CAP (Compile. Analyze. Prepare.)](https://github.com/LLNL/CAP) generates large-scale binary datasets from source code examples; [BinCFG](https://github.com/LLNL/BinCFG) then parses the compiler outputs, tokenizes and normalizes the binary data into assembly lines, and converts the data into ML-ready formats. [Read more about the project at LLNL Computing.](https://computing.llnl.gov/about/newsroom/ml-driven-binary-analysis-pipeline)
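Tokenizing and normalizing assembly lines, as the pipeline description mentions, commonly means replacing concrete constants with placeholder tokens so that instructions differing only in addresses or immediates map to the same token stream. A toy sketch of that idea (illustrative only; not BinCFG's actual tokenizer or API):

```python
import re

def normalize_asm(line):
    """Toy normalization of one assembly line: lowercase it, replace
    hex addresses and decimal immediates with placeholder tokens, and
    split into mnemonic/operand tokens. Instructions that differ only
    in concrete constants then normalize to identical token streams."""
    line = line.strip().lower()
    line = re.sub(r"0x[0-9a-f]+", "<addr>", line)  # hex addresses/immediates
    line = re.sub(r"\b\d+\b", "<imm>", line)       # decimal immediates
    return re.split(r"[\s,]+", line)

# Two instructions that differ only in the target address
# normalize to the same tokens:
normalize_asm("mov eax, 0x4005d0")  # ['mov', 'eax', '<addr>']
normalize_asm("MOV eax, 0x7fff12")  # ['mov', 'eax', '<addr>']
```

Collapsing run-specific constants this way is what lets downstream models, such as graph attention networks over control-flow graphs, generalize across binaries instead of memorizing addresses.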
