
Commit 0edb8a4

Zemin Xu (xvzemin) and janosh committed
Add TACE-OAM-L (#321)
* Add TACE-OAM-L

* bump site deps, fix CI type/lint/format errors
  - bump TS 5.9->6.0, svelte 5.54->5.55, svelte-check-rs 0.9.5->0.9.7
  - drop svelte-preprocess (Svelte 4 relic, conflicts with TS 6)
  - add vite/vitest/svelte overrides for npm peer dep resolution
  - fix ty errors: explicit np.ndarray typing for np.asarray results
  - fix oxlint: remove unused param, fix oxfmt formatting
  - add Mirror Physics logo, TACE-OAM-L figshare URLs and pr_url

* update to phono3py 3.30 API (released today)
  v3.30.0 refactored thermal conductivity routines:
  - conductivity_type="wigner" -> transport_type="MS-SMM19"
  - ConductivityWignerRTA -> RTACalculator
  - Wigner-specific kappa accessed via get_extra_kappa_output()

* restore tace-v1-oam-m.yml and add results of just prepare-model-submission

* fix CI: update test patterns for xs_arr/ys_arr rename, trailing newline, stale comments
  - update test regex patterns to match renamed variables (xs_arr, ys_arr)
  - fix missing trailing newline in per-element-each-errors.json
  - remove stale "faster than FIRE" comments (leftover from GOQN)
  - fix "manual" -> "manually" typo in TACE test error messages
  - bump site deps (@sveltejs/kit, @types/node, svelte)

* fix correctness issues and clean up branch
  - fix duplicate count in _validate_diatomic_curve for non-adjacent duplicates
  - add input validation to calc_second_deriv_smoothness (was silently returning nan/inf)
  - remove redundant descending re-sort in all 3 smoothness functions
  - fix consistency warning comparing against kappa_P_RTA instead of kappa_TOT_RTA
  - narrow bare except Exception to specific types in TACE test scripts
  - remove dead else: pass block in test_tace_discovery.py
  - remove redundant Lmax/lmax duplication in tace-oam-l.yml
  - add regression tests for duplicate detection and validation fixes

* simplify diatomic metrics and TACE test script
  - extract _threshold_diff_signs helper to deduplicate sign-flip logic shared between energy.py (calc_energy_diff_flips, calc_energy_jump) and force.py (calc_force_jump)
  - simplify calc_force_flips: remove intermediate copy + dead variable
  - simplify calc_tortuosity: remove stale comments and temp variables
  - remove unused _sorted_seps in calc_conservation_deviation
  - remove stale 'Sort by separations in descending order' comment
  - simplify data_path lookup in test_tace_discovery.py (single-entry dict)

* trim test bloat: remove dead fixture, duplicate calls, verbose assertions
  - remove unused mace_data fixture and its json/ROOT imports
  - remove duplicate x_lj variable (identical to x)
  - deduplicate calc_second_deriv_smoothness(dists, e_linear) calls
  - replace verbose raise AssertionError blocks with assert + message

* add tests for coverage gaps in diatomic metrics
  - test _threshold_diff_signs helper directly (5 parametrized cases)
  - test calc_energy_diff_flips and calc_energy_jump with concrete values
  - test calc_energy_grad_norm_max with analytically known gradients
  - test calc_curve_diff_auc with identical curves (== 0) and normalize=False
  - test _validate_diatomic_curve normalize_energy=True path and ndim>1 skip

* simplify and strengthen diatomic metric tests
  - parametrize 4 fixture-based metric tests into one
  - parametrize 7 validation error cases from monolithic test_edge_cases
  - add concrete value tests for AUC normalization and force jump
  - remove flat-potential block duplicating parametrized coverage
  - fix alphabetical ordering of org_logos in labels.ts

* handle all Figshare URL variants in download_file, fix broken YAML URLs
  - broaden URL conversion to match figshare.com/ndownloader/files/ and ndownloader.figshare.com/files/ in addition to figshare.com/files/
  - strip query params from file ID extraction
  - fix equiformer_v3_oam analysis_file_urls from ndownloader to standard format
  - annotate deleted Figshare file IDs in superseded dpa3-v1 YAMLs
  - add parametrized tests for all Figshare URL conversion variants

* mark tace-v1-oam-m as superseded

* increase timeout for flaky MetricsTable toggle test

---------

Co-authored-by: Zemin Xu <you@example.com>
Co-authored-by: janosh <janosh.riebesell@gmail.com>
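The broadened Figshare URL handling described above can be sketched as a standalone helper. This is a hypothetical illustration, not the repo's actual download_file code: the function name normalize_figshare_url and the canonical output form are assumptions. It shows one way to accept all three URL variants mentioned in the commit message and to strip query params before extracting the numeric file ID.

```python
import re

# Matches the three Figshare file-URL variants named in the commit message:
#   figshare.com/files/<id>
#   figshare.com/ndownloader/files/<id>
#   ndownloader.figshare.com/files/<id>
_FIGSHARE_FILE_RE = re.compile(
    r"(?:ndownloader\.figshare\.com|figshare\.com(?:/ndownloader)?)/files/(\d+)"
)


def normalize_figshare_url(url: str) -> str:
    """Return a canonical Figshare download URL, or the input unchanged."""
    # strip query params before extracting the file ID
    match = _FIGSHARE_FILE_RE.search(url.split("?")[0])
    if match is None:
        return url  # not a recognized Figshare file URL
    file_id = match.group(1)
    # canonical target format is an assumption for illustration
    return f"https://figshare.com/ndownloader/files/{file_id}"
```

Non-Figshare URLs pass through untouched, so the helper can sit in front of a generic downloader without special-casing callers.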
1 parent 334d6df commit 0edb8a4

28 files changed

Lines changed: 552 additions & 367 deletions

.github/workflows/test.yml

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ jobs:
     uses: janosh/workflows/.github/workflows/pytest.yml@main
     with:
       os: ${{ matrix.os }}
-      python-version: '3.11'
+      python-version: "3.11"
       # TODO remove main branch install of pymatviz after next release
       install-cmd: |
         uv pip install -e .[test,symmetry] --system
@@ -71,7 +71,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v6
         with:
-          python-version: '3.11'
+          python-version: "3.11"

       - name: Install package and dependencies
         run: pip install -e .[plots]

.pre-commit-config.yaml

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@ default_install_hook_types: [pre-commit, commit-msg]

 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.15.7
+    rev: v0.15.10
     hooks:
       - id: ruff-check
         args: [--fix]
@@ -79,7 +79,7 @@ repos:
         exclude: changelog\.md$

   - repo: https://github.com/python-jsonschema/check-jsonschema
-    rev: 0.37.0
+    rev: 0.37.1
     hooks:
       - id: check-jsonschema
         files: ^models/.+/.+\.yml$
@@ -90,7 +90,7 @@ repos:
       - id: check-github-actions

   - repo: https://github.com/crate-ci/typos
-    rev: v1.44.0
+    rev: v1.45.0
     hooks:
       - id: typos
         types: [text]

matbench_discovery/enums.py

Lines changed: 3 additions & 3 deletions
@@ -370,9 +370,9 @@ class Model(Files, base_dir=f"{ROOT}/models"):
     # sevennet_mf_ompa = auto(), "sevennet/sevennet-mf-ompa.yml"
     sevennet_omni_i12 = auto(), "sevennet/sevennet-omni-i12.yml"

-    # Tensor Atomic Cluster Expansion (Irreducible Cartesian tensor)
-    # https://arxiv.org/abs/2509.14961 and https://arxiv.org/abs/2512.16882
-    tace_v1_oam_m = auto(), "tace/tace-v1-oam-m.yml"
+    # Tensor Atomic Cluster Expansion
+    # tace_v1_oam_m = auto(), "tace/tace-v1-oam-m.yml"
+    tace_oam_l = auto(), "tace/tace-oam-l.yml"

     # Magpie composition+Voronoi tessellation structure features + sklearn random forest
     voronoi_rf = auto(), "voronoi_rf/voronoi-rf.yml"

matbench_discovery/hpc.py

Lines changed: 1 addition & 1 deletion
@@ -227,7 +227,7 @@ def chunk_by_lens(
     print(
         f"Split {len(inputs):,} structures into {n_chunks:,} chunks:\n"
         f"Mean sum(len({cls_name})) per chunk: {mean:,.1f} ± {std:,.1f}, "
-        f"min: {chunk_sizes.min():,.0f}, max: {chunk_sizes.max():,.0f}"
+        f"min: {chunk_sizes.min():,.0f}, max: {chunk_sizes.max():,.0f}"  # ty: ignore[invalid-argument-type]
     )

     return chunks
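The f-string touched by this hunk summarizes per-chunk sizes; a minimal sketch of that summary, assuming chunk_sizes is a 1D np.ndarray of per-chunk totals (the sample values here are made up; the `ty: ignore` in the diff silences a type-checker false positive on the `.min()`/`.max()` format arguments):

```python
import numpy as np

# hypothetical per-chunk totals, purely illustrative
chunk_sizes = np.array([120, 98, 134, 101])
mean, std = chunk_sizes.mean(), chunk_sizes.std()

# same format spec as the print statement in the diff above
summary = (
    f"Mean per chunk: {mean:,.1f} ± {std:,.1f}, "
    f"min: {chunk_sizes.min():,.0f}, max: {chunk_sizes.max():,.0f}"
)
```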

matbench_discovery/metrics/diatomics/energy.py

Lines changed: 53 additions & 63 deletions
@@ -24,32 +24,33 @@ def _validate_diatomic_curve(
     Raises:
         ValueError: If input data is invalid
     """
-    xs, ys = map(np.asarray, (xs, ys))
-
-    if len(xs) != len(ys):
-        raise ValueError(f"{len(xs)=} != {len(ys)=}")
-    if len(xs) < 2:
-        raise ValueError(f"Input must have at least 2 points, got {len(xs)=}")
-    n_x_nan, n_y_nan = int(np.isnan(xs).sum()), int(np.isnan(ys).sum())
+    xs_arr: np.ndarray = np.asarray(xs)
+    ys_arr: np.ndarray = np.asarray(ys)
+
+    if len(xs_arr) != len(ys_arr):
+        raise ValueError(f"{len(xs_arr)=} != {len(ys_arr)=}")
+    if len(xs_arr) < 2:
+        raise ValueError(f"Input must have at least 2 points, got {len(xs_arr)=}")
+    n_x_nan, n_y_nan = int(np.isnan(xs_arr).sum()), int(np.isnan(ys_arr).sum())
     if n_x_nan or n_y_nan:
         raise ValueError(f"Input contains NaN values: {n_x_nan=}, {n_y_nan=}")
-    n_x_inf, n_y_inf = int(np.isinf(xs).sum()), int(np.isinf(ys).sum())
+    n_x_inf, n_y_inf = int(np.isinf(xs_arr).sum()), int(np.isinf(ys_arr).sum())
     if n_x_inf or n_y_inf:
         raise ValueError(f"Input contains infinite values: {n_x_inf=}, {n_y_inf=}")
-    if len(np.unique(xs)) != len(xs):
-        n_x_dup = int((np.diff(xs) == 0).sum())
-        raise ValueError(f"xs contains {n_x_dup} duplicates")
+    n_unique = len(np.unique(xs_arr))
+    if n_unique != len(xs_arr):
+        raise ValueError(f"xs contains {len(xs_arr) - n_unique} duplicates")

-    sort_idx = np.argsort(xs)  # ascending order
-    xs = xs[sort_idx]
-    ys = ys[sort_idx]
+    sort_idx = np.argsort(xs_arr)  # ascending order
+    xs_arr = xs_arr[sort_idx]
+    ys_arr = ys_arr[sort_idx]

     # If these are energies (rank 1 array), normalize to zero at far field
-    if normalize_energy and ys.ndim == 1:
+    if normalize_energy and ys_arr.ndim == 1:
         # shift to zero at largest separation (last after ascending sort)
-        ys = ys - ys[-1]
+        ys_arr = ys_arr - ys_arr[-1]

-    return xs, ys
+    return xs_arr, ys_arr


 def calc_curve_diff_auc(
@@ -131,10 +132,11 @@ def calc_curve_diff_auc(
     auc = np.trapezoid(diff, seps_ref)

     if normalize:
-        # Get bounding box area of reference curve on the same domain
+        # Normalize by bounding box of reference curve on the (possibly masked) domain.
+        # When interpolate=True, uses full ref range; when False, uses masked subset.
         seps_span, e_span = np.ptp(seps_ref), np.ptp(e_ref)
         box_area = seps_span * e_span
-        if box_area > 0:  # If reference curve is flat, don't normalize
+        if box_area > 0:
             auc = auc / box_area

     # Ensure AUC is always positive
@@ -198,30 +200,23 @@ def calc_energy_mae(

 def calc_second_deriv_smoothness(seps: ArrayLike, energies: ArrayLike) -> float:
     """Calculate smoothness using RMS of second derivative (lower is smoother)."""
-    seps, energies = map(np.asarray, (seps, energies))
-    sort_idx = np.argsort(seps)[::-1]  # sort in descending order
-    seps = seps[sort_idx]
-    energies = energies[sort_idx]
-    d2y = np.gradient(np.gradient(energies, seps), seps)  # ty: ignore[no-matching-overload]
+    seps_arr, energies_arr = _validate_diatomic_curve(
+        seps, energies, normalize_energy=False
+    )
+    d2y = np.gradient(np.gradient(energies_arr, seps_arr), seps_arr)  # ty: ignore[no-matching-overload]
     return float(np.sqrt(np.mean(d2y**2)))


 def calc_total_variation_smoothness(seps: ArrayLike, energies: ArrayLike) -> float:
     """Calculate smoothness using mean absolute gradient (lower is smoother)."""
     seps, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
-    sort_idx = np.argsort(seps)[::-1]  # sort in descending order
-    seps = seps[sort_idx]
-    energies = energies[sort_idx]
     dy = np.gradient(energies, seps)
     return float(np.log10(np.mean(np.abs(dy))))


 def calc_curvature_smoothness(seps: ArrayLike, energies: ArrayLike) -> float:
     """Calculate smoothness using mean absolute curvature (lower is smoother)."""
     seps, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
-    sort_idx = np.argsort(seps)[::-1]  # sort in descending order
-    seps = seps[sort_idx]
-    energies = energies[sort_idx]
     dy = np.gradient(energies, seps)
     d2y = np.gradient(dy, seps)
     curvature = np.abs(d2y) / (1 + dy**2) ** 1.5
@@ -247,22 +242,34 @@ def calc_tortuosity(seps: ArrayLike, energies: ArrayLike) -> float:
     Returns:
         float: tortuosity value (ratio of total variation to direct energy difference).
     """
-    # Validate and sort with energy normalization
     _, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)

-    # Total variation in energy (sum of absolute differences)
     tv_energy = np.sum(np.abs(np.diff(energies)))
+    e_min = np.min(energies)
+    direct_energy_diff = abs(energies[0] - e_min) + abs(energies[-1] - e_min)

-    # Get minimum energy and endpoint energies
-    e_min = np.min(energies)  # minimum energy (equilibrium point)
-    # energy at largest distance (should be 0 after normalization)
-    e_first = energies[0]
-    e_last = energies[-1]  # energy at shortest distance
+    return float(tv_energy / direct_energy_diff)

-    # Sum of energy differences from minimum to endpoints
-    direct_energy_diff = abs(e_first - e_min) + abs(e_last - e_min)

-    return float(tv_energy / direct_energy_diff)
+def _threshold_diff_signs(
+    vals: np.ndarray, threshold: float = 1e-3
+) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Compute thresholded diffs, their signs, and flip mask for a 1D array.
+
+    Args:
+        vals (np.ndarray): 1D array of values (energies or forces).
+        threshold (float): Diffs below this magnitude are zeroed. Defaults to 1e-3.
+
+    Returns:
+        tuple: (thresholded diffs with zeros removed, their signs, boolean flip mask)
+    """
+    diffs = np.diff(vals)
+    diffs[np.abs(diffs) < threshold] = 0
+    signs = np.sign(diffs)
+    mask = signs != 0
+    diffs, signs = diffs[mask], signs[mask]
+    flips = np.diff(signs) != 0
+    return diffs, signs, flips


 def calc_energy_diff_flips(seps: ArrayLike, energies: ArrayLike) -> float:
@@ -275,14 +282,9 @@ def calc_energy_diff_flips(seps: ArrayLike, energies: ArrayLike) -> float:
     Returns:
         float: Number of energy difference sign flips.
     """
-    seps, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
-
-    ediff = np.diff(energies)
-    ediff[np.abs(ediff) < 1e-3] = 0  # 1meV threshold
-    ediff_sign = np.sign(ediff)
-    mask = ediff_sign != 0
-    ediff_sign = ediff_sign[mask]
-    return float(np.sum(np.diff(ediff_sign) != 0))
+    _, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
+    _, _, flips = _threshold_diff_signs(energies)
+    return float(np.sum(flips))


 def calc_energy_grad_norm_max(seps: ArrayLike, energies: ArrayLike) -> float:
@@ -310,18 +312,6 @@ def calc_energy_jump(seps: ArrayLike, energies: ArrayLike) -> float:
     Returns:
         float: Sum of absolute energy differences at flip points.
     """
-    seps, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
-
-    e_diff = np.diff(energies)
-    e_diff[np.abs(e_diff) < 1e-3] = 0  # 1meV threshold
-    e_diff_sign = np.sign(e_diff)
-    mask = e_diff_sign != 0
-    e_diff = e_diff[mask]
-    e_diff_sign = e_diff_sign[mask]
-    e_diff_flip = np.diff(e_diff_sign) != 0
-
-    e_jump = (
-        np.abs(e_diff[:-1][e_diff_flip]).sum() + np.abs(e_diff[1:][e_diff_flip]).sum()
-    )
-
-    return float(e_jump)
+    _, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
+    diffs, _, flips = _threshold_diff_signs(energies)
+    return float(np.abs(diffs[:-1][flips]).sum() + np.abs(diffs[1:][flips]).sum())
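The new `_threshold_diff_signs` helper extracted in this diff drives both flip counting and jump magnitude. A standalone copy of its body (reproduced from the diff, with illustrative input data) shows how the two metrics consume its output:

```python
import numpy as np


def _threshold_diff_signs(vals, threshold=1e-3):
    """Thresholded diffs, their signs, and a flip mask (copied from the diff above)."""
    diffs = np.diff(np.asarray(vals, dtype=float))
    diffs[np.abs(diffs) < threshold] = 0  # zero out sub-threshold noise (1 meV default)
    signs = np.sign(diffs)
    mask = signs != 0
    diffs, signs = diffs[mask], signs[mask]  # drop zeroed diffs entirely
    flips = np.diff(signs) != 0  # True where consecutive diffs change sign
    return diffs, signs, flips


# energy curve that dips to a minimum then rises: exactly one sign flip
energies = np.array([0.5, 0.1, -0.2, -0.1, 0.0])
diffs, signs, flips = _threshold_diff_signs(energies)

# calc_energy_diff_flips counts the flips
n_flips = float(np.sum(flips))
# calc_energy_jump sums |diff| on both sides of each flip
jump = float(np.abs(diffs[:-1][flips]).sum() + np.abs(diffs[1:][flips]).sum())
```

Here diffs is [-0.4, -0.3, 0.1, 0.1], so there is one flip (between -0.3 and 0.1) and the jump is 0.3 + 0.1 = 0.4.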

matbench_discovery/metrics/diatomics/force.py

Lines changed: 11 additions & 28 deletions
@@ -3,7 +3,10 @@

 import numpy as np
 from numpy.typing import ArrayLike

-from matbench_discovery.metrics.diatomics.energy import _validate_diatomic_curve
+from matbench_discovery.metrics.diatomics.energy import (
+    _threshold_diff_signs,
+    _validate_diatomic_curve,
+)


 def calc_force_mae(
@@ -83,19 +86,12 @@ def calc_force_flips(
     Returns:
         float: Number of force direction changes.
     """
-    # Sort by separations in descending order
     _, forces = _validate_diatomic_curve(seps, forces, normalize_energy=False)

-    fs = forces[:, 0, 0]
-
-    rounded_fs = np.copy(fs)
-    rounded_fs[np.abs(rounded_fs) < threshold] = 0
-    fs_sign = np.sign(rounded_fs)
-    mask = fs_sign != 0
-    rounded_fs = rounded_fs[mask]
-    fs_sign = fs_sign[mask]
-    f_flip = np.diff(fs_sign) != 0
-    return float(np.sum(f_flip))
+    fs = forces[:, 0, 0].copy()
+    fs[np.abs(fs) < threshold] = 0
+    fs_sign = np.sign(fs[fs != 0])
+    return float(np.sum(np.diff(fs_sign) != 0))


 def calc_force_total_variation(seps: ArrayLike, forces: np.ndarray) -> float:
@@ -124,19 +120,8 @@ def calc_force_jump(seps: ArrayLike, forces: np.ndarray) -> float:
     float: Sum of absolute force differences at flip points.
     """
     _, forces = _validate_diatomic_curve(seps, forces, normalize_energy=False)
-    forces_x = forces[:, 0, 0]  # x-component of force on first atom
-
-    f_diff = np.diff(forces_x)
-    f_diff_sign = np.sign(f_diff)
-    mask = f_diff_sign != 0
-    f_diff = f_diff[mask]
-    f_diff_sign = f_diff_sign[mask]
-    f_diff_flip = np.diff(f_diff_sign) != 0
-
-    force_jumps = (
-        np.abs(f_diff[:-1][f_diff_flip]).sum() + np.abs(f_diff[1:][f_diff_flip]).sum()
-    )
-    return float(force_jumps)
+    diffs, _, flips = _threshold_diff_signs(forces[:, 0, 0], threshold=0)
+    return float(np.abs(diffs[:-1][flips]).sum() + np.abs(diffs[1:][flips]).sum())


 def calc_conservation_deviation(
@@ -160,9 +145,7 @@ def calc_conservation_deviation(
     Returns:
         float: Mean absolute deviation between forces and -dE/dr.
     """
-    _sorted_seps, energies = _validate_diatomic_curve(
-        seps, energies, normalize_energy=False
-    )
+    _, energies = _validate_diatomic_curve(seps, energies, normalize_energy=False)
     seps, forces = _validate_diatomic_curve(seps, forces, normalize_energy=False)

     if interpolate:
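The simplified calc_force_flips above can be exercised in isolation. This is a minimal sketch (count_force_flips is a hypothetical standalone name; the body mirrors the post-refactor logic in the diff), operating on a forces array of shape (n_seps, n_atoms, 3) where only the x-component of the first atom is inspected, as in the source:

```python
import numpy as np


def count_force_flips(forces: np.ndarray, threshold: float = 1e-3) -> float:
    """Count sign changes in the first atom's x-force (mirrors calc_force_flips)."""
    fs = forces[:, 0, 0].copy()
    fs[np.abs(fs) < threshold] = 0  # treat near-zero forces as exactly zero
    fs_sign = np.sign(fs[fs != 0])  # drop zeros before looking for sign changes
    return float(np.sum(np.diff(fs_sign) != 0))


# repulsive-to-attractive crossover: the force changes sign once; the
# near-zero value at the crossover is thresholded out rather than counted
forces = np.zeros((5, 2, 3))
forces[:, 0, 0] = [2.0, 0.5, 1e-6, -0.5, -1.0]
n_flips = count_force_flips(forces)
```

Thresholding before taking signs is what makes the metric robust: without it, numerical noise hovering around zero would register as a burst of spurious flips.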

matbench_discovery/phonons/thermal_conductivity.py

Lines changed: 15 additions & 14 deletions
@@ -24,7 +24,7 @@
 from matbench_discovery.enums import MbdKey

 if TYPE_CHECKING:
-    from phono3py.conductivity.wigner_rta import ConductivityWignerRTA
+    from phono3py.conductivity.calculators import RTACalculator


 def calculate_fc2_set(
@@ -221,7 +221,7 @@ def calculate_conductivity(
     boundary_mfp: float = 1e6,
     mode_kappa_thresh: float = 1e-6,
     **kwargs: Any,
-) -> tuple[Phono3py, dict[str, np.ndarray], "ConductivityWignerRTA"]:
+) -> tuple[Phono3py, dict[str, np.ndarray], "RTACalculator"]:
     """Calculate thermal conductivity.

     Args:
@@ -234,7 +234,7 @@ def calculate_conductivity(
         **kwargs (Any): Passed to Phono3py.run_thermal_conductivity().

     Returns:
-        tuple[Phono3py, dict[str, np.ndarray], ConductivityWignerRTA]: (Phono3py object,
+        tuple[Phono3py, dict[str, np.ndarray], RTACalculator]: (Phono3py object,
         conductivity dict, conductivity object)
     """
     ph3.init_phph_interaction(symmetrize_fc3q=False)
@@ -243,37 +243,38 @@ def calculate_conductivity(
         **kwargs,
         temperatures=temperatures,
         is_isotope=True,
-        # use type="wigner" to include both wave-like coherence (kappa_c) and
-        # particle-like (kappa_p) conductivity contributions
-        conductivity_type="wigner",
+        # use MS-SMM19 (Wigner transport equation) to include both wave-like
+        # coherence (kappa_c) and particle-like (kappa_p) conductivity contributions
+        transport_type="MS-SMM19",
         boundary_mfp=boundary_mfp,
     )

     kappa = ph3.thermal_conductivity
+    extra = kappa.get_extra_kappa_output()

     kappa_dict = {
-        MbdKey.kappa_tot_rta: deepcopy(kappa.kappa_TOT_RTA[0]),
-        MbdKey.kappa_p_rta: deepcopy(kappa.kappa_P_RTA[0]),
-        MbdKey.kappa_c: deepcopy(kappa.kappa_C[0]),
+        MbdKey.kappa_tot_rta: deepcopy(extra["kappa_TOT_RTA"][0]),
+        MbdKey.kappa_p_rta: deepcopy(extra["kappa_P_RTA"][0]),
+        MbdKey.kappa_c: deepcopy(extra["kappa_C"][0]),
         Key.mode_weights: deepcopy(kappa.grid_weights),
         Key.q_points: deepcopy(kappa.qpoints),
         Key.ph_freqs: deepcopy(kappa.frequencies),
     }
     mode_kappa_total = kappa_dict[MbdKey.mode_kappa_tot_rta] = calc_mode_kappa_tot(
-        deepcopy(kappa.mode_kappa_P_RTA[0]),
-        deepcopy(kappa.mode_kappa_C[0]),
+        deepcopy(extra["mode_kappa_P_RTA"][0]),
+        deepcopy(extra["mode_kappa_C"][0]),
         deepcopy(kappa.mode_heat_capacities),
     )

     sum_mode_kappa_tot = mode_kappa_total.sum(
         axis=tuple(range(1, mode_kappa_total.ndim - 1))
     ) / np.sum(kappa_dict[Key.mode_weights])

-    kappa_p_rta = kappa_dict[MbdKey.kappa_p_rta]
-    if np.any(np.abs(sum_mode_kappa_tot - kappa_p_rta) > mode_kappa_thresh):
+    kappa_tot_rta = kappa_dict[MbdKey.kappa_tot_rta]
+    if np.any(np.abs(sum_mode_kappa_tot - kappa_tot_rta) > mode_kappa_thresh):
         warnings.warn(
             f"Total mode kappa does not sum to total kappa. {sum_mode_kappa_tot=}, "
-            f"{kappa_p_rta=}",
+            f"{kappa_tot_rta=}",
             stacklevel=2,
         )
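The consistency check corrected in this hunk reduces to pure array arithmetic: the mode-resolved kappa, summed over all axes except temperature and the 6 tensor components and divided by the total grid weight, should reproduce kappa_TOT_RTA (not kappa_P_RTA, which omits the coherence term kappa_C). A numpy-only sketch with no phono3py dependency; the array shapes and random data are illustrative assumptions:

```python
import numpy as np

# illustrative shapes: (n_temps, n_qpoints, n_bands, 6 tensor components)
rng = np.random.default_rng(0)
n_temps, n_qpoints, n_bands = 2, 4, 3
mode_weights = np.ones(n_qpoints)
mode_kappa_tot = rng.random((n_temps, n_qpoints, n_bands, 6))

# collapse every axis except temperature (first) and tensor components (last),
# exactly as in the diff: axis=tuple(range(1, ndim - 1))
sum_mode_kappa_tot = mode_kappa_tot.sum(
    axis=tuple(range(1, mode_kappa_tot.ndim - 1))
) / np.sum(mode_weights)

# consistent by construction here; real code compares against extra["kappa_TOT_RTA"]
kappa_tot_rta = sum_mode_kappa_tot.copy()
mode_kappa_thresh = 1e-6
is_consistent = not np.any(
    np.abs(sum_mode_kappa_tot - kappa_tot_rta) > mode_kappa_thresh
)
```

Comparing against kappa_P_RTA instead, as the pre-fix code did, would spuriously warn whenever the wave-like coherence contribution kappa_C is non-negligible.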
