
Commit 280a61a

committed
More unit tests
1 parent bf95904 commit 280a61a

23 files changed

Lines changed: 286 additions & 60 deletions

Cargo.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [package]
 name = "happylock"
-version = "0.4.1"
+version = "0.4.2"
 authors = ["Mica White <botahamec@outlook.com>"]
 edition = "2021"
 rust-version = "1.82"

happylock.md

Lines changed: 152 additions & 53 deletions
@@ -2,6 +2,7 @@
 marp: true
 theme: gaia
 class: invert
+author: Mica White
 ---
 
 <!-- _class: lead invert -->
@@ -112,7 +113,7 @@ use happylock::{ThreadKey, Mutex};
 
 fn main() {
 	// each thread can only have one thread key (that's why we unwrap)
-	// ThreadKey is not Send, Sync, Copy, or Clone
+	// ThreadKey is not Send, Copy, or Clone
 	let key = ThreadKey::get().unwrap();
 
 	let mutex = Mutex::new(10);
@@ -153,6 +154,8 @@ fn main() {
 }
 ```
 
+This `LockCollection` can be implemented simply by releasing the currently acquired locks and retrying on failure
+
 ---
 
 ## The Lockable API
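The release-and-retry strategy mentioned in the slide above can be sketched with plain `std` mutexes (a minimal illustration of my own; `lock_both` is not HappyLock's actual `LockCollection` implementation):

```rust
use std::sync::{Mutex, MutexGuard};

// Try to lock both mutexes; on failure, release everything and retry.
// We never hold one lock while *blocking* on another, so circular wait
// (and therefore deadlock) cannot occur.
fn lock_both<'a, T>(a: &'a Mutex<T>, b: &'a Mutex<T>) -> (MutexGuard<'a, T>, MutexGuard<'a, T>) {
	loop {
		// block on the first lock
		let ga = a.lock().unwrap();
		// only *try* the second; if it is busy, drop the first and start over
		match b.try_lock() {
			Ok(gb) => return (ga, gb),
			Err(_) => drop(ga), // release and retry
		}
	}
}

fn main() {
	let (m1, m2) = (Mutex::new(1), Mutex::new(2));
	let (g1, g2) = lock_both(&m1, &m2);
	assert_eq!((*g1, *g2), (1, 2));
}
```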
@@ -266,7 +269,17 @@ Time Complexity: O(nlogn)
 
 ## Problem: Live-locking
 
-Although this library is able to successfully prevent deadlocks, livelocks may still be an issue. Imagine thread 1 gets resource 1, thread 2 gets resource 2, thread 1 realizes it can't get resource 2, thread 2 realizes it can't get resource 1, thread 1 drops resource 1, thread 2 drops resource 2, and then repeat forever. In practice, this situation probably wouldn't last forever. But it would be nice if this could be prevented somehow.
+Although this library is able to successfully prevent deadlocks, livelocks may still be an issue.
+
+1. Thread 1 locks mutex 1
+2. Thread 2 locks mutex 2
+3. Thread 1 tries to lock mutex 2 and fails
+4. Thread 2 tries to lock mutex 1 and fails
+5. Thread 1 releases mutex 1
+6. Thread 2 releases mutex 2
+7. Repeat
+
+This pattern will probably end eventually, but we should really avoid it, for performance reasons.
 
 ---
 
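One standard way to avoid this livelock, consistent with the O(nlogn) sorting the deck mentions, is to acquire locks in a single global order, e.g. sorted by address. A sketch with `std` mutexes (`lock_in_order` is a hypothetical helper, not HappyLock's API):

```rust
use std::sync::{Mutex, MutexGuard};

// Acquiring locks in a consistent global order (here, by memory address)
// means no two threads can hold-and-wait in opposite orders, so the
// release-and-retry dance never starts.
fn lock_in_order<'a, T>(locks: &mut [&'a Mutex<T>]) -> Vec<MutexGuard<'a, T>> {
	// sort by address to get a total order shared by every thread
	locks.sort_by_key(|m| *m as *const Mutex<T> as usize);
	locks.iter().map(|m| m.lock().unwrap()).collect()
}

fn main() {
	let (m1, m2) = (Mutex::new("a"), Mutex::new("b"));
	// regardless of the order passed in, acquisition order is the same
	let guards = lock_in_order(&mut [&m2, &m1]);
	assert_eq!(guards.len(), 2);
}
```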
@@ -384,8 +397,8 @@ This is what we were trying to avoid earlier
 This is what I used in HappyLock 0.1:
 
 ```rust
-struct ReadLock<'a, T(&'a RwLock<T>);
-struct WriteLock<'a, T(&'a RwLock<T>);
+struct ReadLock<'a, T>(&'a RwLock<T>);
+struct WriteLock<'a, T>(&'a RwLock<T>);
 ```
 
 **Problem:** This can't be used inside of an `OwnedLockCollection`
@@ -413,7 +426,7 @@ unsafe trait Lockable {
 
 ---
 
-## Not every lock can be read doe
+## Not every lock can be read tho
 
 ```rust
 // This trait is used to indicate that reading is actually useful
@@ -432,25 +445,6 @@ impl<L: Sharable> OwnedLockable<L> {
 
 ---
 
-## Missing Features
-
-- `Condvar`/`Barrier`
-- We probably don't need `OnceLock` or `LazyLock`
-- Standard Library Backend
-- Mutex poisoning
-- Support for `no_std`
-- Convenience methods: `lock_swap`, `lock_set`?
-- `try_lock_swap` doesn't need a `ThreadKey`
-- Going further: `LockCell` API (preemptive allocation)
-
----
-
-<!--_class: invert lead -->
-
-## What's next?
-
----
-
 ## Poisoning
 
 ```rust
@@ -473,50 +467,55 @@ Allows: `Poisonable<LockCollection>` and `LockCollection<Poisonable>`
 
 ---
 
-## OS Locks
+# `LockableGetMut` and `LockableIntoInner`
 
-- Using `parking_lot` makes the binary size much larger
-- Unfortunately, it's impossible to implement `RawLock` on the standard library lock primitives
-- Creating a new crate based on a fork of the standard library is hard
-- Solution: create a new library (`sys_locks`), which exposes raw locks from the operating system
-- This is more complicated than you might think
-
----
+```rust
+fn Mutex::<T>::get_mut(&mut self) -> &mut T // already exists in std
+// this is safe because a mutable reference means nobody else can access the lock
 
-## Expanding Cyclic Wait
+trait LockableGetMut: Lockable {
+	type Inner<'a>;
 
-> ... sometimes you need to lock an object to read its value and determine what should be locked next... is there a way to address it?
+	fn get_mut(&mut self) -> Self::Inner<'_>
+}
 
-```rust
-let guard = m1.lock(key);
-if *guard == true {
-	let key = Mutex::unlock(m);
-	let data = [&m1, &m2];
-	let collection = LockCollection::try_new(data).unwrap();
-	let guard = collection.lock(key);
+impl<A: LockableGetMut, B: LockableGetMut> LockableGetMut for (A, B) {
+	type Inner = (A::Inner<'a>, B::Inner<'b>);
 
-	// m1 might no longer be true here...
+	fn get_mut(&mut self) -> Self::Inner<'_> {
+		(self.0.get_mut(), self.1.get_mut())
+	}
 }
 ```
 
 ---
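The `get_mut` idea from the slide can be made into a compilable sketch over plain `std` mutexes (the `LockableGetMut` trait shape here is paraphrased from the slide for illustration, not HappyLock's real API):

```rust
use std::sync::Mutex;

// With `&mut` access to a lock, no other thread can be holding it,
// so we can hand out a plain `&mut T` without locking anything.
trait LockableGetMut {
	type Inner<'a> where Self: 'a;
	fn get_mut(&mut self) -> Self::Inner<'_>;
}

impl<T> LockableGetMut for Mutex<T> {
	type Inner<'a> = &'a mut T where Self: 'a;
	fn get_mut(&mut self) -> Self::Inner<'_> {
		// std's Mutex::get_mut: no locking needed under exclusive access
		Mutex::get_mut(self).unwrap()
	}
}

// Tuples compose: getting `&mut` to each field borrows disjoint parts of self
impl<A: LockableGetMut, B: LockableGetMut> LockableGetMut for (A, B) {
	type Inner<'a> = (A::Inner<'a>, B::Inner<'a>) where Self: 'a;
	fn get_mut(&mut self) -> Self::Inner<'_> {
		(self.0.get_mut(), self.1.get_mut())
	}
}

fn main() {
	let mut pair = (Mutex::new(1), Mutex::new(2));
	let (a, b) = pair.get_mut();
	*a += 10;
	assert_eq!((*a, *b), (11, 2));
}
```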
 
-## What I Really Want
+## Missing Features
 
-```txt
-ordered locks: m1, m2, m3
+- `Condvar`/`Barrier`
+- `OnceLock` or `LazyLock`
+- Standard Library Backend
+- Support for `no_std`
+- Convenience methods: `lock_swap`, `lock_set`?
+- `try_lock_swap` doesn't need a `ThreadKey`
+- Going further: `LockCell` API (preemptive allocation)
 
-if m1 is true
-	lock m2 and keep m1 locked
-else
-	skip m2 and lock m3
-```
+---
 
-We can specify lock orders using `OwnedLockCollection`
+<!--_class: invert lead -->
+
+## What's next?
+
+---
 
-Then we need an iterator over the collection to keep that ordering
 
-This will be hard to do with tuples (but might not be impossible)
+## OS Locks
+
+- Using `parking_lot` makes the binary size much larger
+- Unfortunately, it's impossible to implement `RawLock` on the standard library lock primitives
+- Creating a new crate based on a fork of the standard library is hard
+- Solution: create a new library (`sys_locks`), which exposes raw locks from the operating system
+- This is more complicated than you might think
 
 ---
 
@@ -617,6 +616,106 @@ A `Readonly` collection cannot be exclusively locked.
 - LazyLock and OnceLock
 - can these even deadlock?
 
+---
+## Expanding Cyclic Wait
+
+> ... sometimes you need to lock an object to read its value and determine what should be locked next... is there a way to address it?
+
+```rust
+let guard = m1.lock(key);
+if *guard == true {
+	let key = Mutex::unlock(m);
+	let data = [&m1, &m2];
+	let collection = LockCollection::try_new(data).unwrap();
+	let guard = collection.lock(key);
+
+	// m1 might no longer be true here...
+}
+```
+
+---
+
+## What I Really Want
+
+```txt
+ordered locks: m1, m2, m3
+
+if m1 is true
+	lock m2 and keep m1 locked
+else
+	skip m2 and lock m3
+```
+
+We can specify lock orders using `OwnedLockCollection`
+
+Then we need an iterator over the collection to keep that ordering
+
+This will be hard to do with tuples (but is not impossible)
+
+---
+
+## Something like this
+
+```rust
+let key = ThreadKey::get().unwrap();
+let collection: OwnedLockCollection<(Vec<i32>, Vec<String>)>;
+let iterator: LockIterator<(Vec<i32>, Vec<String>)> = collection.locking_iter(key);
+let (guard, next): (_, LockIterator<Vec<String>>) = iterator.next();
+
+unsafe trait IntoLockIterator: Lockable {
+	type Next: Lockable;
+	type Rest;
+
+	unsafe fn next(&self) -> Self::Next; // must be called before `rest`
+	fn rest(&self) -> Self::Rest;
+}
+
+unsafe impl<A: Lockable, B: Lockable> IntoLockIterator for (A, B) {
+	type Next = A;
+	type Rest = B;
+
+	unsafe fn next(&self) -> Self::Next { self.0 }
+
+	fn rest(&self) -> Self::Rest { self.1 }
+}
+```
+
+---
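The head/rest decomposition that `IntoLockIterator` sketches can be demonstrated with a safe toy version (my own names; no locking or `unsafe`, just the tuple plumbing):

```rust
// Toy version of the slide's head/rest decomposition: each step peels off
// the first element and returns the remainder, which can be peeled again.
trait SplitFirst {
	type Next;
	type Rest;
	fn split_first(self) -> (Self::Next, Self::Rest);
}

impl<A, B> SplitFirst for (A, B) {
	type Next = A;
	type Rest = B;
	fn split_first(self) -> (A, B) {
		(self.0, self.1)
	}
}

impl<A, B, C> SplitFirst for (A, B, C) {
	type Next = A;
	// nesting the rest as a pair lets it be split again
	type Rest = (B, C);
	fn split_first(self) -> (A, (B, C)) {
		(self.0, (self.1, self.2))
	}
}

fn main() {
	let items = (1i32, "two", 3.0f64);
	let (first, rest) = items.split_first();
	let (second, third) = rest.split_first();
	assert_eq!(first, 1);
	assert_eq!(second, "two");
	assert_eq!(third, 3.0);
}
```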
+
+## Here are the helper functions we'll need
+
+```rust
+struct LockIterator<Current: IntoLockIterator, Rest: IntoLockIterator = ()>;
+
+impl<Current, Rest> LockIterator<Current, Rest> {
+	// locks the next item and moves on
+	fn next(self) -> (Current::Next::Guard, LockIterator<Current::Rest>);
+
+	// moves on without locking anything
+	fn skip(self) -> LockIterator<Current::Rest>;
+
+	// steps into the next item, allowing parts of it to be locked
+	// For example, if I have LockIterator<(Vec<String>, Vec<i32>)>, but only
+	// want to lock parts of the first Vec, then I can step into it,
+	// locking what I need to, and then exit.
+	// This is the first use of LockIterator's second generic parameter
+	fn step_into(self) -> LockIterator<Current::Next, Current::Rest>;
+
+	// Once I'm done with my step_into, I can leave and move on
+	fn exit(self) -> LockIterator<Rest>;
+}
+```
+
+---
+
+## A Quick Problem with this Approach
+
+We're going to be returning a lot of guards.
+
+The `ThreadKey` is held by the `LockIterator`.
+
+**How do we ensure that the `ThreadKey` is not used again until all of the guards are dropped?**
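One possible answer (my speculation, not something this commit decides): make every guard hold a shared borrow of the `LockIterator`, so the key inside it cannot be reclaimed until the borrow checker sees all guards gone. A toy shape:

```rust
// Toy demonstration: each "guard" only records a borrow of the iterator.
// Because reclaiming the key consumes the iterator by value, the borrow
// checker forbids it while any guard is still alive.
struct LockIterator {
	key: u32, // stand-in for a ThreadKey
}

struct Guard<'iter> {
	_iter: &'iter LockIterator,
}

impl LockIterator {
	fn next(&self) -> Guard<'_> {
		Guard { _iter: self }
	}

	// consuming `self` returns the key; this cannot compile while any
	// Guard<'_> borrowed from this iterator is still live
	fn into_key(self) -> u32 {
		self.key
	}
}

fn main() {
	let iter = LockIterator { key: 7 };
	let g1 = iter.next();
	let g2 = iter.next();
	drop((g1, g2)); // all guards gone, borrows end
	assert_eq!(iter.into_key(), 7); // now allowed
}
```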
+
 ---
 
 <!--_class: invert lead -->

package-lock.json

Lines changed: 6 additions & 0 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+{}

src/collection/boxed.rs

Lines changed: 1 addition & 0 deletions
@@ -154,6 +154,7 @@ impl<T, L: AsRef<T>> AsRef<T> for BoxedLockCollection<L> {
 	}
 }
 
+#[mutants::skip]
 impl<L: Debug> Debug for BoxedLockCollection<L> {
 	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
 		f.debug_struct(stringify!(BoxedLockCollection))

src/collection/guard.rs

Lines changed: 6 additions & 0 deletions
@@ -6,32 +6,38 @@ use crate::key::Keyable;
 
 use super::LockGuard;
 
+#[mutants::skip] // it's hard to get two guards safely
 impl<Guard: PartialEq, Key: Keyable> PartialEq for LockGuard<'_, Guard, Key> {
 	fn eq(&self, other: &Self) -> bool {
 		self.guard.eq(&other.guard)
 	}
 }
 
+#[mutants::skip] // it's hard to get two guards safely
 impl<Guard: PartialOrd, Key: Keyable> PartialOrd for LockGuard<'_, Guard, Key> {
 	fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
 		self.guard.partial_cmp(&other.guard)
 	}
 }
 
+#[mutants::skip] // it's hard to get two guards safely
 impl<Guard: Eq, Key: Keyable> Eq for LockGuard<'_, Guard, Key> {}
 
+#[mutants::skip] // it's hard to get two guards safely
 impl<Guard: Ord, Key: Keyable> Ord for LockGuard<'_, Guard, Key> {
 	fn cmp(&self, other: &Self) -> std::cmp::Ordering {
 		self.guard.cmp(&other.guard)
 	}
 }
 
+#[mutants::skip] // hashing involves RNG and is hard to test
 impl<Guard: Hash, Key: Keyable> Hash for LockGuard<'_, Guard, Key> {
 	fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
 		self.guard.hash(state)
 	}
 }
 
+#[mutants::skip]
 impl<Guard: Debug, Key: Keyable> Debug for LockGuard<'_, Guard, Key> {
 	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
 		Debug::fmt(&**self, f)

src/collection/owned.rs

Lines changed: 2 additions & 0 deletions
@@ -7,6 +7,7 @@ use crate::Keyable;
 
 use super::{utils, LockGuard, OwnedLockCollection};
 
+#[mutants::skip] // it's hard to test individual locks in an OwnedLockCollection
 fn get_locks<L: Lockable>(data: &L) -> Vec<&dyn RawLock> {
 	let mut locks = Vec::new();
 	data.get_ptrs(&mut locks);
@@ -61,6 +62,7 @@ unsafe impl<L: Lockable> Lockable for OwnedLockCollection<L> {
 	where
 		Self: 'g;
 
+	#[mutants::skip] // It's hard to test locks in an OwnedLockCollection, because they're owned
 	fn get_ptrs<'a>(&'a self, ptrs: &mut Vec<&'a dyn RawLock>) {
 		self.data.get_ptrs(ptrs)
 	}

src/collection/ref.rs

Lines changed: 1 addition & 0 deletions
@@ -108,6 +108,7 @@ impl<T, L: AsRef<T>> AsRef<T> for RefLockCollection<'_, L> {
 	}
 }
 
+#[mutants::skip]
 impl<L: Debug> Debug for RefLockCollection<'_, L> {
 	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
 		f.debug_struct(stringify!(RefLockCollection))

src/key.rs

Lines changed: 1 addition & 0 deletions
@@ -43,6 +43,7 @@ unsafe impl Keyable for &mut ThreadKey {}
 // Safety: a &ThreadKey is useless by design.
 unsafe impl Sync for ThreadKey {}
 
+#[mutants::skip]
 impl Debug for ThreadKey {
 	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
 		write!(f, "ThreadKey")
