Add boot_time cache to improve performance #64

Merged
merged 1 commit into eminence:master on Jan 25, 2020

Conversation

@dalance (Contributor) commented on Jan 23, 2020

This PR improves the performance of Stat::starttime().
Currently starttime() is slow because it calls boot_time() internally on every invocation.
Since the system boot time does not change while a program is running, the result of boot_time() can be cached.
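
A minimal sketch of the caching idea (the identifiers below, such as BOOT_TIME_CACHE, read_boot_time_secs, and cached_boot_time_secs, are illustrative rather than the PR's actual code; it assumes a thread-local RefCell cache, the approach discussed in the review below, and uses the btime field of /proc/stat as a stand-in for the real boot_time() lookup):

use std::cell::RefCell;

thread_local! {
    // Per-thread cache; None until the first call computes the boot time.
    static BOOT_TIME_CACHE: RefCell<Option<u64>> = RefCell::new(None);
}

// Illustrative stand-in for the real boot_time() lookup: reads the btime
// field (boot time in seconds since the epoch) from /proc/stat.
fn read_boot_time_secs() -> u64 {
    let stat = std::fs::read_to_string("/proc/stat").expect("read /proc/stat");
    stat.lines()
        .find_map(|line| line.strip_prefix("btime "))
        .and_then(|v| v.trim().parse().ok())
        .expect("btime field in /proc/stat")
}

// Subsequent calls on the same thread return the cached value instead of
// re-reading /proc/stat.
fn cached_boot_time_secs() -> u64 {
    BOOT_TIME_CACHE.with(|cell| *cell.borrow_mut().get_or_insert_with(read_boot_time_secs))
}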

I tried the following benchmark:

// Runs inside the crate's test module; needs the nightly `test` feature
// (#![feature(test)] and `extern crate test`) and `cargo +nightly bench`.
#[bench]
fn bench_starttime(b: &mut test::Bencher) {
    b.iter(|| {
        // Call starttime() for every process; on current master each call
        // re-reads the boot time.
        for p in process::all_processes().unwrap() {
            let _ = p.stat.starttime();
        }
    });
}

The results are below:

  • Current master
test tests::bench_starttime ... bench: 695,472,752 ns/iter (+/- 35,133,377)
  • This PR applied
test tests::bench_starttime ... bench:   7,444,747 ns/iter (+/- 1,975,527)

@dalance requested a review from eminence on January 23, 2020, 12:28
@eminence self-assigned this on Jan 24, 2020
@eminence (Owner) commented:

On my system, the current master branch is already about an order of magnitude faster than on yours, which is interesting:

test tests::bench_starttime ... bench:  36,540,102 ns/iter (+/- 2,651,593)

And with this patch applied, it's quite a bit faster:

test tests::bench_starttime ... bench:   8,893,658 ns/iter (+/- 917,462)  

Looks like a win!

By my analysis, the call to borrow_mut will never panic: the cache is a thread-local, so only its owning thread can ever touch it (and RefCell is !Sync, so it cannot be shared between threads anyway), which means it can never be borrowed simultaneously.
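
A small, editorial demonstration of that reasoning (not code from the PR): with thread_local!, each thread sees its own independent cell, so a borrow_mut() taken on one thread can never observe, or conflict with, a borrow taken on another.

use std::cell::RefCell;
use std::thread;

thread_local! {
    // Same pattern as the cache sketch above: each thread gets its own cell.
    static CACHE: RefCell<Option<u64>> = RefCell::new(None);
}

fn main() {
    let handle = thread::spawn(|| {
        // This thread sees its own, independent CACHE; borrows taken here
        // cannot conflict with borrows taken on the main thread.
        CACHE.with(|cell| *cell.borrow_mut() = Some(1));
    });
    CACHE.with(|cell| *cell.borrow_mut() = Some(2));
    handle.join().unwrap();

    // Each thread only ever touched its own cell; the main thread still sees 2.
    CACHE.with(|cell| assert_eq!(*cell.borrow(), Some(2)));
}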

@eminence merged commit 7f9a8d6 into eminence:master on Jan 25, 2020