
Optimize Parallel Version of Interpolate #227

Open
aszepieniec opened this issue Aug 2, 2024 · 1 comment


@aszepieniec
Collaborator

The function par_interpolate exhibits counterintuitive behavior for small domain sizes: it is faster when (some of) its subroutines run sequentially.

There is a lot of potential for optimization here. In general it is fine to rely on dispatcher methods that choose the asymptotically or concretely superior algorithm depending on some threshold, but on parallel hardware we ideally want hardcoded thresholds to be independent of the number of cores/threads. It is allowable to call available_parallelism and base the decision on that. This task involves finding the optimal cascade of specialized functions and the optimal dispatch criteria.
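As a rough sketch of what such a dispatcher could look like (the function names and the threshold formula below are invented for illustration; only the use of available_parallelism is taken from the discussion above):

```rust
use std::thread::available_parallelism;

// Hypothetical dispatch criterion. The cutoff may consult the runtime
// core count, but must not be hand-tuned to any one machine.
fn dispatch_threshold() -> usize {
    let cores = available_parallelism().map(|n| n.get()).unwrap_or(1);
    // Placeholder formula; finding the real criterion is the point of this issue.
    if cores >= 8 { 1 << 10 } else { 1 << 12 }
}

// Route small domains to the sequential routine, large ones to the
// parallel routine.
fn choose_interpolation(domain_len: usize) -> &'static str {
    if domain_len < dispatch_threshold() {
        "sequential"
    } else {
        "parallel"
    }
}
```

The benchmarking task is then to replace the placeholder formula with a criterion that holds across core counts.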

Sword-Smith added a commit that referenced this issue Aug 2, 2024
Adjust benchmark size to reveal the asymptotic benefit of using
par_batch_evaluate over naive parallelization over the domain.

See
#227

Co-authored-by: Alan Szepieniec <[email protected]>
@Sword-Smith
Member

Sword-Smith commented Aug 2, 2024

It seems that par_interpolate can be made faster for some domain sizes, e.g. $2^{10}$, if evaluation uses "naive parallelism", i.e.:

impl<FF: FiniteField> Polynomial<FF> {
    // Evaluate at each domain point independently, using rayon's par_iter.
    fn parallel_naive_evaluate(&self, domain: &[FF]) -> Vec<FF> {
        domain.par_iter().map(|x| self.evaluate(x)).collect()
    }
}

See commit message in fd5add7 for more info.
