
gpu scan bench #6714

Open
onursatici wants to merge 2 commits into develop from os/gpu-scan-bench

Conversation

@onursatici
Contributor

Summary

Add a GPU scan binary for benchmarks; this is not wired into CI yet.

@onursatici onursatici added the changelog/feature A new feature label Feb 27, 2026
@codspeed-hq

codspeed-hq bot commented Feb 27, 2026

Merging this PR will not alter performance

✅ 959 untouched benchmarks
⏩ 1466 skipped benchmarks¹


Comparing os/gpu-scan-bench (eed8627) with develop (905e9a8)

Open in CodSpeed

Footnotes

  1. 1466 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Mar 1, 2026

Deploying vortex-bench with Cloudflare Pages

Latest commit: f08430d
Status: ✅  Deploy successful!
Preview URL: https://8d3ea536.vortex-93b.pages.dev
Branch Preview URL: https://os-gpu-scan-bench.vortex-93b.pages.dev

View logs

@codecov

codecov bot commented Mar 2, 2026

Codecov Report

❌ Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 81.36%. Comparing base (fb9a332) to head (f08430d).
⚠️ Report is 23 commits behind head on develop.

Files with missing lines Patch % Lines
vortex-cuda/gpu-scan-bench/src/main.rs 0.00% 1 Missing ⚠️

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@0ax1 0ax1 self-requested a review March 2, 2026 13:34
// SPDX-License-Identifier: Apache-2.0
// SPDX-FileCopyrightText: Copyright the Vortex contributors

#![allow(unused_imports)]
Contributor

Can we scope this more narrowly, or use cuda_available/cuda_not_available? We could put all of the CUDA logic in a separate module/file and include that mod with cuda_available. wdyt?

if cli.json {
let log_layer = tracing_subscriber::fmt::layer()
.json()
.with_span_events(FmtSpan::NONE)
Contributor

Should we hoist out shared construction of the log_layer?

cuda_ctx.stream().context(),
)));
let cuda_stream =
VortexCudaStreamPool::new(Arc::clone(cuda_ctx.stream().context()), 1).get_stream()?;
Contributor

This needs to be adjusted to the latest develop: get_stream => stream

Contributor

@0ax1 0ax1 Mar 2, 2026

Side note: I'll change the CI tasks to at least build all CUDA code on each PR.

let mut batches = gpu_file.scan()?.into_array_stream()?;

let mut chunk = 0;
while let Some(next) = batches.next().await.transpose()? {
Contributor

What batch size is this here, 1MB? And we'll call multiple kernels per batch here. That'll be very expensive.

Contributor

let it rip

Signed-off-by: Onur Satici <onur@spiraldb.com>
Signed-off-by: Onur Satici <onur@spiraldb.com>

Labels

changelog/feature A new feature


3 participants