
spirv-std functions to query compute builtins#535

Open
fluffysquirrels wants to merge 6 commits into Rust-GPU:move_mod from fluffysquirrels:spirv-std-builtin-fn

Conversation

@fluffysquirrels

@fluffysquirrels fluffysquirrels commented Feb 20, 2026

Requires #540

This PR adds new API to query compute and subgroup builtins via getter functions:

  • new mod spirv_std::compute with builtins:
    • local_invocation_index() -> u32
    • local_invocation_id() -> UVec3
    • global_invocation_id() -> UVec3
    • num_workgroups() -> UVec3
    • workgroup_id() -> UVec3
    • missing gl_WorkgroupSize equivalent, see below
  • new builtins in spirv_std::subgroup:
    • num_subgroups() -> u32
    • subgroup_id() -> u32
    • subgroup_size() -> u32
    • subgroup_invocation_id() -> u32
    • subgroup_*_mask() -> SubgroupMask
  • post-link pass to deduplicate Input OpVariables with Builtin decoration
    • entry points must not have duplicate Builtins declared in the storage classes (Input | Output | Output with PerPrimitive decoration)

about gl_WorkgroupSize

In SPIR-V, there's a WorkgroupSize built-in which you have to apply not to an OpVariable like every other built-in, but to an OpConstant, and applying built-ins to constants has been deprecated. Instead, the recommendation is that shader compilers look at the ExecutionMode LocalSize of their compute shader and just return that when gl_WorkGroupSize is read. glslc literally compiles a gl_WorkGroupSize read into %gl_WorkGroupSize = OpConstantComposite %v3uint %uint_1 %uint_2 %uint_3 without even decorating it with WorkgroupSize, since that's been deprecated. But that's not trivial in rust-gpu, since we may have a single module with two compute shaders, and each compute shader may have a different workgroup size. So an fn workgroup_size() -> UVec3 intrinsic would need to be somehow specialized per shader, even if it is called through some non-generic function both shaders use. Possible solutions:

  1. Force-inline all functions that may lead to a workgroup_size() call, so it can be specialized; but since nothing is passed to the function by reference, this will likely be annoying to implement.
  2. Have a global OpVariable that the entry point fills with the value of WorkGroupSize, and have the intrinsic just read it. This would require support for statics, which I'd honestly like to have for other custom use cases as well.

I don't feel like we need to support this right away and can delay its implementation.

old

This is a continuation of Firestar99's work in #459. I created a new PR only to have somewhere to show my changes.

A small problem: if an entry point accepts a builtin as a parameter and code is generated for the getter function for that same builtin, then two globals are emitted, both decorated with the same builtin, and validation fails.

See the failing test compute_duplicates.

SPIR-V output:

OpDecorate %2 BuiltIn LocalInvocationIndex
OpDecorate %3 BuiltIn LocalInvocationIndex     

compiletest error message:

error: error:0:0 - [VUID-StandaloneSpirv-OpEntryPoint-09658] OpEntryPoint contains duplicate input variables with LocalInvocationIndex builtin
         %gl_LocalInvocationIndex = OpVariable %_ptr_Input_uint Input
   |
   = note: spirv-val failed
   = note: module `$TEST_BUILD_DIR/builtin/compute_duplicates.vulkan1.2`

error: aborting due to 1 previous error

For ease of use and porting to the new builtin getters, I'd like this to compile successfully by de-duplicating the global variable for the builtin. I think I see similar code in /crates/rustc_codegen_spirv/src/linker/duplicates.rs; is that the right way to handle it? I will try to write this.

I also considered building a map between builtin names and result IDs in the assembler and the entry-point codegen. I think that might run faster, but it would have higher code complexity and need to touch more places.

@Firestar99
Member

Firestar99 commented Feb 20, 2026

Give me a little bit more time to work on my version. Don't get me wrong, I think your version is fine, I'm just wondering if we can extract more information from the intrinsics.

Like global_invocation_id gives you a UVec3 that is known to be unique across all (= global) invocations. So if your workgroup is 1D (e.g. compute(threads(32, 1, 1))), you can write to some output buffer safely using global_invocation_id().x, since you know every invocation has its own unique "slot" in the output buffer and there won't be any overlapping writes (which would be UB). So we may be able to provide a safe API for many common write patterns, and of course an unsafe escape hatch.

@fluffysquirrels
Author

fluffysquirrels commented Feb 20, 2026

I was also wondering about safe collection patterns without data races, the equivalent of rayon but for GPU collections. I considered making those newtype wrappers (from my first attempt) immutable and using them as a safe proof that an invocation really did own its slot in the collection.

That is still possible later even if spirv_std::builtin::* return these raw IDs such as UVec3 or u32; I was considering having the newtype wrappers just initialise themselves using these builtin getters.

@fluffysquirrels
Author

Here are some thoughts on safe GPU collections. Is there already a discussion thread or issue for these, or should we make one?

In my compute project I'm playing with implementing some standard algorithms, e.g. reduce, scan, filter, map. For coalesced access I'm using the normal pattern: each workgroup takes a slice of the total data, and each thread operates on one value at a time, at index local_invocation_index + n * WG_LEN for iteration n.

I was thinking a workgroup-level data-race-free API could expose this as a map_into algorithm that takes a &mut [Output] and inserts control barriers as necessary for the whole workgroup to finish.

For simple compute grids with multiple workgroups, the collection fn could consume the target collection to model it not being usable again until the whole grid finishes.

For prefix scan I'm implementing approximately StreamScan (best described IMO in NVIDIA's "Single-pass Parallel Prefix Scan with Decoupled Look-back" paper, if you don't know it). I use a pattern where each workgroup increments a global atomic to take the next sub-slice of data to work on, and increments another when complete. Once all blocks are assigned, workgroups spin-wait on the completed-blocks count to reach the expected final count; this acts as a global barrier before proceeding to the next phase of the algorithm.

This seemed sufficiently useful and general to extract into a re-usable algorithm. It would be fantastic for the API to mutably borrow the output collections during such a phase, hopefully guaranteeing data-race freedom. I was thinking a phase could be modelled like a rayon scope.

@fluffysquirrels
Author

There's a bug in the new linker dedupe pass that I'm halfway through fixing.

@fluffysquirrels
Author

Linker builtin de-dupe bug fixed!

@fluffysquirrels

This comment was marked as outdated.

@Firestar99
Member

I dumped my thoughts here: Rust-GPU/rust-gpu.github.io#96

TLDR:

But if there's a takeaway from this experiment, it's that the uniqueness property of global_invocation_id is not worth preserving in the type system, as it is extremely difficult to take advantage of. Instead, we should get on with implementing builtins as getter functions, and have fn global_invocation_id() just return a simple UVec3.

@fluffysquirrels

This comment was marked as outdated.

@Firestar99 Firestar99 marked this pull request as ready for review February 23, 2026 14:05
@Firestar99

This comment was marked as outdated.

@Firestar99

This comment was marked as outdated.

@fluffysquirrels

This comment was marked as outdated.

@Firestar99

This comment was marked as outdated.

@Firestar99 Firestar99 changed the title spirv-std functions to query builtins (fluffy's fork) spirv-std functions to query compute builtins Mar 3, 2026
@Firestar99 Firestar99 changed the base branch from main to move_mod March 3, 2026 10:53
@Firestar99
Member

I'm gonna repurpose this PR to add just the compute and subgroup builtin getters, and worry about the graphics stuff in another PR. Which also means I'm merging #539 into this one, to have one complete working implementation, instead of splitting it up into two PRs.

@Firestar99 Firestar99 force-pushed the spirv-std-builtin-fn branch from 08ccd9d to 286d803 Compare March 3, 2026 11:25
@Firestar99 Firestar99 force-pushed the spirv-std-builtin-fn branch 2 times, most recently from 0f1b95b to 708a6d6 Compare March 3, 2026 13:22
@Firestar99 Firestar99 force-pushed the spirv-std-builtin-fn branch from 708a6d6 to adcba8d Compare March 3, 2026 13:24
@Firestar99
Member

@fluffysquirrels I may have found an entirely new way to implement the linker fixup... The existing type dedup pass added the name (as in OpName) of the OpVariable to the dedup key. Removing it dedups any getter and attribute-based builtin declarations.

I think this should also be fine with other usages of OpVariable. The key includes all other decorations on the variable, so buffers should work just fine since they have DescriptorSet and Binding decorations, similarly vertex and fragment in / out have Location decorations (and In / Out too).

@Firestar99 Firestar99 force-pushed the spirv-std-builtin-fn branch from adcba8d to f382cfb Compare March 3, 2026 13:56
/// Query SPIR-V (read-only global) built-in values
///
/// See [module level documentation] on how to use these.
#[macro_export]
Author


I deliberately kept load_builtin! private as an implementation detail. As is, it's very unsafe. It's also easy to produce something that won't compile but gives an inscrutable error message, or that won't validate.

I think this should remain an internal implementation detail.

Author


Separating all the builtins into modules in the root makes it a little harder to find them all. I think (especially for compute) someone using the subgroup builtins is also likely to use the compute builtins, so keeping these modules nested under crate::builtin makes sense to me.

In std (and other crates) I've seen people keep all the low-level primitives / intrinsics in std::arch or similar. I think that pattern makes sense.

Any higher-level abstraction on top such as safe collections (alloc::Vec in std or whatever parallel collection for spirv here) could then have its own module in the root.

Member

@Firestar99 Firestar99 Mar 3, 2026


This won't work for the graphics builtins. Here's a message I sent internally last week about this:

  • Easy case: some built-ins can be "inherited", e.g. local_invocation_id from compute shaders is used in mesh shaders and ray-gen shaders, because they fundamentally function like augmented compute shaders. We'll likely just define it once for compute shaders and you can reuse it.
  • Medium case: gl_PrimitiveID
    • vertex, tessellation & fragment shaders: an input
    • geometry shaders:
      • gl_PrimitiveIDIn: an input
      • gl_PrimitiveID: an output
    • mesh shaders:
      • gl_MeshPerPrimitiveEXT[].gl_PrimitiveID: an unsized per-primitive output array
  • Hard case: gl_Position
    • vertex shader: an output variable that is written to via gl_Position
    • tessellation control | evaluation shaders:
      • gl_in[gl_MaxPatchVertices].gl_Position: a sized arrayed input variable you can read to get the Nth vertex position
      • gl_Position: an output variable you can write
    • geometry shader:
      • an unsized arrayed input variable you can read to get the Nth vertex position via gl_in[].gl_Position
      • gl_Position: an output variable you can write

If you split them into one module per shader type (like in #540), you can define the builtin for each shader separately and exactly like is required.

Author


Yes, I meant one module per shader type, so crate::builtins::compute::*, crate::builtins::vertex::*, etc., just as you had in your first pull request and first commit.

As I mentioned elsewhere (but you may have missed), I think output builtin setters would be confusing, as it will be difficult to see in the code when and where they are set.

For compute shaders, I think you always have all the input builtins, whereas e.g. for vertex shaders what is available depends more on what has been configured, is that correct? I also think it would be confusing not to be able to see all the input params in one place.

Member


Moving just the subgroup builtins from subgroup to compute would be confusing, as these builtins are also hidden behind subgroup capabilities. I wonder if #540 should move spirv_std::arch::subgroup to spirv_std::compute::subgroup instead of spirv_std::subgroup? But I feel like that's creating extra module depth without being useful.

On output setters: I also think they need more design work, which is why I separated out the compute parts and we'll do graphics later. But have you looked at the current function declaration? Because it's very much not a setter: fn decl_position_out() -> &'static mut Vec4. Calling it declares the position output variable and returns a mutable reference to write the position to, thus making it unique. This design still has the issue that you can call it multiple times to declare multiple output positions, which is illegal per shader. And that's incredibly difficult to validate, since we're more or less forced to merge all variables of the same type, as one crate's function could be inlined into two other crates even though it's only one function.

So I'm currently thinking of marking these functions as unsafe and hiding them a little, so they're only used via the new proc macro (to be designed).

Another idea I just had: We may also be better off with a wrapper type like struct Position(&'static mut Vec4) that represents the OpVariable and is passed as an arg to the entry point, which would prevent duplication problems.

Author


crate::builtins::subgroup::* for the subgroup ones I think is fine if you want them separate from compute, e.g. if they are re-used for other shader types.

Vulkan 1.1 (released 2018) included the subgroup capabilities, I believe, so I doubt they are rare enough to be packaged away separately. From a quick check of vulkan.gpuinfo.org reports (from the last 2 years, excluding CPU implementations), 0.4% of reports supported none of the subgroup features.

Output fns: whether it is a direct setter or returns an &mut, the key problem to me is that it is not clear if and where the output value is set. If it's an entry-point parameter, you can grep for, or just read, what writes that parameter, and if the parameter is passed as an argument to other fns you must inspect those too. If it's written with an output fn, I think you would need to search the whole codebase for uses of that output fn, then figure out which paths are actually called.

Fine in small examples, potentially very nasty in a framework. It's the difference between finding usages of a local variable and a global variable across your code base.

having these functions marked as unsafe and hide them a little, so they're only used via the new proc macro

Sounds fine if they're #[doc(hidden)].

wrapper type like struct Position(&'static mut Vec4) that represents the OpVariable and is passed as an arg to the entry point

Like my first PR? Sounds good to me for a typed unique output variable :)

Author


For setting a variable at most once, the output wrapper can be opaque and consumed when you set the output: fn set(self: Position, v: Vec4).

Member


Wait, on crate::builtins::subgroup::*: there are no modules in builtins, they're all in the crate root. So crate::compute contains the compute builtins, crate::subgroup contains all the subgroup intrinsics and the subgroup builtins, and similarly crate::fragment contains kill(), demote_to_helper_invocation(), and will contain fragment builtins such as frag_coord() -> Vec4 in future PRs.

Author


I know you have that now; I was proposing that the low-level stuff be grouped somewhere (as crate::arch is now).

But thinking again I prefer grouping by shader type and use case, as you describe.

As you probably know, ray tracing shaders also have subgroup builtins but with different semantics as the subgroup IDs can change during the invocation.

I think I would like the module docs for crate::compute and crate::raytrace to include "see also: crate::subgroup" or similar, if they are both using the same subgroup builtins there.
