
updated bmm and matmul for GPT-OSS #999

Open

kinjalpatel27 wants to merge 1 commit into main from kinjal/gpt_oss_recur

Conversation

@kinjalpatel27
Contributor

kinjalpatel27 commented Mar 6, 2026

What does this PR do?

This PR fixes a maximum recursion bug for GPT-OSS. It replaces torch.bmm and torch.matmul with torch.ops.aten.bmm and torch.ops.aten.matmul, which dispatch directly to the ATen kernels and so avoid re-entering the patched Python-level functions.
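For intuition, here is a minimal sketch (an assumed illustration, not the PR's actual code) of how a patched torch.matmul can re-enter itself, and why dispatching through torch.ops.aten.matmul breaks the cycle:

import torch

_original_matmul = torch.matmul

def quantized_matmul(a, b):
    # (input quantization would happen here in a real wrapper)
    # Calling torch.matmul here would re-enter this very wrapper and
    # eventually raise RecursionError; torch.ops.aten.matmul dispatches
    # directly to the ATen kernel, bypassing the patched attribute.
    return torch.ops.aten.matmul(a, b)

torch.matmul = quantized_matmul
out = torch.matmul(torch.randn(4, 8), torch.randn(8, 2))  # no recursion
torch.matmul = _original_matmul  # restore the original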

Usage

Docker image: nvcr.io/nvidia/tensorrt-llm/release:1.3.0rc4

Repro steps (gpt-oss):

Step 1:
accelerate launch --config_file configs/zero3.yaml sft.py --config configs/sft_full.yaml --model_name_or_path openai/gpt-oss-20b --output_dir /tmp/pytest-of-root/pytest-0/test_gpt_oss_complete_pipeline0/gpt-oss-20b-sft

Step 1 completed: SFT checkpoint at /tmp/pytest-of-root/pytest-0/test_gpt_oss_complete_pipeline0/gpt-oss-20b-sft


Step 2:

accelerate launch --config_file configs/zero3.yaml sft.py --config configs/sft_full.yaml --model_name_or_path /tmp/pytest-of-root/pytest-0/test_gpt_oss_complete_pipeline0/gpt-oss-20b-sft --quant_cfg MXFP4_MLP_WEIGHT_ONLY_CFG --output_dir /tmp/pytest-of-root/pytest-0/test_gpt_oss_complete_pipeline0/gpt-oss-20b-qat

Testing

pytest tests/examples/gpt_oss/test_gpt_oss_qat.py

Before your PR is "Ready for review"

  • Make sure you read and follow the Contributor guidelines and your commits are signed (git commit -s -S).
  • Make sure you read and follow the Security Best Practices (e.g., avoid hardcoding trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).
  • Is this change backward compatible?: ✅
  • If you copied code from any other sources or added a new PIP dependency, did you follow the guidance in CONTRIBUTING.md?: N/A
  • Did you write any new necessary tests?: N/A (tests already exist)
  • Did you update the Changelog?: N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes
    • Improved quantization stability for certain model architectures by resolving internal dispatch recursion issues during matrix operations.

Signed-off-by: Kinjal Patel <kinjalpravin@nvidia.com>
kinjalpatel27 requested a review from a team as a code owner March 6, 2026 20:47
kinjalpatel27 requested a review from Fridah-nv March 6, 2026 20:47
@coderabbitai
Contributor

coderabbitai bot commented Mar 6, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 31d51bb2-f57c-4cbf-8b8a-991fc74067e4

📥 Commits

Reviewing files that changed from the base of the PR and between be6dfad and 8488dfc.

📒 Files selected for processing (1)
  • modelopt/torch/quantization/plugins/huggingface.py

📝 Walkthrough

This change modifies the quantization plugin to implement a resilient quantization path for GptOssExperts by routing matmul and bmm operations through explicit ATen implementations instead of the standard torch dispatchers, which avoids Python dispatch recursion. It also adds an explicit override for torch.Tensor.__matmul__.
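One nuance behind that override (the illustration below is assumed, not the plugin's code): the @ operator dispatches through torch.Tensor.__matmul__ rather than the module-level torch.matmul attribute, so a hook installed only on torch.matmul would miss a @ b call sites.

import torch

a, b = torch.randn(2, 3), torch.randn(3, 4)
# "a @ b" resolves via torch.Tensor.__matmul__, not via the module-level
# torch.matmul attribute, so both entry points must be intercepted to
# cover every matmul call site in model code.
assert torch.equal(a @ b, torch.Tensor.__matmul__(a, b))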

Changes

Cohort / File(s) — Summary

  • Quantization Plugin (modelopt/torch/quantization/plugins/huggingface.py): Caches and uses _aten_bmm and _aten_matmul for internal tensor operations, replaces direct torch.bmm and torch.matmul calls with their ATen equivalents, extends the functionals_to_replace list with an explicit torch.Tensor.__matmul__ override, and adds comments documenting the recursion avoidance.
  • Configuration (pyproject.toml): Minor configuration updates related to the quantization plugin changes.
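The caching mentioned above presumably looks something like the following sketch (the names _aten_bmm and _aten_matmul come from the summary; the helper function is hypothetical). Binding the ATen overloads to module-level names once means later replacement of the public torch functions cannot re-route these internal calls.

import torch

# Bind the ATen ops once at import time; monkey-patching torch.bmm or
# torch.matmul afterwards has no effect on these cached references.
_aten_bmm = torch.ops.aten.bmm
_aten_matmul = torch.ops.aten.matmul

def _expert_bmm(x, w):
    # hypothetical helper illustrating use of the cached handle
    return _aten_bmm(x, w)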

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (3 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title accurately describes the main change: replacing bmm and matmul operations for GPT-OSS to fix recursion issues.
  • Security Anti-Patterns ✅ Passed — The pull request does not introduce any security anti-patterns specified in SECURITY.md. Changes use standard PyTorch ATen operations with appropriate documentation.



@codecov

codecov bot commented Mar 6, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.11%. Comparing base (e8f9687) to head (8488dfc).
⚠️ Report is 8 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #999      +/-   ##
==========================================
- Coverage   72.12%   72.11%   -0.02%     
==========================================
  Files         209      209              
  Lines       23628    23638      +10     
==========================================
+ Hits        17042    17046       +4     
- Misses       6586     6592       +6     

☔ View full report in Codecov by Sentry.
