
Register choose_qparams_affine_float8 as custom op #2461


Merged: 1 commit merged into main on Jul 1, 2025

Conversation

@angelayi (Contributor) commented Jun 30, 2025

Addresses #2456
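The op being registered here computes quantization parameters for float8. As a rough, hypothetical sketch of the per-tensor case (the function name and structure below are illustrative, not torchao's actual implementation), the scale is the tensor's maximum absolute value divided by the largest finite value of the float8 e4m3fn format, 448.0:

```python
# Illustrative per-tensor float8 scale selection; NOT torchao's code.
FP8_E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3fn


def choose_scale_per_tensor(values, eps=1e-12):
    """Return a scale s such that values / s fits in the e4m3fn range."""
    amax = max(abs(v) for v in values)
    # Clamp with eps so an all-zero tensor does not produce a zero scale.
    return max(amax, eps) / FP8_E4M3_MAX


scale = choose_scale_per_tensor([-3.5, 0.25, 448.0])  # amax is 448.0, so scale is 1.0
```

Registering such a function as a custom op (the subject of this PR) lets it appear as a single opaque node under `torch.export` instead of being traced through.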

pytorch-bot commented Jun 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2461

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 54e7649 with merge base 6dfba04:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the "CLA Signed" label on Jun 30, 2025
@angelayi added the "topic: not user facing" label on Jun 30, 2025
@angelayi requested a review from @jerryzh168 on Jun 30, 2025 at 19:58
@angelayi marked this pull request as ready for review on Jun 30, 2025 at 19:58
@jerryzh168 (Contributor) left a comment

Looks good to me. cc @drisspg on using an empty list for per-tensor quant.
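The "empty list" remark refers to the convention that an empty `block_size` denotes per-tensor granularity, i.e. a single quantization block spanning the whole tensor. A minimal sketch of that normalization (the helper name is hypothetical, not torchao's API):

```python
def normalize_block_size(block_size, tensor_shape):
    """Interpret an empty block_size list as per-tensor quantization:
    one block covering every dimension of the tensor."""
    if len(block_size) == 0:
        return list(tensor_shape)  # single block spanning the whole tensor
    if len(block_size) != len(tensor_shape):
        raise ValueError("block_size rank must match tensor rank")
    return list(block_size)


normalize_block_size([], (4, 8))      # per-tensor: [4, 8]
normalize_block_size([1, 8], (4, 8))  # per-row: [1, 8]
```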

@@ -2076,6 +2078,26 @@ def forward(self, x):
self.assertTrue(torch.ops.torchao.choose_qparams_affine.default in targets)
self.assertTrue(torch.ops.torchao.quantize_affine.default in targets)
self.assertFalse(torch.ops.aten.narrow.default in targets)

def test_export_float8(self):

need this skip according to CI:

@unittest.skipIf(
not is_sm_at_least_89(), "Requires GPU with compute capability >= 8.9"
)
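The `is_sm_at_least_89` guard presumably queries the CUDA device's compute capability; float8 matmul support requires sm_89 (Ada) or newer. The comparison itself can be sketched in pure Python with a hypothetical helper that works on `(major, minor)` tuples:

```python
def sm_at_least(capability, required=(8, 9)):
    """True if a (major, minor) compute capability meets the requirement.

    Tuple comparison orders by major version first, then minor, which is
    exactly the ordering compute capabilities use. Hypothetical helper;
    torchao's real check would read the capability from the CUDA runtime.
    """
    return tuple(capability) >= tuple(required)


sm_at_least((9, 0))  # H100: True
sm_at_least((8, 6))  # A10/RTX 30xx: False, hence the CI skip
```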

@angelayi merged commit 0aa89a8 into main on Jul 1, 2025
19 checks passed