Pull requests: google-ai-edge/ai-edge-quantizer
Add support for UNPACK op for int8 and int16 (#276, opened Jun 30, 2025 by copybara-service bot)
Add support for PACK op for int8 and int16 (#275, opened Jun 30, 2025 by copybara-service bot)
Fix a buffer sharing bug for composite op quantization (#274, opened Jun 23, 2025 by copybara-service bot)
De-duplicate zero points for per channel quantized tensors when all the zero points are the same (#256, opened May 30, 2025 by copybara-service bot; see the first sketch below)
Conditionally dynamically allocate TfLiteFloatArray data member (#252, opened May 26, 2025 by copybara-service bot)
Add support for PADV2 op for int8 and int16 (#246, opened May 14, 2025 by copybara-service bot)
Implement Hadamard rotation reference as a custom op (#232, opened May 3, 2025 by copybara-service bot; see the second sketch below)
Fix incorrect im2col size allocation with INT4 filter (#158, opened Oct 16, 2024 by copybara-service bot)
Complete i4 FullyConnected support for TFLite (#155, opened Oct 5, 2024 by copybara-service bot)
Create a python schema_generated target on the compiler mlir side, and use that for odml/model_customization/quantization (#140, opened Sep 20, 2024 by copybara-service bot)
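The title of #232 mentions a Hadamard rotation reference. As a general technique, multiplying a weight matrix by a normalized Hadamard matrix spreads outlier values evenly across a channel before quantization, and the transform is exactly invertible because the matrix is orthogonal. The sketch below only illustrates that property with scipy and numpy under those assumptions; it is not the custom op implemented by the PR.

```python
import numpy as np
from scipy.linalg import hadamard


def hadamard_rotate(w: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Rotate the last dimension of w by a normalized Hadamard matrix.

    Returns the rotated weights and the orthogonal matrix H so the rotation
    can be undone (or folded into the other side of a matmul) later.
    """
    n = w.shape[-1]
    assert n & (n - 1) == 0, "Hadamard size must be a power of two"
    h = (hadamard(n) / np.sqrt(n)).astype(w.dtype)  # orthogonal: h @ h.T == I
    return w @ h, h


w = np.zeros((4, 8), dtype=np.float32)
w[0, 0] = 100.0  # a single outlier dominates the quantization range
w_rot, h = hadamard_rotate(w)

# The outlier's energy is spread across the row: max |value| drops by sqrt(8).
print(np.abs(w).max(), np.abs(w_rot).max())   # 100.0 vs ~35.36

# The rotation is exactly invertible because h is orthogonal.
np.testing.assert_allclose(w_rot @ h.T, w, atol=1e-4)
```

A smaller dynamic range after rotation means the fixed-point grid covers the useful values more densely, which is why such rotations are paired with int8/int4 quantization.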