Pull requests: openxla/xla
Add better builder method for PrecisionConfigAttr
#28205 opened Jun 25, 2025 by copybara-service[bot]
Add initial support for broadcasts in XnnGraphFusion.
#28204 opened Jun 25, 2025 by copybara-service[bot]
[xla] Add incorrectly formatted json string to the error message when converting json to proto. While here - fix imports, use absl types and std::string.
#28203 opened Jun 25, 2025 by copybara-service[bot]
Fix a problem in Shape::Equal in comparing buffer types.
#28202 opened Jun 25, 2025 by copybara-service[bot]
Remove UpdateEntryComputationLayout from HloRunnerPjRt.
#28201 opened Jun 25, 2025 by copybara-service[bot]
#HLODiff Add bipartite matching to GreedyTopDownMatcher.
#28200 opened Jun 25, 2025 by copybara-service[bot]
Fix newly-broken debug_options_flags_test.
#28199 opened Jun 24, 2025 by copybara-service[bot]
Add CopyToRemote() to CommonPjRtBufferImpl.
#28198 opened Jun 24, 2025 by copybara-service[bot]
Move tensorflow/third_party/tensorrt to xla/third_party/tensorrt
#28197 opened Jun 24, 2025 by copybara-service[bot]
[xla:copy_insertion] Fixed a problem in finding a rotated non-copyable chain.
#28196 opened Jun 24, 2025 by copybara-service[bot]
Add metadata for CUDA and libtpu versions
#28195 opened Jun 24, 2025 by copybara-service[bot]
Update socket transfers to call new transfer library APIs.
#28193 opened Jun 24, 2025 by copybara-service[bot]
[XLA:GPU] Store upper bounds of the tile directly in the SymbolicTile.
#28192 opened Jun 24, 2025 by copybara-service[bot]
[NVIDIA GPU] Add copies for collective memory ops if they are consuming from constant or module inputs
#28190 opened Jun 24, 2025 by Tixxx
Add an option to enable GPU collective cancelling.
#28189 opened Jun 24, 2025 by copybara-service[bot]
Integrate LLVM at llvm/llvm-project@13bb7948c914
#28187 opened Jun 24, 2025 by copybara-service[bot]
[XLA:GPU]: Calculate launch dimensions based on input size.
#28186 opened Jun 24, 2025 by copybara-service[bot]
Integrate LLVM at llvm/llvm-project@bae48ac3c0e6
#28185 opened Jun 24, 2025 by copybara-service[bot]
Extend WhileLoopAllReduceCodeMotion pass with a new pattern (DUS)
#28184 opened Jun 24, 2025 by sergey-kozub
[XLA:GPU] enable dynamic-slice instruction in generic triton support (try 2)
#28183 opened Jun 24, 2025 by copybara-service[bot]
Roll forward - add back checking if buffers are available
#28182 opened Jun 24, 2025 by copybara-service[bot]
Add a tuple sharding when creating get-tuple-element(tuple(single_result)).
#28180 opened Jun 24, 2025 by copybara-service[bot]
Refactor py_import macros to avoid unpacking pypi wheels twice.
#28179 opened Jun 24, 2025 by copybara-service[bot]
[XLA:GPU] Refactor code figuring out a support for unified latency estimator.
#28176 opened Jun 24, 2025 by copybara-service[bot]