Pull requests: tensorflow/tensorflow
- Pass GpuCompatibilityFlags to CheckGpuDelegateCompatibility. (#69809) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- Add patch ahead of LLVM integrate to fix CI (#69805) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- Revert [XLA] Make space-to-batch propagate through reduces that do not touch the respective space and batch dimensions (#69800) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- Add a tag to remove a few targets from internal code coverage computation. (#69787) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- PR #13787: [GPU] Fix and cleanup cuDNN GEMM fusion tests. (#69781) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- PR #13497: Swap inner and outer minor reduced dimension of tree reduction (#69774) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [XLA] Add shardings for implicit operands and return values of CaseOp and IfOp. (#69773) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- [XLA:GPU][MLIR-based indexing] Clean-up before removing tiling. (#69772) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- Integrate LLVM at llvm/llvm-project@da249cad8d39 (#69766) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [XLA:GPU] Enable H100 for triton legacy support test (#69765) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [XLA:GPU] Make Interval & IndexingMap properly hashable (#69764) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- Make sure that the same serialization is used for backend config. (#69763) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- PR #13513: Prevent XLA crash in case if PATH variable is not set (#69761) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- PR #13760: Increase alignment of Traits::Params to 128 (#69759) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [XLA:GPU] Disable CuDnnFusionLevel2Test.ClampExecutesCorrectly which is failing with CUDNN_BACKEND_OPERATION: cudnnFinalize Failed. (#69757) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [xla:cpu] NFC: Remove MLIRContext from dot emitter (#69755) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- PR #13768: [XLA:GPU] Add synchronized allocation mode for cuda async memory allocator (#69749) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- [xla:cpu] Add a flag to set preferred vector width for LLVM backend (#69747) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- [xla:cpu] Add optimizer micro-benchmark (#69746) by copybara-service[bot] · merged Jun 15, 2024 · Draft
- Adding testing infrastructure for gather fusions. (#69741) by copybara-service[bot] · merged Jun 14, 2024 · Draft
- test metadata presubmits, do not actually submit (#69739) by copybara-service[bot] · closed Jun 13, 2024 · Draft