Blogs

FlexAttention I

The Flexibility of PyTorch with the Performance of FlashAttention

By Team PyTorch

FlexAttention II: FlexDecoding

Using FlexAttention for inference: a backend optimized for decoding and PagedAttention.

By Team PyTorch

Helion

A High-Level DSL (PyTorch with Tiles) for Performant and Portable ML Kernels

By Team PyTorch

mm2-gb

AMD Collaboration with the University of Michigan Offers High-Performance Open-Source Solutions to the Bioinformatics Community