Commit f9d73761, authored 1 year ago by Tri Dao
Bump to v2.4.3
parent 0399432d
Branches / tags containing this commit: main, v2.5.9, v2.5.9.post1, v2.5.8, v2.5.7, v2.5.6, v2.5.5, v2.5.4, v2.5.3, v2.5.2, v2.5.1, v2.5.1.post1, v2.5.0, v2.4.3, v2.4.3.post1
Showing 2 changed files with 2 additions and 2 deletions:
flash_attn/__init__.py (+1, -1)
training/Dockerfile (+1, -1)
flash_attn/__init__.py
-__version__ = "2.4.2"
+__version__ = "2.4.3"
 from flash_attn.flash_attn_interface import (
     flash_attn_func,
 ...
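As a minimal sketch (not part of this commit), the change above can be checked after installing the bumped release: the package should report the new version, and flash_attn_func remains importable from the top-level package as shown in __init__.py. The tensor shapes and keyword arguments below follow FlashAttention's documented (batch, seqlen, nheads, headdim) convention; the concrete sizes are illustrative assumptions.

import torch
import flash_attn
from flash_attn import flash_attn_func

# After `pip install flash-attn==2.4.3`, the package reports the bumped version.
assert flash_attn.__version__ == "2.4.3"

# flash_attn_func expects fp16/bf16 CUDA tensors shaped (batch, seqlen, nheads, headdim).
q = torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)

out = flash_attn_func(q, k, v, causal=True)  # output has the same shape as q
print(out.shape)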
training/Dockerfile
 ...
@@ -85,7 +85,7 @@ RUN pip install transformers==4.25.1 datasets==2.8.0 pytorch-lightning==1.8.6 tr
 RUN pip install git+https://github.com/mlcommons/logging.git@2.1.0
 # Install FlashAttention
-RUN pip install flash-attn==2.4.2
+RUN pip install flash-attn==2.4.3
 # Install CUDA extensions for fused dense
 RUN pip install git+https://github.com/HazyResearch/flash-attention@v2.4.2#subdirectory=csrc/fused_dense_lib
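The last context line in the hunk installs the fused-dense CUDA extension from csrc/fused_dense_lib. As a hedged illustration (not part of the diff), that extension is typically consumed through the flash_attn.ops.fused_dense wrapper; the class name and argument layout below are assumptions against the 2.4.x series and should be checked against the installed package.

import torch
# Assumed wrapper module; requires the fused_dense_lib extension built above.
from flash_attn.ops.fused_dense import FusedDense

# FusedDense is assumed to act as a drop-in replacement for torch.nn.Linear.
layer = FusedDense(512, 2048, bias=True).to(device="cuda", dtype=torch.float16)
x = torch.randn(4, 128, 512, device="cuda", dtype=torch.float16)
y = layer(x)  # same semantics as nn.Linear: output shape (4, 128, 2048)
print(y.shape)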