Mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-22 14:53:56 +00:00
Compare commits (1309 commits)
[Commit table: 1309 commits, from cb13784a11 (first listed) through 84b84ce2db (last listed). Only the bare SHA1 hashes survived extraction; the Author, Date, and commit-message columns were lost, so the individual rows are not reproduced here.]
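The same range can be inspected locally from the upstream this repository mirrors. A minimal sketch, assuming the table was ordered newest-first (cb13784a11 at the top, 84b84ce2db at the bottom) and that both commits are reachable from the default branch:

# Sketch: clone the mirrored upstream and list the compared range
git clone https://github.com/likelovewant/ollama-for-amd.git
cd ollama-for-amd
git log --oneline 84b84ce2db..cb13784a11   # commits reachable from cb13784a11 but not from 84b84ce2db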
@@ -3,7 +3,9 @@ ollama
 app
 macapp
 dist
-llm/llama.cpp
+build
 .env
 .cache
 test_data
+.git
+
.gitattributes (23 changed lines, vendored)
@@ -1,3 +1,24 @@
-llm/ext_server/* linguist-vendored
+llama/**/*.cpp linguist-vendored
+llama/**/*.hpp linguist-vendored
+llama/**/*.h linguist-vendored
+llama/**/*.c linguist-vendored
+llama/**/*.cu linguist-vendored
+llama/**/*.cuh linguist-vendored
+llama/**/*.m linguist-vendored
+llama/**/*.metal linguist-vendored
+
+ml/backend/**/*.c linguist-vendored
+ml/backend/**/*.h linguist-vendored
+ml/backend/**/*.cpp linguist-vendored
+ml/backend/**/*.hpp linguist-vendored
+ml/backend/**/*.cu linguist-vendored
+ml/backend/**/*.cuh linguist-vendored
+ml/backend/**/*.m linguist-vendored
+ml/backend/**/*.metal linguist-vendored
+ml/backend/**/CMakeLists.txt linguist-vendored
+
+llama/build-info.cpp linguist-generated
+ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s linguist-generated
+
 * text=auto
 *.go text eol=lf
.github/ISSUE_TEMPLATE/10_bug_report.yml (8 changed lines, vendored)
@@ -9,6 +9,14 @@ body:
       description: What happened? What did you expect to happen?
     validations:
       required: true
+  - type: textarea
+    id: logs
+    attributes:
+      label: Relevant log output
+      description: Please copy and paste any relevant log output. See [Troubleshooting Guide](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) for details.
+      render: shell
+    validations:
+      required: false
   - type: dropdown
     id: os
     attributes:
.github/workflows/release.yaml (805 changed lines, vendored)
@@ -5,492 +5,427 @@ on:
|
|||||||
tags:
|
tags:
|
||||||
- 'v*'
|
- 'v*'
|
||||||
|
|
||||||
|
env:
|
||||||
|
CGO_CFLAGS: '-O3'
|
||||||
|
CGO_CXXFLAGS: '-O3'
|
||||||
|
|
||||||
jobs:
|
jobs:
|
||||||
# Full build of the Mac assets
|
setup-environment:
|
||||||
build-darwin:
|
runs-on: ubuntu-latest
|
||||||
runs-on: macos-12
|
|
||||||
environment: release
|
environment: release
|
||||||
|
outputs:
|
||||||
|
GOFLAGS: ${{ steps.goflags.outputs.GOFLAGS }}
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
- name: Set Version
|
- name: Set environment
|
||||||
shell: bash
|
id: goflags
|
||||||
run: |
|
run: |
|
||||||
echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
echo GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=${GITHUB_REF_NAME#v}\" \"-X=github.com/ollama/ollama/server.mode=release\"'" >>$GITHUB_OUTPUT
|
||||||
echo "RELEASE_VERSION=$(echo ${GITHUB_REF_NAME} | cut -f1 -d-)" >> $GITHUB_ENV
|
|
||||||
- name: key
|
|
||||||
env:
|
|
||||||
MACOS_SIGNING_KEY: ${{ secrets.MACOS_SIGNING_KEY }}
|
|
||||||
MACOS_SIGNING_KEY_PASSWORD: ${{ secrets.MACOS_SIGNING_KEY_PASSWORD }}
|
|
||||||
run: |
|
|
||||||
echo $MACOS_SIGNING_KEY | base64 --decode > certificate.p12
|
|
||||||
security create-keychain -p password build.keychain
|
|
||||||
security default-keychain -s build.keychain
|
|
||||||
security unlock-keychain -p password build.keychain
|
|
||||||
security import certificate.p12 -k build.keychain -P $MACOS_SIGNING_KEY_PASSWORD -T /usr/bin/codesign
|
|
||||||
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k password build.keychain
|
|
||||||
security set-keychain-settings -lut 3600 build.keychain
|
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- name: Build Darwin
|
|
||||||
env:
|
|
||||||
APPLE_IDENTITY: ${{ secrets.APPLE_IDENTITY }}
|
|
||||||
APPLE_PASSWORD: ${{ secrets.APPLE_PASSWORD }}
|
|
||||||
APPLE_TEAM_ID: ${{ vars.APPLE_TEAM_ID }}
|
|
||||||
APPLE_ID: ${{ vars.APPLE_ID }}
|
|
||||||
SDKROOT: /Applications/Xcode_13.4.1.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
|
|
||||||
DEVELOPER_DIR: /Applications/Xcode_13.4.1.app/Contents/Developer
|
|
||||||
run: |
|
|
||||||
./scripts/build_darwin.sh
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v4
|
darwin-build:
|
||||||
with:
|
runs-on: macos-13-xlarge
|
||||||
name: dist-darwin
|
|
||||||
path: |
|
|
||||||
dist/*arwin*
|
|
||||||
!dist/*-cov
|
|
||||||
|
|
||||||
# Windows builds take a long time to both install the dependencies and build, so parallelize
|
|
||||||
# CPU generation step
|
|
||||||
generate-windows-cpu:
|
|
||||||
environment: release
|
environment: release
|
||||||
runs-on: windows
|
needs: setup-environment
|
||||||
env:
|
|
||||||
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
- name: Set Version
|
|
||||||
shell: bash
|
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
|
||||||
- uses: 'google-github-actions/auth@v2'
|
|
||||||
with:
|
|
||||||
project_id: 'ollama'
|
|
||||||
credentials_json: '${{ secrets.GOOGLE_SIGNING_CREDENTIALS }}'
|
|
||||||
- run: echo "${{ vars.OLLAMA_CERT }}" > ollama_inc.crt
|
|
||||||
- name: install Windows SDK 8.1 to get signtool
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading SDK"
|
|
||||||
Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=323507" -OutFile "${env:RUNNER_TEMP}\sdksetup.exe"
|
|
||||||
Start-Process "${env:RUNNER_TEMP}\sdksetup.exe" -ArgumentList @("/q") -NoNewWindow -Wait
|
|
||||||
write-host "Win SDK 8.1 installed"
|
|
||||||
gci -path 'C:\Program Files (x86)\Windows Kits\' -r -fi 'signtool.exe'
|
|
||||||
- name: install signing plugin
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading plugin"
|
|
||||||
Invoke-WebRequest -Uri "https://github.com/GoogleCloudPlatform/kms-integrations/releases/download/cng-v1.0/kmscng-1.0-windows-amd64.zip" -OutFile "${env:RUNNER_TEMP}\plugin.zip"
|
|
||||||
Expand-Archive -Path "${env:RUNNER_TEMP}\plugin.zip" -DestinationPath ${env:RUNNER_TEMP}\plugin\
|
|
||||||
write-host "Installing plugin"
|
|
||||||
& "${env:RUNNER_TEMP}\plugin\*\kmscng.msi" /quiet
|
|
||||||
write-host "plugin installed"
|
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$env:PATH"
|
|
||||||
go generate -x ./...
|
|
||||||
name: go generate
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-cpu
|
|
||||||
path: |
|
|
||||||
llm/build/**/bin/*
|
|
||||||
llm/build/**/*.a
|
|
||||||
dist/windows-amd64/**
|
|
||||||
|
|
||||||
# ROCm generation step
|
|
||||||
generate-windows-rocm:
|
|
||||||
environment: release
|
|
||||||
runs-on: windows
|
|
||||||
env:
|
|
||||||
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
- name: Set Version
|
|
||||||
shell: bash
|
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
|
||||||
- uses: 'google-github-actions/auth@v2'
|
|
||||||
with:
|
|
||||||
project_id: 'ollama'
|
|
||||||
credentials_json: '${{ secrets.GOOGLE_SIGNING_CREDENTIALS }}'
|
|
||||||
- run: echo "${{ vars.OLLAMA_CERT }}" > ollama_inc.crt
|
|
||||||
- name: install Windows SDK 8.1 to get signtool
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading SDK"
|
|
||||||
Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=323507" -OutFile "${env:RUNNER_TEMP}\sdksetup.exe"
|
|
||||||
Start-Process "${env:RUNNER_TEMP}\sdksetup.exe" -ArgumentList @("/q") -NoNewWindow -Wait
|
|
||||||
write-host "Win SDK 8.1 installed"
|
|
||||||
gci -path 'C:\Program Files (x86)\Windows Kits\' -r -fi 'signtool.exe'
|
|
||||||
- name: install signing plugin
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading plugin"
|
|
||||||
Invoke-WebRequest -Uri "https://github.com/GoogleCloudPlatform/kms-integrations/releases/download/cng-v1.0/kmscng-1.0-windows-amd64.zip" -OutFile "${env:RUNNER_TEMP}\plugin.zip"
|
|
||||||
Expand-Archive -Path "${env:RUNNER_TEMP}\plugin.zip" -DestinationPath ${env:RUNNER_TEMP}\plugin\
|
|
||||||
write-host "Installing plugin"
|
|
||||||
& "${env:RUNNER_TEMP}\plugin\*\kmscng.msi" /quiet
|
|
||||||
write-host "plugin installed"
|
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- name: 'Install ROCm'
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading AMD HIP Installer"
|
|
||||||
Invoke-WebRequest -Uri "https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q3-WinSvr2022-For-HIP.exe" -OutFile "${env:RUNNER_TEMP}\rocm-install.exe"
|
|
||||||
write-host "Installing AMD HIP"
|
|
||||||
Start-Process "${env:RUNNER_TEMP}\rocm-install.exe" -ArgumentList '-install' -NoNewWindow -Wait
|
|
||||||
write-host "Completed AMD HIP"
|
|
||||||
- name: 'Verify ROCm'
|
|
||||||
run: |
|
|
||||||
& 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' --version
|
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$env:PATH"
|
|
||||||
$env:OLLAMA_SKIP_CPU_GENERATE="1"
|
|
||||||
$env:HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
|
|
||||||
go generate -x ./...
|
|
||||||
name: go generate
|
|
||||||
- name: 'gather rocm dependencies'
|
|
||||||
run: |
|
|
||||||
$HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
|
|
||||||
md "dist\deps\bin\rocblas\library"
|
|
||||||
cp "${HIP_PATH}\bin\hipblas.dll" "dist\deps\bin\"
|
|
||||||
cp "${HIP_PATH}\bin\rocblas.dll" "dist\deps\bin\"
|
|
||||||
cp "${HIP_PATH}\bin\rocblas\library\*" "dist\deps\bin\rocblas\library\"
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-rocm
|
|
||||||
path: |
|
|
||||||
llm/build/**/bin/*
|
|
||||||
dist/windows-amd64/**
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: windows-rocm-deps
|
|
||||||
path: dist/deps/*
|
|
||||||
|
|
||||||
# CUDA generation step
|
|
||||||
generate-windows-cuda:
|
|
||||||
environment: release
|
|
||||||
runs-on: windows
|
|
||||||
strategy:
|
strategy:
|
||||||
matrix:
|
matrix:
|
||||||
cuda:
|
os: [darwin]
|
||||||
- version: "11"
|
arch: [amd64, arm64]
|
||||||
url: 'https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe'
|
|
||||||
- version: "12"
|
|
||||||
url: 'https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda_12.4.0_551.61_windows.exe'
|
|
||||||
env:
|
env:
|
||||||
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
|
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
- name: Set Version
|
|
||||||
shell: bash
|
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
|
||||||
- uses: 'google-github-actions/auth@v2'
|
|
||||||
with:
|
|
||||||
project_id: 'ollama'
|
|
||||||
credentials_json: '${{ secrets.GOOGLE_SIGNING_CREDENTIALS }}'
|
|
||||||
- run: echo "${{ vars.OLLAMA_CERT }}" > ollama_inc.crt
|
|
||||||
- name: install Windows SDK 8.1 to get signtool
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading SDK"
|
|
||||||
Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=323507" -OutFile "${env:RUNNER_TEMP}\sdksetup.exe"
|
|
||||||
Start-Process "${env:RUNNER_TEMP}\sdksetup.exe" -ArgumentList @("/q") -NoNewWindow -Wait
|
|
||||||
write-host "Win SDK 8.1 installed"
|
|
||||||
gci -path 'C:\Program Files (x86)\Windows Kits\' -r -fi 'signtool.exe'
|
|
||||||
- name: install signing plugin
|
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading plugin"
|
|
||||||
Invoke-WebRequest -Uri "https://github.com/GoogleCloudPlatform/kms-integrations/releases/download/cng-v1.0/kmscng-1.0-windows-amd64.zip" -OutFile "${env:RUNNER_TEMP}\plugin.zip"
|
|
||||||
Expand-Archive -Path "${env:RUNNER_TEMP}\plugin.zip" -DestinationPath ${env:RUNNER_TEMP}\plugin\
|
|
||||||
write-host "Installing plugin"
|
|
||||||
& "${env:RUNNER_TEMP}\plugin\*\kmscng.msi" /quiet
|
|
||||||
write-host "plugin installed"
|
|
||||||
- uses: actions/setup-go@v5
|
- uses: actions/setup-go@v5
|
||||||
with:
|
with:
|
||||||
go-version-file: go.mod
|
go-version-file: go.mod
|
||||||
cache: true
|
- run: |
|
||||||
- name: 'Install CUDA ${{ matrix.cuda.version }}'
|
go build -o dist/ .
|
||||||
run: |
|
|
||||||
$ErrorActionPreference = "Stop"
|
|
||||||
write-host "downloading CUDA Installer"
|
|
||||||
Invoke-WebRequest -Uri "${{ matrix.cuda.url }}" -OutFile "${env:RUNNER_TEMP}\cuda-install.exe"
|
|
||||||
write-host "Installing CUDA"
|
|
||||||
Start-Process "${env:RUNNER_TEMP}\cuda-install.exe" -ArgumentList '-s' -NoNewWindow -Wait
|
|
||||||
write-host "Completed CUDA"
|
|
||||||
$cudaPath=((resolve-path "c:\Program Files\NVIDIA*\CUDA\v*\bin\nvcc.exe")[0].path | split-path | split-path)
|
|
||||||
$cudaVer=($cudaPath | split-path -leaf ) -replace 'v(\d+).(\d+)', '$1_$2'
|
|
||||||
echo "$cudaPath\bin" >> $env:GITHUB_PATH
|
|
||||||
echo "CUDA_PATH=$cudaPath" >> $env:GITHUB_ENV
|
|
||||||
echo "CUDA_PATH_V${cudaVer}=$cudaPath" >> $env:GITHUB_ENV
|
|
||||||
echo "CUDA_PATH_VX_Y=CUDA_PATH_V${cudaVer}" >> $env:GITHUB_ENV
|
|
||||||
- name: 'Verify CUDA'
|
|
||||||
run: nvcc -V
|
|
||||||
- run: go get ./...
|
|
||||||
- name: go generate
|
|
||||||
run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
$cudabin=(get-command nvcc).source | split-path
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$cudabin;$env:PATH"
|
|
||||||
$env:OLLAMA_SKIP_CPU_GENERATE="1"
|
|
||||||
go generate -x ./...
|
|
||||||
- name: 'gather cuda dependencies'
|
|
||||||
run: |
|
|
||||||
$NVIDIA_DIR=(resolve-path 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\*\bin\')[0]
|
|
||||||
md "dist\deps"
|
|
||||||
cp "${NVIDIA_DIR}\cudart64_*.dll" "dist\deps\"
|
|
||||||
cp "${NVIDIA_DIR}\cublas64_*.dll" "dist\deps\"
|
|
||||||
cp "${NVIDIA_DIR}\cublasLt64_*.dll" "dist\deps\"
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-cuda-${{ matrix.cuda.version }}
|
|
||||||
path: |
|
|
||||||
llm/build/**/bin/*
|
|
||||||
dist/windows-amd64/**
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: windows-cuda-deps-${{ matrix.cuda.version }}
|
|
||||||
path: dist/deps/*
|
|
||||||
|
|
||||||
|
|
||||||
# Import the prior generation steps and build the final windows assets
|
|
||||||
build-windows:
|
|
||||||
environment: release
|
|
||||||
runs-on: windows
|
|
||||||
needs:
|
|
||||||
- generate-windows-cuda
|
|
||||||
- generate-windows-rocm
|
|
||||||
- generate-windows-cpu
|
|
||||||
env:
|
env:
|
||||||
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
|
GOOS: ${{ matrix.os }}
|
||||||
|
GOARCH: ${{ matrix.arch }}
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
CGO_CPPFLAGS: '-mmacosx-version-min=11.3'
|
||||||
|
- if: matrix.arch == 'amd64'
|
||||||
|
run: |
|
||||||
|
cmake --preset CPU -DCMAKE_OSX_DEPLOYMENT_TARGET=11.3 -DCMAKE_SYSTEM_PROCESSOR=x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64
|
||||||
|
cmake --build --parallel --preset CPU
|
||||||
|
cmake --install build --component CPU --strip --parallel 8
|
||||||
|
- uses: actions/upload-artifact@v4
|
||||||
|
with:
|
||||||
|
name: build-${{ matrix.os }}-${{ matrix.arch }}
|
||||||
|
path: dist/*
|
||||||
|
|
||||||
|
windows-depends:
|
||||||
|
strategy:
|
||||||
|
matrix:
|
||||||
|
os: [windows]
|
||||||
|
arch: [amd64]
|
||||||
|
preset: ['CPU']
|
||||||
|
include:
|
||||||
|
- os: windows
|
||||||
|
arch: amd64
|
||||||
|
preset: 'CUDA 12'
|
||||||
|
install: https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_571.96_windows.exe
|
||||||
|
cuda-components:
|
||||||
|
- '"cudart"'
|
||||||
|
- '"nvcc"'
|
||||||
|
- '"cublas"'
|
||||||
|
- '"cublas_dev"'
|
||||||
|
cuda-version: '12.8'
|
||||||
|
flags: ''
|
||||||
|
runner_dir: 'cuda_v12'
|
||||||
|
- os: windows
|
||||||
|
arch: amd64
|
||||||
|
preset: 'CUDA 13'
|
||||||
|
install: https://developer.download.nvidia.com/compute/cuda/13.0.0/local_installers/cuda_13.0.0_windows.exe
|
||||||
|
cuda-components:
|
||||||
|
- '"cudart"'
|
||||||
|
- '"nvcc"'
|
||||||
|
- '"cublas"'
|
||||||
|
- '"cublas_dev"'
|
||||||
|
- '"crt"'
|
||||||
|
- '"nvvm"'
|
||||||
|
- '"nvptxcompiler"'
|
||||||
|
cuda-version: '13.0'
|
||||||
|
flags: ''
|
||||||
|
runner_dir: 'cuda_v13'
|
||||||
|
- os: windows
|
||||||
|
arch: amd64
|
||||||
|
preset: 'ROCm 6'
|
||||||
|
install: https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q4-WinSvr2022-For-HIP.exe
|
||||||
|
rocm-version: '6.2'
|
||||||
|
flags: '-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma" -DCMAKE_CXX_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma"'
|
||||||
|
runner_dir: 'rocm'
|
||||||
|
runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
|
||||||
|
environment: release
|
||||||
|
env:
|
||||||
|
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
|
||||||
steps:
|
steps:
|
||||||
|
- name: Install system dependencies
|
||||||
|
run: |
|
||||||
|
choco install -y --no-progress ccache ninja
|
||||||
|
ccache -o cache_dir=${{ github.workspace }}\.ccache
|
||||||
|
- if: startsWith(matrix.preset, 'CUDA ') || startsWith(matrix.preset, 'ROCm ')
|
||||||
|
id: cache-install
|
||||||
|
uses: actions/cache/restore@v4
|
||||||
|
with:
|
||||||
|
path: |
|
||||||
|
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
|
||||||
|
C:\Program Files\AMD\ROCm
|
||||||
|
key: ${{ matrix.install }}
|
||||||
|
- if: startsWith(matrix.preset, 'CUDA ')
|
||||||
|
name: Install CUDA ${{ matrix.cuda-version }}
|
||||||
|
run: |
|
||||||
|
$ErrorActionPreference = "Stop"
|
||||||
|
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
|
||||||
|
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
|
||||||
|
$subpackages = @(${{ join(matrix.cuda-components, ', ') }}) | Foreach-Object {"${_}_${{ matrix.cuda-version }}"}
|
||||||
|
Start-Process -FilePath .\install.exe -ArgumentList (@("-s") + $subpackages) -NoNewWindow -Wait
|
||||||
|
}
|
||||||
|
|
||||||
|
$cudaPath = (Resolve-Path "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\*").path
|
||||||
|
echo "$cudaPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
|
- if: startsWith(matrix.preset, 'ROCm')
|
||||||
|
name: Install ROCm ${{ matrix.rocm-version }}
|
||||||
|
run: |
|
||||||
|
$ErrorActionPreference = "Stop"
|
||||||
|
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
|
||||||
|
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
|
||||||
|
Start-Process -FilePath .\install.exe -ArgumentList '-install' -NoNewWindow -Wait
|
||||||
|
}
|
||||||
|
|
||||||
|
$hipPath = (Resolve-Path "C:\Program Files\AMD\ROCm\*").path
|
||||||
|
echo "$hipPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
|
echo "CC=$hipPath\bin\clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
echo "CXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
echo "HIPCXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
echo "HIP_PLATFORM=amd" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
echo "CMAKE_PREFIX_PATH=$hipPath" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
- if: matrix.preset == 'CPU'
|
||||||
|
run: |
|
||||||
|
echo "CC=clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
echo "CXX=clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
|
- if: ${{ !cancelled() && steps.cache-install.outputs.cache-hit != 'true' }}
|
||||||
|
uses: actions/cache/save@v4
|
||||||
|
with:
|
||||||
|
path: |
|
||||||
|
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
|
||||||
|
C:\Program Files\AMD\ROCm
|
||||||
|
key: ${{ matrix.install }}
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
|
- uses: actions/cache@v4
|
||||||
with:
|
with:
|
||||||
submodules: recursive
|
path: ${{ github.workspace }}\.ccache
|
||||||
- name: Set Version
|
key: ccache-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.preset }}
|
||||||
shell: bash
|
- name: Build target "${{ matrix.preset }}"
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
run: |
|
||||||
- uses: 'google-github-actions/auth@v2'
|
Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
|
||||||
|
Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
|
||||||
|
cmake --preset "${{ matrix.preset }}" ${{ matrix.flags }} -DOLLAMA_RUNNER_DIR="${{ matrix.runner_dir }}"
|
||||||
|
cmake --build --parallel --preset "${{ matrix.preset }}"
|
||||||
|
cmake --install build --component "${{ startsWith(matrix.preset, 'CUDA ') && 'CUDA' || startsWith(matrix.preset, 'ROCm ') && 'HIP' || 'CPU' }}" --strip --parallel 8
|
||||||
|
Remove-Item -Path dist\lib\ollama\rocm\rocblas\library\*gfx906* -ErrorAction SilentlyContinue
|
||||||
|
env:
|
||||||
|
CMAKE_GENERATOR: Ninja
|
||||||
|
- uses: actions/upload-artifact@v4
|
||||||
with:
|
with:
|
||||||
project_id: 'ollama'
|
name: depends-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.preset }}
|
||||||
credentials_json: '${{ secrets.GOOGLE_SIGNING_CREDENTIALS }}'
|
path: dist\*
|
||||||
- run: echo "${{ vars.OLLAMA_CERT }}" > ollama_inc.crt
|
|
||||||
- name: install Windows SDK 8.1 to get signtool
|
windows-build:
|
||||||
|
strategy:
|
||||||
|
matrix:
|
||||||
|
os: [windows]
|
||||||
|
arch: [amd64, arm64]
|
||||||
|
include:
|
||||||
|
- os: windows
|
||||||
|
arch: amd64
|
||||||
|
llvmarch: x86_64
|
||||||
|
- os: windows
|
||||||
|
arch: arm64
|
||||||
|
llvmarch: aarch64
|
||||||
|
runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
|
||||||
|
environment: release
|
||||||
|
needs: [setup-environment]
|
||||||
|
env:
|
||||||
|
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
|
||||||
|
steps:
|
||||||
|
- name: Install ARM64 system dependencies
|
||||||
|
if: matrix.arch == 'arm64'
|
||||||
run: |
|
run: |
|
||||||
$ErrorActionPreference = "Stop"
|
$ErrorActionPreference = "Stop"
|
||||||
write-host "downloading SDK"
|
Set-ExecutionPolicy Bypass -Scope Process -Force
|
||||||
Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=323507" -OutFile "${env:RUNNER_TEMP}\sdksetup.exe"
|
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
|
||||||
Start-Process "${env:RUNNER_TEMP}\sdksetup.exe" -ArgumentList @("/q") -NoNewWindow -Wait
|
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
|
||||||
write-host "Win SDK 8.1 installed"
|
echo "C:\ProgramData\chocolatey\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
gci -path 'C:\Program Files (x86)\Windows Kits\' -r -fi 'signtool.exe'
|
|
||||||
- name: install signing plugin
|
choco install -y --no-progress git gzip
|
||||||
|
echo "C:\Program Files\Git\cmd" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
|
- name: Install clang and gcc-compat
|
||||||
run: |
|
run: |
|
||||||
$ErrorActionPreference = "Stop"
|
$ErrorActionPreference = "Stop"
|
||||||
write-host "downloading plugin"
|
Set-ExecutionPolicy Bypass -Scope Process -Force
|
||||||
Invoke-WebRequest -Uri "https://github.com/GoogleCloudPlatform/kms-integrations/releases/download/cng-v1.0/kmscng-1.0-windows-amd64.zip" -OutFile "${env:RUNNER_TEMP}\plugin.zip"
|
Invoke-WebRequest -Uri "https://github.com/mstorsjo/llvm-mingw/releases/download/20240619/llvm-mingw-20240619-ucrt-${{ matrix.llvmarch }}.zip" -OutFile "${{ runner.temp }}\llvm-mingw-ucrt.zip"
|
||||||
Expand-Archive -Path "${env:RUNNER_TEMP}\plugin.zip" -DestinationPath ${env:RUNNER_TEMP}\plugin\
|
Expand-Archive -Path ${{ runner.temp }}\llvm-mingw-ucrt.zip -DestinationPath "C:\Program Files\"
|
||||||
write-host "Installing plugin"
|
$installPath=(Resolve-Path -Path "C:\Program Files\llvm-mingw-*-ucrt*").path
|
||||||
& "${env:RUNNER_TEMP}\plugin\*\kmscng.msi" /quiet
|
echo "$installPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
write-host "plugin installed"
|
- uses: actions/checkout@v4
|
||||||
- uses: actions/setup-go@v5
|
- uses: actions/setup-go@v5
|
||||||
with:
|
with:
|
||||||
go-version-file: go.mod
|
go-version-file: go.mod
|
||||||
cache: true
|
- name: Verify gcc is actually clang
|
||||||
- run: go get
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-cpu
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-cuda-11
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-cuda-12
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: windows-cuda-deps-11
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: windows-cuda-deps-12
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: windows-rocm-deps
|
|
||||||
- uses: actions/download-artifact@v4
|
|
||||||
with:
|
|
||||||
name: generate-windows-rocm
|
|
||||||
- run: dir llm/build
|
|
||||||
- run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$env:PATH"
|
|
||||||
$env:OLLAMA_SKIP_GENERATE="1"
|
|
||||||
& .\scripts\build_windows.ps1
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: dist-windows
|
|
||||||
path: |
|
|
||||||
dist/OllamaSetup.exe
|
|
||||||
dist/ollama-windows-*.zip
|
|
||||||
|
|
||||||
# Linux x86 assets built using the container based build
|
|
||||||
build-linux-amd64:
|
|
||||||
environment: release
|
|
||||||
runs-on: linux
|
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_MANIFEST_CREATE: '1'
|
|
||||||
BUILD_ARCH: amd64
|
|
||||||
PUSH: '1'
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
with:
|
|
||||||
submodules: recursive
|
|
||||||
- name: Set Version
|
|
||||||
shell: bash
|
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
|
||||||
- name: Login to Docker Hub
|
|
||||||
uses: docker/login-action@v3
|
|
||||||
with:
|
|
||||||
username: ${{ vars.DOCKER_USER }}
|
|
||||||
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
|
|
||||||
- run: |
|
|
||||||
./scripts/build_linux.sh
|
|
||||||
./scripts/build_docker.sh
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: dist-linux-amd64
|
|
||||||
path: |
|
|
||||||
dist/*linux*
|
|
||||||
!dist/*-cov
|
|
||||||
|
|
||||||
# Linux ARM assets built using the container based build
|
|
||||||
# (at present, docker isn't pre-installed on arm ubunutu images)
|
|
||||||
build-linux-arm64:
|
|
||||||
environment: release
|
|
||||||
runs-on: linux-arm64
|
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_MANIFEST_CREATE: '1'
|
|
||||||
BUILD_ARCH: arm64
|
|
||||||
PUSH: '1'
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
with:
|
|
||||||
submodules: recursive
|
|
||||||
- name: Set Version
|
|
||||||
shell: bash
|
|
||||||
run: echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
|
|
||||||
- name: 'Install Docker'
|
|
        run: |
-          # Add Docker's official GPG key:
+          $ErrorActionPreference='Continue'
-          env
+          $version=& gcc -v 2>&1
-          uname -a
+          $version=$version -join "`n"
-          sudo apt-get update
+          echo "gcc is $version"
-          sudo apt-get install -y ca-certificates curl
+          if ($version -notmatch 'clang') {
-          sudo install -m 0755 -d /etc/apt/keyrings
+            echo "ERROR: GCC must be clang for proper utf16 handling"
-          sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
+            exit 1
-          sudo chmod a+r /etc/apt/keyrings/docker.asc
+          }
+          $ErrorActionPreference='Stop'
+      - run: |
+          go build -o dist/${{ matrix.os }}-${{ matrix.arch }}/ .
+      - uses: actions/upload-artifact@v4
+        with:
+          name: build-${{ matrix.os }}-${{ matrix.arch }}
+          path: |
+            dist\${{ matrix.os }}-${{ matrix.arch }}\*.exe

-          # Add the repository to Apt sources:
+  linux-build:
-          echo \
+    strategy:
-            "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
+      matrix:
-            $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
+        include:
-            sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          - os: linux
-          sudo apt-get update
+            arch: amd64
-          sudo apt-get install -y docker-ce docker-ce-cli containerd.io
+            target: archive_novulkan
-          sudo usermod -aG docker $USER
+          - os: linux
-          sudo apt-get install acl
+            arch: amd64
-          sudo setfacl --modify user:$USER:rw /var/run/docker.sock
+            target: rocm
-      - name: Login to Docker Hub
+          - os: linux
-        uses: docker/login-action@v3
+            arch: arm64
+            target: archive_novulkan
+    runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
+    environment: release
+    needs: setup-environment
+    env:
+      GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
+    steps:
+      - uses: actions/checkout@v4
+      - uses: docker/setup-buildx-action@v3
+      - uses: docker/build-push-action@v6
+        with:
+          context: .
+          platforms: ${{ matrix.os }}/${{ matrix.arch }}
+          target: ${{ matrix.target }}
+          build-args: |
+            GOFLAGS=${{ env.GOFLAGS }}
+            CGO_CFLAGS=${{ env.CGO_CFLAGS }}
+            CGO_CXXFLAGS=${{ env.CGO_CXXFLAGS }}
+          outputs: type=local,dest=dist/${{ matrix.os }}-${{ matrix.arch }}
+          cache-from: type=registry,ref=${{ vars.DOCKER_REPO }}:latest
+          cache-to: type=inline
+      - run: |
+          for COMPONENT in bin/* lib/ollama/*; do
+            case "$COMPONENT" in
+              bin/ollama) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
+              lib/ollama/*.so*) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
+              lib/ollama/cuda_v*) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
+              lib/ollama/cuda_jetpack5) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack5.tar.in ;;
+              lib/ollama/cuda_jetpack6) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack6.tar.in ;;
+              lib/ollama/rocm) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-rocm.tar.in ;;
+            esac
+          done
+        working-directory: dist/${{ matrix.os }}-${{ matrix.arch }}
+      - run: |
+          echo "Manifests"
+          for ARCHIVE in dist/${{ matrix.os }}-${{ matrix.arch }}/*.tar.in ; do
+            echo $ARCHIVE
+            cat $ARCHIVE
+          done
+      - run: |
+          for ARCHIVE in dist/${{ matrix.os }}-${{ matrix.arch }}/*.tar.in; do
+            tar c -C dist/${{ matrix.os }}-${{ matrix.arch }} -T $ARCHIVE --owner 0 --group 0 | pigz -9vc >$(basename ${ARCHIVE//.*/}.tgz);
+          done
+      - uses: actions/upload-artifact@v4
+        with:
+          name: dist-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.target }}
+          path: |
+            *.tgz

+  # Build each Docker variant (OS, arch, and flavor) separately. Using QEMU is unreliable and slower.
+  docker-build-push:
+    strategy:
+      matrix:
+        include:
+          - os: linux
+            arch: arm64
+            target: novulkan
+            build-args: |
+              CGO_CFLAGS
+              CGO_CXXFLAGS
+              GOFLAGS
+          - os: linux
+            arch: amd64
+            target: novulkan
+            build-args: |
+              CGO_CFLAGS
+              CGO_CXXFLAGS
+              GOFLAGS
+          - os: linux
+            arch: amd64
+            suffix: '-rocm'
+            build-args: |
+              CGO_CFLAGS
+              CGO_CXXFLAGS
+              GOFLAGS
+              FLAVOR=rocm
+          - os: linux
+            arch: amd64
+            suffix: '-vulkan'
+            target: default
+            build-args: |
+              CGO_CFLAGS
+              CGO_CXXFLAGS
+              GOFLAGS
+    runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
+    environment: release
+    needs: setup-environment
+    env:
+      GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
+    steps:
+      - uses: actions/checkout@v4
+      - uses: docker/setup-buildx-action@v3
+      - uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKER_USER }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+      - id: build-push
+        uses: docker/build-push-action@v6
+        with:
+          context: .
+          platforms: ${{ matrix.os }}/${{ matrix.arch }}
+          target: ${{ matrix.target }}
+          build-args: ${{ matrix.build-args }}
+          outputs: type=image,name=${{ vars.DOCKER_REPO }},push-by-digest=true,name-canonical=true,push=true
+          cache-from: type=registry,ref=${{ vars.DOCKER_REPO }}:latest
+          cache-to: type=inline
      - run: |
-          ./scripts/build_linux.sh
+          mkdir -p ${{ matrix.os }}-${{ matrix.arch }}
-          ./scripts/build_docker.sh
+          echo "${{ steps.build-push.outputs.digest }}" >${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.suffix }}.txt
+        working-directory: ${{ runner.temp }}
      - uses: actions/upload-artifact@v4
        with:
-          name: dist-linux-arm64
+          name: digest-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.suffix }}
          path: |
-            dist/*linux*
+            ${{ runner.temp }}/${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.suffix }}.txt
-            !dist/*-cov

-  # Aggregate all the assets and ship a release
+  # Merge Docker images for the same flavor into a single multi-arch manifest
-  release:
+  docker-merge-push:
-    needs:
+    strategy:
-      - build-darwin
+      matrix:
-      - build-windows
+        suffix: ['', '-rocm']
-      - build-linux-amd64
-      - build-linux-arm64
    runs-on: linux
    environment: release
+    needs: [docker-build-push]
+    steps:
+      - uses: docker/login-action@v3
+        with:
+          username: ${{ vars.DOCKER_USER }}
+          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+      - id: metadata
+        uses: docker/metadata-action@v4
+        with:
+          flavor: |
+            latest=false
+            suffix=${{ matrix.suffix }}
+          images: |
+            ${{ vars.DOCKER_REPO }}
+          tags: |
+            type=ref,enable=true,priority=600,prefix=pr-,event=pr
+            type=semver,pattern={{version}}
+      - uses: actions/download-artifact@v4
+        with:
+          pattern: digest-*
+          path: ${{ runner.temp }}
+          merge-multiple: true
+      - run: |
+          docker buildx imagetools create $(echo '${{ steps.metadata.outputs.json }}' | jq -cr '.tags | map("-t", .) | join(" ")') $(cat *-${{ matrix.suffix }}.txt | xargs printf '${{ vars.DOCKER_REPO }}@%s ')
+          docker buildx imagetools inspect ${{ vars.DOCKER_REPO }}:${{ steps.metadata.outputs.version }}
+        working-directory: ${{ runner.temp }}

+  # Trigger downstream release process
+  trigger:
+    runs-on: ubuntu-latest
+    environment: release
+    needs: [darwin-build, windows-build, windows-depends, linux-build]
    permissions:
      contents: write
    env:
-      OLLAMA_SKIP_IMAGE_BUILD: '1'
-      PUSH: '1'
      GH_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v4
-      - name: Set Version
+      - name: Create or update Release for tag
-        shell: bash
        run: |
-          echo "VERSION=${GITHUB_REF_NAME#v}" >> $GITHUB_ENV
+          RELEASE_VERSION="$(echo ${GITHUB_REF_NAME} | cut -f1 -d-)"
-          echo "RELEASE_VERSION=$(echo ${GITHUB_REF_NAME} | cut -f1 -d-)" >> $GITHUB_ENV
+          echo "Looking for existing release for ${RELEASE_VERSION}"
-      - name: Login to Docker Hub
+          OLD_TAG=$(gh release ls --json name,tagName | jq -r ".[] | select(.name == \"${RELEASE_VERSION}\") | .tagName")
-        uses: docker/login-action@v3
-        with:
-          username: ${{ vars.DOCKER_USER }}
-          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
-      - run: ./scripts/build_docker.sh
-      - name: Retrieve built artifact
-        uses: actions/download-artifact@v4
-        with:
-          path: dist
-          pattern: dist-*
-          merge-multiple: true
-      - run: |
-          ls -lh dist/
-          (cd dist; find . -type f | xargs sha256sum > ../sha256sum.txt)
-          mv sha256sum.txt dist/
-          mv dist/linux-???64 .
-          mv dist/linux-amd64-rocm .
-          cat dist/sha256sum.txt
-      - name: Create or update Release
-        run: |
-          echo "Looking for existing release for ${{ env.RELEASE_VERSION }}"
-          OLD_TAG=$(gh release ls --json name,tagName | jq -r ".[] | select(.name == \"${{ env.RELEASE_VERSION }}\") | .tagName")
          if [ -n "$OLD_TAG" ]; then
-            echo "Updating release ${{ env.RELEASE_VERSION }} to point to new tag ${GITHUB_REF_NAME}"
+            echo "Updating release ${RELEASE_VERSION} to point to new tag ${GITHUB_REF_NAME}"
            gh release edit ${OLD_TAG} --tag ${GITHUB_REF_NAME}
          else
-            echo "Creating new release ${{ env.RELEASE_VERSION }} pointing to tag ${GITHUB_REF_NAME}"
+            echo "Creating new release ${RELEASE_VERSION} pointing to tag ${GITHUB_REF_NAME}"
            gh release create ${GITHUB_REF_NAME} \
-              --title ${{ env.RELEASE_VERSION }} \
+              --title ${RELEASE_VERSION} \
              --draft \
              --generate-notes \
              --prerelease
          fi
-          echo "Uploading artifacts for tag ${GITHUB_REF_NAME}"
+      - name: Trigger downstream release process
-          gh release upload ${GITHUB_REF_NAME} dist/* --clobber
+        run: |
+          curl -L \
+            -X POST \
+            -H "Accept: application/vnd.github+json" \
+            -H "Authorization: Bearer ${{ secrets.RELEASE_TOKEN }}" \
+            -H "X-GitHub-Api-Version: 2022-11-28" \
+            https://api.github.com/repos/ollama/${{ vars.RELEASE_REPO }}/dispatches \
+            -d "{\"event_type\": \"trigger-workflow\", \"client_payload\": {\"run_id\": \"${GITHUB_RUN_ID}\", \"version\": \"${GITHUB_REF_NAME#v}\", \"origin\": \"${GITHUB_REPOSITORY}\", \"publish\": \"1\"}}"
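For context, the `docker buildx imagetools create` line in the docker-merge-push job above is what folds the per-architecture digests into a single multi-arch tag. A minimal sketch of the same operation follows; the repository name and digest values are placeholders for illustration, not values taken from this workflow:

# Hypothetical example: merge two per-arch image digests into one multi-arch tag.
REPO=example/ollama                  # placeholder repository
AMD64_DIGEST=sha256:aaaa...          # placeholder digest from the amd64 build-push job
ARM64_DIGEST=sha256:bbbb...          # placeholder digest from the arm64 build-push job

docker buildx imagetools create \
  -t $REPO:0.5.0 \
  $REPO@$AMD64_DIGEST \
  $REPO@$ARM64_DIGEST

# Inspect the resulting manifest list to confirm both platforms are present.
docker buildx imagetools inspect $REPO:0.5.0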
474  .github/workflows/test.yaml  vendored
@@ -21,9 +21,7 @@ jobs:
|
|||||||
changes:
|
changes:
|
||||||
runs-on: ubuntu-latest
|
runs-on: ubuntu-latest
|
||||||
outputs:
|
outputs:
|
||||||
GENERATE: ${{ steps.changes.outputs.GENERATE }}
|
changed: ${{ steps.changes.outputs.changed }}
|
||||||
GENERATE_CUDA: ${{ steps.changes.outputs.GENERATE_CUDA }}
|
|
||||||
GENERATE_ROCM: ${{ steps.changes.outputs.GENERATE_ROCM }}
|
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
with:
|
with:
|
||||||
@@ -31,293 +29,259 @@ jobs:
|
|||||||
- id: changes
|
- id: changes
|
||||||
run: |
|
run: |
|
||||||
changed() {
|
changed() {
|
||||||
git diff-tree -r --no-commit-id --name-only \
|
local BASE=${{ github.event.pull_request.base.sha }}
|
||||||
$(git merge-base ${{ github.event.pull_request.base.sha }} ${{ github.event.pull_request.head.sha }}) \
|
local HEAD=${{ github.event.pull_request.head.sha }}
|
||||||
${{ github.event.pull_request.head.sha }} \
|
local MERGE_BASE=$(git merge-base $BASE $HEAD)
|
||||||
|
git diff-tree -r --no-commit-id --name-only "$MERGE_BASE" "$HEAD" \
|
||||||
| xargs python3 -c "import sys; from pathlib import Path; print(any(Path(x).match(glob) for x in sys.argv[1:] for glob in '$*'.split(' ')))"
|
| xargs python3 -c "import sys; from pathlib import Path; print(any(Path(x).match(glob) for x in sys.argv[1:] for glob in '$*'.split(' ')))"
|
||||||
}
|
}
|
||||||
|
|
||||||
{
|
echo changed=$(changed 'llama/llama.cpp/**/*' 'ml/backend/ggml/ggml/**/*') | tee -a $GITHUB_OUTPUT
|
||||||
echo GENERATE=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
|
|
||||||
echo GENERATE_CUDA=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
|
|
||||||
echo GENERATE_ROCM=$(changed 'llm/llama.cpp' 'llm/patches/**' 'llm/ext_server/**' 'llm/generate/**')
|
|
||||||
} >>$GITHUB_OUTPUT
|
|
||||||
|
|
||||||
generate:
|
linux:
|
||||||
needs: [changes]
|
needs: [changes]
|
||||||
if: ${{ needs.changes.outputs.GENERATE == 'True' }}
|
if: needs.changes.outputs.changed == 'True'
|
||||||
strategy:
|
strategy:
|
||||||
matrix:
|
matrix:
|
||||||
os: [ubuntu-latest, macos-latest, windows-2019]
|
include:
|
||||||
arch: [amd64, arm64]
|
- preset: CPU
|
||||||
exclude:
|
- preset: CUDA
|
||||||
- os: ubuntu-latest
|
container: nvidia/cuda:13.0.0-devel-ubuntu22.04
|
||||||
arch: arm64
|
flags: '-DCMAKE_CUDA_ARCHITECTURES=87'
|
||||||
- os: windows-2019
|
- preset: ROCm
|
||||||
arch: arm64
|
container: rocm/dev-ubuntu-22.04:6.1.2
|
||||||
runs-on: ${{ matrix.os }}
|
extra-packages: rocm-libs
|
||||||
env:
|
flags: '-DAMDGPU_TARGETS=gfx1010 -DCMAKE_PREFIX_PATH=/opt/rocm'
|
||||||
GOARCH: ${{ matrix.arch }}
|
- preset: Vulkan
|
||||||
CGO_ENABLED: '1'
|
container: ubuntu:22.04
|
||||||
|
extra-packages: >
|
||||||
|
mesa-vulkan-drivers vulkan-tools
|
||||||
|
libvulkan1 libvulkan-dev
|
||||||
|
vulkan-sdk cmake ccache g++ make
|
||||||
|
runs-on: linux
|
||||||
|
container: ${{ matrix.container }}
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
- run: |
|
||||||
$gopath=(get-command go).source | split-path -parent
|
[ -n "${{ matrix.container }}" ] || sudo=sudo
|
||||||
$gccpath=(get-command gcc).source | split-path -parent
|
$sudo apt-get update
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
# Add LunarG Vulkan SDK apt repo for Ubuntu 22.04
|
||||||
cd $env:GITHUB_WORKSPACE
|
if [ "${{ matrix.preset }}" = "Vulkan" ]; then
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
$sudo apt-get install -y --no-install-recommends wget gnupg ca-certificates software-properties-common
|
||||||
$env:PATH="$gopath;$gccpath;$env:PATH"
|
wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | $sudo gpg --dearmor -o /usr/share/keyrings/lunarg-archive-keyring.gpg
|
||||||
echo $env:PATH
|
# Use signed-by to bind the repo to the installed keyring to avoid NO_PUBKEY
|
||||||
go generate -x ./...
|
echo "deb [signed-by=/usr/share/keyrings/lunarg-archive-keyring.gpg] https://packages.lunarg.com/vulkan/1.4.313 jammy main" | $sudo tee /etc/apt/sources.list.d/lunarg-vulkan-1.4.313-jammy.list > /dev/null
|
||||||
if: ${{ startsWith(matrix.os, 'windows-') }}
|
$sudo apt-get update
|
||||||
name: 'Windows Go Generate'
|
fi
|
||||||
- run: go generate -x ./...
|
$sudo apt-get install -y cmake ccache ${{ matrix.extra-packages }}
|
||||||
if: ${{ ! startsWith(matrix.os, 'windows-') }}
|
# Export VULKAN_SDK if provided by LunarG package (defensive)
|
||||||
name: 'Unix Go Generate'
|
if [ -d "/usr/lib/x86_64-linux-gnu/vulkan" ] && [ "${{ matrix.preset }}" = "Vulkan" ]; then
|
||||||
- run: go build .
|
echo "VULKAN_SDK=/usr" >> $GITHUB_ENV
|
||||||
- uses: actions/upload-artifact@v4
|
fi
|
||||||
with:
|
|
||||||
name: ${{ matrix.os }}-${{ matrix.arch }}-libraries
|
|
||||||
path: |
|
|
||||||
llm/build/**/bin/*
|
|
||||||
llm/build/**/*.a
|
|
||||||
generate-cuda:
|
|
||||||
needs: [changes]
|
|
||||||
if: ${{ needs.changes.outputs.GENERATE_CUDA == 'True' }}
|
|
||||||
strategy:
|
|
||||||
matrix:
|
|
||||||
cuda-version:
|
|
||||||
- '11.8.0'
|
|
||||||
runs-on: linux
|
|
||||||
container: nvidia/cuda:${{ matrix.cuda-version }}-devel-ubuntu20.04
|
|
||||||
steps:
|
|
||||||
- run: |
|
|
||||||
apt-get update && apt-get install -y git build-essential curl
|
|
||||||
curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.28.1/cmake-3.28.1-linux-x86_64.tar.gz \
|
|
||||||
| tar -zx -C /usr --strip-components 1
|
|
||||||
env:
|
env:
|
||||||
DEBIAN_FRONTEND: noninteractive
|
DEBIAN_FRONTEND: noninteractive
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/cache@v4
|
||||||
- uses: actions/setup-go@v4
|
|
||||||
with:
|
with:
|
||||||
go-version-file: go.mod
|
path: /github/home/.cache/ccache
|
||||||
cache: true
|
key: ccache-${{ runner.os }}-${{ runner.arch }}-${{ matrix.preset }}
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
- run: |
|
||||||
git config --global --add safe.directory /__w/ollama/ollama
|
cmake --preset ${{ matrix.preset }} ${{ matrix.flags }}
|
||||||
go generate -x ./...
|
cmake --build --preset ${{ matrix.preset }} --parallel
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_CPU_GENERATE: '1'
|
windows:
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: cuda-${{ matrix.cuda-version }}-libraries
|
|
||||||
path: |
|
|
||||||
llm/build/**/bin/*
|
|
||||||
dist/windows-amd64/**
|
|
||||||
generate-rocm:
|
|
||||||
needs: [changes]
|
needs: [changes]
|
||||||
if: ${{ needs.changes.outputs.GENERATE_ROCM == 'True' }}
|
if: needs.changes.outputs.changed == 'True'
|
||||||
strategy:
|
strategy:
|
||||||
matrix:
|
matrix:
|
||||||
rocm-version:
|
include:
|
||||||
- '6.1.2'
|
- preset: CPU
|
||||||
runs-on: linux
|
- preset: CUDA
|
||||||
container: rocm/dev-ubuntu-20.04:${{ matrix.rocm-version }}
|
install: https://developer.download.nvidia.com/compute/cuda/13.0.0/local_installers/cuda_13.0.0_windows.exe
|
||||||
|
flags: '-DCMAKE_CUDA_ARCHITECTURES=80'
|
||||||
|
cuda-components:
|
||||||
|
- '"cudart"'
|
||||||
|
- '"nvcc"'
|
||||||
|
- '"cublas"'
|
||||||
|
- '"cublas_dev"'
|
||||||
|
- '"crt"'
|
||||||
|
- '"nvvm"'
|
||||||
|
- '"nvptxcompiler"'
|
||||||
|
cuda-version: '13.0'
|
||||||
|
- preset: ROCm
|
||||||
|
install: https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q4-WinSvr2022-For-HIP.exe
|
||||||
|
flags: '-DAMDGPU_TARGETS=gfx1010 -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma" -DCMAKE_CXX_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma"'
|
||||||
|
- preset: Vulkan
|
||||||
|
install: https://sdk.lunarg.com/sdk/download/1.4.321.1/windows/vulkansdk-windows-X64-1.4.321.1.exe
|
||||||
|
runs-on: windows
|
||||||
steps:
|
steps:
|
||||||
- run: |
|
- run: |
|
||||||
apt-get update && apt-get install -y git build-essential curl rocm-libs
|
choco install -y --no-progress ccache ninja
|
||||||
curl -fsSL https://github.com/Kitware/CMake/releases/download/v3.28.1/cmake-3.28.1-linux-x86_64.tar.gz \
|
ccache -o cache_dir=${{ github.workspace }}\.ccache
|
||||||
| tar -zx -C /usr --strip-components 1
|
- if: matrix.preset == 'CUDA' || matrix.preset == 'ROCm' || matrix.preset == 'Vulkan'
|
||||||
env:
|
id: cache-install
|
||||||
DEBIAN_FRONTEND: noninteractive
|
uses: actions/cache/restore@v4
|
||||||
- uses: actions/checkout@v4
|
|
||||||
- uses: actions/setup-go@v4
|
|
||||||
with:
|
with:
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
|
||||||
git config --global --add safe.directory /__w/ollama/ollama
|
|
||||||
go generate -x ./...
|
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_CPU_GENERATE: '1'
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: rocm-${{ matrix.rocm-version }}-libraries
|
|
||||||
path: |
|
path: |
|
||||||
llm/build/**/bin/*
|
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
|
||||||
dist/windows-amd64/**
|
C:\Program Files\AMD\ROCm
|
||||||
|
C:\VulkanSDK
|
||||||
# ROCm generation step
|
key: ${{ matrix.install }}
|
||||||
generate-windows-rocm:
|
- if: matrix.preset == 'CUDA'
|
||||||
needs: [changes]
|
name: Install CUDA ${{ matrix.cuda-version }}
|
||||||
if: ${{ needs.changes.outputs.GENERATE_ROCM == 'True' }}
|
|
||||||
runs-on: windows
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- name: 'Install ROCm'
|
|
||||||
run: |
|
run: |
|
||||||
$ErrorActionPreference = "Stop"
|
$ErrorActionPreference = "Stop"
|
||||||
write-host "downloading AMD HIP Installer"
|
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
|
||||||
Invoke-WebRequest -Uri "https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q3-WinSvr2022-For-HIP.exe" -OutFile "${env:RUNNER_TEMP}\rocm-install.exe"
|
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
|
||||||
write-host "Installing AMD HIP"
|
$subpackages = @(${{ join(matrix.cuda-components, ', ') }}) | Foreach-Object {"${_}_${{ matrix.cuda-version }}"}
|
||||||
Start-Process "${env:RUNNER_TEMP}\rocm-install.exe" -ArgumentList '-install' -NoNewWindow -Wait
|
Start-Process -FilePath .\install.exe -ArgumentList (@("-s") + $subpackages) -NoNewWindow -Wait
|
||||||
write-host "Completed AMD HIP"
|
}
|
||||||
- name: 'Verify ROCm'
|
|
||||||
run: |
|
|
||||||
& 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' --version
|
|
||||||
- run: go get ./...
|
|
||||||
- run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$env:PATH"
|
|
||||||
$env:OLLAMA_SKIP_CPU_GENERATE="1"
|
|
||||||
$env:HIP_PATH=$(Resolve-Path 'C:\Program Files\AMD\ROCm\*\bin\clang.exe' | split-path | split-path)
|
|
||||||
go generate -x ./...
|
|
||||||
name: go generate
|
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_CPU_GENERATE: '1'
|
|
||||||
# TODO - do we need any artifacts?
|
|
||||||
|
|
||||||
# CUDA generation step
|
$cudaPath = (Resolve-Path "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\*").path
|
||||||
generate-windows-cuda:
|
echo "$cudaPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
needs: [changes]
|
- if: matrix.preset == 'ROCm'
|
||||||
if: ${{ needs.changes.outputs.GENERATE_CUDA == 'True' }}
|
name: Install ROCm ${{ matrix.rocm-version }}
|
||||||
runs-on: windows
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v4
|
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- name: 'Install CUDA'
|
|
||||||
run: |
|
run: |
|
||||||
$ErrorActionPreference = "Stop"
|
$ErrorActionPreference = "Stop"
|
||||||
write-host "downloading CUDA Installer"
|
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
|
||||||
Invoke-WebRequest -Uri "https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe" -OutFile "${env:RUNNER_TEMP}\cuda-install.exe"
|
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
|
||||||
write-host "Installing CUDA"
|
Start-Process -FilePath .\install.exe -ArgumentList '-install' -NoNewWindow -Wait
|
||||||
Start-Process "${env:RUNNER_TEMP}\cuda-install.exe" -ArgumentList '-s' -NoNewWindow -Wait
|
}
|
||||||
write-host "Completed CUDA"
|
|
||||||
$cudaPath=((resolve-path "c:\Program Files\NVIDIA*\CUDA\v*\bin\nvcc.exe")[0].path | split-path | split-path)
|
|
||||||
$cudaVer=($cudaPath | split-path -leaf ) -replace 'v(\d+).(\d+)', '$1_$2'
|
|
||||||
echo "$cudaPath\bin" >> $env:GITHUB_PATH
|
|
||||||
echo "CUDA_PATH=$cudaPath" >> $env:GITHUB_ENV
|
|
||||||
echo "CUDA_PATH_V${cudaVer}=$cudaPath" >> $env:GITHUB_ENV
|
|
||||||
echo "CUDA_PATH_VX_Y=CUDA_PATH_V${cudaVer}" >> $env:GITHUB_ENV
|
|
||||||
- name: 'Verify CUDA'
|
|
||||||
run: nvcc -V
|
|
||||||
- run: go get ./...
|
|
||||||
- name: go generate
|
|
||||||
run: |
|
|
||||||
$gopath=(get-command go).source | split-path -parent
|
|
||||||
$cudabin=(get-command nvcc).source | split-path
|
|
||||||
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Launch-VsDevShell.ps1"
|
|
||||||
cd $env:GITHUB_WORKSPACE
|
|
||||||
$env:CMAKE_SYSTEM_VERSION="10.0.22621.0"
|
|
||||||
$env:PATH="$gopath;$cudabin;$env:PATH"
|
|
||||||
$env:OLLAMA_SKIP_CPU_GENERATE="1"
|
|
||||||
go generate -x ./...
|
|
||||||
env:
|
|
||||||
OLLAMA_SKIP_CPU_GENERATE: '1'
|
|
||||||
# TODO - do we need any artifacts?
|
|
||||||
|
|
||||||
lint:
|
$hipPath = (Resolve-Path "C:\Program Files\AMD\ROCm\*").path
|
||||||
strategy:
|
echo "$hipPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
matrix:
|
echo "CC=$hipPath\bin\clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
os: [ubuntu-latest, macos-latest, windows-2019]
|
echo "CXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
arch: [amd64, arm64]
|
echo "HIPCXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
exclude:
|
echo "HIP_PLATFORM=amd" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
- os: ubuntu-latest
|
echo "CMAKE_PREFIX_PATH=$hipPath" | Out-File -FilePath $env:GITHUB_ENV -Append
|
||||||
arch: arm64
|
- if: matrix.preset == 'Vulkan'
|
||||||
- os: windows-2019
|
name: Install Vulkan ${{ matrix.rocm-version }}
|
||||||
arch: arm64
|
run: |
|
||||||
- os: macos-latest
|
$ErrorActionPreference = "Stop"
|
||||||
arch: amd64
|
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
|
||||||
runs-on: ${{ matrix.os }}
|
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
|
||||||
|
Start-Process -FilePath .\install.exe -ArgumentList "-c","--am","--al","in" -NoNewWindow -Wait
|
||||||
|
}
|
||||||
|
|
||||||
|
$vulkanPath = (Resolve-Path "C:\VulkanSDK\*").path
|
||||||
|
echo "$vulkanPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
|
||||||
|
echo "VULKAN_SDK=$vulkanPath" >> $env:GITHUB_ENV
|
||||||
|
- if: ${{ !cancelled() && steps.cache-install.outputs.cache-hit != 'true' }}
|
||||||
|
uses: actions/cache/save@v4
|
||||||
|
with:
|
||||||
|
path: |
|
||||||
|
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
|
||||||
|
C:\Program Files\AMD\ROCm
|
||||||
|
key: ${{ matrix.install }}
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
- uses: actions/cache@v4
|
||||||
|
with:
|
||||||
|
path: ${{ github.workspace }}\.ccache
|
||||||
|
key: ccache-${{ runner.os }}-${{ runner.arch }}-${{ matrix.preset }}
|
||||||
|
- run: |
|
||||||
|
Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
|
||||||
|
Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
|
||||||
|
cmake --preset "${{ matrix.preset }}" ${{ matrix.flags }}
|
||||||
|
cmake --build --parallel --preset "${{ matrix.preset }}"
|
||||||
env:
|
env:
|
||||||
GOARCH: ${{ matrix.arch }}
|
CMAKE_GENERATOR: Ninja
|
||||||
CGO_ENABLED: '1'
|
|
||||||
|
go_mod_tidy:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
with:
|
- name: check that 'go mod tidy' is clean
|
||||||
submodules: recursive
|
run: go mod tidy --diff || (echo "Please run 'go mod tidy'." && exit 1)
|
||||||
- uses: actions/setup-go@v5
|
|
||||||
with:
|
|
||||||
go-version-file: go.mod
|
|
||||||
cache: false
|
|
||||||
- run: |
|
|
||||||
case ${{ matrix.arch }} in
|
|
||||||
amd64) echo ARCH=x86_64 ;;
|
|
||||||
arm64) echo ARCH=arm64 ;;
|
|
||||||
esac >>$GITHUB_ENV
|
|
||||||
shell: bash
|
|
||||||
- run: |
|
|
||||||
mkdir -p llm/build/linux/$ARCH/stub/bin
|
|
||||||
touch llm/build/linux/$ARCH/stub/bin/ollama_llama_server
|
|
||||||
if: ${{ startsWith(matrix.os, 'ubuntu-') }}
|
|
||||||
- run: |
|
|
||||||
mkdir -p llm/build/darwin/$ARCH/stub/bin
|
|
||||||
touch llm/build/darwin/$ARCH/stub/bin/ollama_llama_server
|
|
||||||
if: ${{ startsWith(matrix.os, 'macos-') }}
|
|
||||||
- uses: golangci/golangci-lint-action@v6
|
|
||||||
with:
|
|
||||||
args: --timeout 8m0s -v
|
|
||||||
test:
|
test:
|
||||||
strategy:
|
strategy:
|
||||||
matrix:
|
matrix:
|
||||||
os: [ubuntu-latest, macos-latest, windows-2019]
|
os: [ubuntu-latest, macos-latest, windows-latest]
|
||||||
arch: [amd64]
|
|
||||||
exclude:
|
|
||||||
- os: ubuntu-latest
|
|
||||||
arch: arm64
|
|
||||||
- os: windows-2019
|
|
||||||
arch: arm64
|
|
||||||
runs-on: ${{ matrix.os }}
|
runs-on: ${{ matrix.os }}
|
||||||
env:
|
env:
|
||||||
GOARCH: ${{ matrix.arch }}
|
|
||||||
CGO_ENABLED: '1'
|
CGO_ENABLED: '1'
|
||||||
OLLAMA_CPU_TARGET: 'static'
|
GOEXPERIMENT: 'synctest'
|
||||||
OLLAMA_SKIP_CPU_GENERATE: '1'
|
steps:
|
||||||
OLLAMA_SKIP_METAL_GENERATE: '1'
|
- name: checkout
|
||||||
|
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
|
||||||
|
|
||||||
|
- name: cache restore
|
||||||
|
uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
|
||||||
|
with:
|
||||||
|
# Note: unlike the other setups, this is only grabbing the mod download
|
||||||
|
# cache, rather than the whole mod directory, as the download cache
|
||||||
|
# contains zips that can be unpacked in parallel faster than they can be
|
||||||
|
# fetched and extracted by tar
|
||||||
|
path: |
|
||||||
|
~/.cache/go-build
|
||||||
|
~/go/pkg/mod/cache
|
||||||
|
~\AppData\Local\go-build
|
||||||
|
# NOTE: The -3- here should be incremented when the scheme of data to be
|
||||||
|
# cached changes (e.g. path above changes).
|
||||||
|
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
|
||||||
|
restore-keys: |
|
||||||
|
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}
|
||||||
|
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-
|
||||||
|
|
||||||
|
- name: Setup Go
|
||||||
|
uses: actions/setup-go@v5
|
||||||
|
with:
|
||||||
|
# The caching strategy of setup-go is less than ideal, and wastes
|
||||||
|
# time by not saving artifacts due to small failures like the linter
|
||||||
|
# complaining, etc. This means subsequent have to rebuild their world
|
||||||
|
# again until all checks pass. For instance, if you mispell a word,
|
||||||
|
# you're punished until you fix it. This is more hostile than
|
||||||
|
# helpful.
|
||||||
|
cache: false
|
||||||
|
|
||||||
|
go-version-file: go.mod
|
||||||
|
|
||||||
|
# It is tempting to run this in a platform independent way, but the past
|
||||||
|
# shows this codebase will see introductions of platform specific code
|
||||||
|
# generation, and so we need to check this per platform to ensure we
|
||||||
|
# don't abuse go generate on specific platforms.
|
||||||
|
- name: check that 'go generate' is clean
|
||||||
|
if: always()
|
||||||
|
run: |
|
||||||
|
go generate ./...
|
||||||
|
git diff --name-only --exit-code || (echo "Please run 'go generate ./...'." && exit 1)
|
||||||
|
|
||||||
|
- name: go test
|
||||||
|
if: always()
|
||||||
|
run: go test -count=1 -benchtime=1x ./...
|
||||||
|
|
||||||
|
# TODO(bmizerany): replace this heavy tool with just the
|
||||||
|
# tools/checks/binaries we want and then make them all run in parallel
|
||||||
|
# across jobs, not on a single tiny vm on Github Actions.
|
||||||
|
- uses: golangci/golangci-lint-action@v6
|
||||||
|
with:
|
||||||
|
args: --timeout 10m0s -v
|
||||||
|
|
||||||
|
- name: cache save
|
||||||
|
# Always save the cache, even if the job fails. The artifacts produced
|
||||||
|
# during the building of test binaries are not all for naught. They can
|
||||||
|
# be used to speed up subsequent runs.
|
||||||
|
if: always()
|
||||||
|
|
||||||
|
uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
|
||||||
|
with:
|
||||||
|
# Note: unlike the other setups, this is only grabbing the mod download
|
||||||
|
# cache, rather than the whole mod directory, as the download cache
|
||||||
|
# contains zips that can be unpacked in parallel faster than they can be
|
||||||
|
# fetched and extracted by tar
|
||||||
|
path: |
|
||||||
|
~/.cache/go-build
|
||||||
|
~/go/pkg/mod/cache
|
||||||
|
~\AppData\Local\go-build
|
||||||
|
# NOTE: The -3- here should be incremented when the scheme of data to be
|
||||||
|
# cached changes (e.g. path above changes).
|
||||||
|
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
|
||||||
|
|
||||||
|
patches:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
steps:
|
steps:
|
||||||
- uses: actions/checkout@v4
|
- uses: actions/checkout@v4
|
||||||
with:
|
- name: Verify patches apply cleanly and do not change files
|
||||||
submodules: recursive
|
run: |
|
||||||
- uses: actions/setup-go@v5
|
make -f Makefile.sync clean checkout apply-patches sync
|
||||||
with:
|
git diff --compact-summary --exit-code
|
||||||
go-version-file: go.mod
|
|
||||||
cache: true
|
|
||||||
- run: |
|
|
||||||
case ${{ matrix.arch }} in
|
|
||||||
amd64) echo ARCH=x86_64 ;;
|
|
||||||
arm64) echo ARCH=arm64 ;;
|
|
||||||
esac >>$GITHUB_ENV
|
|
||||||
shell: bash
|
|
||||||
- run: |
|
|
||||||
mkdir -p llm/build/linux/$ARCH/stub/bin
|
|
||||||
touch llm/build/linux/$ARCH/stub/bin/ollama_llama_server
|
|
||||||
if: ${{ startsWith(matrix.os, 'ubuntu-') }}
|
|
||||||
- run: |
|
|
||||||
mkdir -p llm/build/darwin/$ARCH/stub/bin
|
|
||||||
touch llm/build/darwin/$ARCH/stub/bin/ollama_llama_server
|
|
||||||
if: ${{ startsWith(matrix.os, 'macos-') }}
|
|
||||||
shell: bash
|
|
||||||
- run: go generate ./...
|
|
||||||
- run: go build
|
|
||||||
- run: go test -v ./...
|
|
||||||
- uses: actions/upload-artifact@v4
|
|
||||||
with:
|
|
||||||
name: ${{ matrix.os }}-binaries
|
|
||||||
path: ollama
|
|
||||||
59  .github/workflows/upload-release-asset.yaml  vendored  Normal file
@@ -0,0 +1,59 @@
+name: Upload Release Assets
+
+on:
+  release:
+    types: [created]
+
+jobs:
+  upload-release-assets:
+    runs-on: windows
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v2
+        # This step checks out the code from the repository to the runner.
+
+      # Assuming you already have compilation and packaging steps that generate the following files:
+      # dist/ollama-windows-amd64.7z
+      # dist/OllamaSetup.exe
+
+      - name: Get the version
+        id: get_version
+        run: |
+          echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
+        # This step extracts the version number from the tag name.
+
+      - name: Create Release
+        id: create_release
+        uses: actions/create-release@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          tag_name: ${{ github.ref_name }}
+          release_name: Release ${{ github.ref_name }}
+          body: |
+            Description of the release goes here.
+          # draft: false (Uncomment if you want to create a non-draft release)
+          # prerelease: false (Uncomment if you want to create a non-prerelease version)
+        # This step creates a new release on GitHub.
+
+      - name: Upload ollama-windows-amd64.7z Release Asset
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ steps.create_release.outputs.upload_url }}
+          asset_path: dist/ollama-windows-amd64.7z
+          asset_name: ollama-windows-amd64.7z
+          asset_content_type: application/x-7z-compressed
+        # This step uploads the .7z file as a release asset.
+
+      - name: Upload OllamaSetup.exe Release Asset
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ steps.create_release.outputs.upload_url }}
+          asset_path: dist/OllamaSetup.exe
+          asset_name: OllamaSetup.exe
+          asset_content_type: application/vnd.microsoft.portable-executable
+        # This step uploads the .exe file as a release asset.

8  .gitignore  vendored
@@ -6,12 +6,14 @@
 .swp
 0
 dist
-ollama
+build
-ggml-metal.metal
 .cache
+.gocache
 *.exe
 .idea
 test_data
 *.crt
-llm/build
 __debug_bin*
+llama/build
+llama/vendor
+/ollama

5  .gitmodules  vendored
@@ -1,5 +0,0 @@
-[submodule "llama.cpp"]
-  path = llm/llama.cpp
-  url = https://github.com/ggerganov/llama.cpp.git
-  shallow = true

@@ -6,10 +6,6 @@ linters:
   - bidichk
   - bodyclose
   - containedctx
-  - contextcheck
-  - errcheck
-  - exportloopref
-  - gci
   - gocheckcompilerdirectives
   - gofmt
   - gofumpt
@@ -23,15 +19,14 @@ linters:
   - nolintlint
   - nosprintfhostport
   - staticcheck
-  - tenv
   - unconvert
-  - unused
+  - usetesting
-  - usestdlibvars
   - wastedassign
   - whitespace
+  disable:
+    - usestdlibvars
+    - errcheck
 linters-settings:
-  gci:
-    sections: [standard, default, localmodule]
   staticcheck:
     checks:
       - all
@@ -43,5 +38,4 @@ severity:
   - gofmt
   - goimports
   - intrange
-  - usestdlibvars
 severity: info

@@ -1,10 +0,0 @@
-{
-  "trailingComma": "es5",
-  "tabWidth": 2,
-  "useTabs": false,
-  "semi": false,
-  "singleQuote": true,
-  "jsxSingleQuote": true,
-  "printWidth": 120,
-  "arrowParens": "avoid"
-}
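The linter changes above (for example moving `usestdlibvars` and `errcheck` under a `disable:` block) are picked up whenever the linter is run against this configuration. A typical local invocation, mirroring the timeout already used by the CI jobs elsewhere on this page, would be:

golangci-lint run --timeout 10m0s -v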
154  CMakeLists.txt  Normal file
@@ -0,0 +1,154 @@
|
|||||||
|
cmake_minimum_required(VERSION 3.21)
|
||||||
|
|
||||||
|
project(Ollama C CXX)
|
||||||
|
|
||||||
|
include(CheckLanguage)
|
||||||
|
include(GNUInstallDirs)
|
||||||
|
|
||||||
|
find_package(Threads REQUIRED)
|
||||||
|
|
||||||
|
set(CMAKE_BUILD_TYPE Release)
|
||||||
|
set(BUILD_SHARED_LIBS ON)
|
||||||
|
|
||||||
|
set(CMAKE_CXX_STANDARD 17)
|
||||||
|
set(CMAKE_CXX_STANDARD_REQUIRED ON)
|
||||||
|
set(CMAKE_CXX_EXTENSIONS OFF)
|
||||||
|
|
||||||
|
set(GGML_BUILD ON)
|
||||||
|
set(GGML_SHARED ON)
|
||||||
|
set(GGML_CCACHE ON)
|
||||||
|
set(GGML_BACKEND_DL ON)
|
||||||
|
set(GGML_BACKEND_SHARED ON)
|
||||||
|
set(GGML_SCHED_MAX_COPIES 4)
|
||||||
|
|
||||||
|
set(GGML_LLAMAFILE ON)
|
||||||
|
set(GGML_CUDA_PEER_MAX_BATCH_SIZE 128)
|
||||||
|
set(GGML_CUDA_GRAPHS ON)
|
||||||
|
set(GGML_CUDA_FA ON)
|
||||||
|
set(GGML_CUDA_COMPRESSION_MODE default)
|
||||||
|
|
||||||
|
if((CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_OSX_ARCHITECTURES MATCHES "arm64")
|
||||||
|
OR (NOT CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_SYSTEM_PROCESSOR MATCHES "arm|aarch64|ARM64|ARMv[0-9]+"))
|
||||||
|
set(GGML_CPU_ALL_VARIANTS ON)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if (CMAKE_OSX_ARCHITECTURES MATCHES "x86_64")
|
||||||
|
set(CMAKE_BUILD_RPATH "@loader_path")
|
||||||
|
set(CMAKE_INSTALL_RPATH "@loader_path")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
set(OLLAMA_BUILD_DIR ${CMAKE_BINARY_DIR}/lib/ollama)
|
||||||
|
set(OLLAMA_INSTALL_DIR ${CMAKE_INSTALL_PREFIX}/lib/ollama/${OLLAMA_RUNNER_DIR})
|
||||||
|
|
||||||
|
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OLLAMA_BUILD_DIR})
|
||||||
|
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${OLLAMA_BUILD_DIR})
|
||||||
|
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${OLLAMA_BUILD_DIR})
|
||||||
|
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OLLAMA_BUILD_DIR})
|
||||||
|
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_DEBUG ${OLLAMA_BUILD_DIR})
|
||||||
|
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE ${OLLAMA_BUILD_DIR})
|
||||||
|
|
||||||
|
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src)
|
||||||
|
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/include)
|
||||||
|
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu)
|
||||||
|
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu/amx)
|
||||||
|
|
||||||
|
add_compile_definitions(NDEBUG GGML_VERSION=0x0 GGML_COMMIT=0x0)
|
||||||
|
|
||||||
|
set(GGML_CPU ON)
|
||||||
|
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src)
|
||||||
|
set_property(TARGET ggml PROPERTY EXCLUDE_FROM_ALL TRUE)
|
||||||
|
|
||||||
|
get_target_property(CPU_VARIANTS ggml-cpu MANUALLY_ADDED_DEPENDENCIES)
|
||||||
|
if(NOT CPU_VARIANTS)
|
||||||
|
set(CPU_VARIANTS "ggml-cpu")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
install(TARGETS ggml-base ${CPU_VARIANTS}
|
||||||
|
RUNTIME_DEPENDENCIES
|
||||||
|
PRE_EXCLUDE_REGEXES ".*"
|
||||||
|
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
|
||||||
|
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
|
||||||
|
FRAMEWORK DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CPU
|
||||||
|
)
|
||||||
|
|
||||||
|
check_language(CUDA)
|
||||||
|
if(CMAKE_CUDA_COMPILER)
|
||||||
|
if(CMAKE_VERSION VERSION_GREATER_EQUAL "3.24" AND NOT CMAKE_CUDA_ARCHITECTURES)
|
||||||
|
set(CMAKE_CUDA_ARCHITECTURES "native")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
find_package(CUDAToolkit)
|
||||||
|
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cuda)
|
||||||
|
install(TARGETS ggml-cuda
|
||||||
|
RUNTIME_DEPENDENCIES
|
||||||
|
DIRECTORIES ${CUDAToolkit_BIN_DIR} ${CUDAToolkit_BIN_DIR}/x64 ${CUDAToolkit_LIBRARY_DIR}
|
||||||
|
PRE_INCLUDE_REGEXES cublas cublasLt cudart
|
||||||
|
PRE_EXCLUDE_REGEXES ".*"
|
||||||
|
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CUDA
|
||||||
|
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CUDA
|
||||||
|
)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
|
||||||
|
set(WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX ""
|
||||||
|
CACHE STRING
|
||||||
|
"Regular expression describing AMDGPU_TARGETS not supported on Windows. Override to force building these targets. Default \"^gfx(908|90a):xnack[+-]$\"."
|
||||||
|
)
|
||||||
|
|
||||||
|
check_language(HIP)
|
||||||
|
if(CMAKE_HIP_COMPILER)
|
||||||
|
set(HIP_PLATFORM "amd")
|
||||||
|
|
||||||
|
if(NOT AMDGPU_TARGETS)
|
||||||
|
find_package(hip REQUIRED)
|
||||||
|
list(FILTER AMDGPU_TARGETS INCLUDE REGEX "^gfx(803|90[012]|906(:xnack-)|90c(:xnack-)|1010(:xnack-)|1011(:xnack-)|1012(:xnack-)|103[0-6]|110[0-3]|115[0123]|120[01])$")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(WIN32 AND WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX)
|
||||||
|
list(FILTER AMDGPU_TARGETS EXCLUDE REGEX ${WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX})
|
||||||
|
endif()
|
||||||
|
|
||||||
|
if(AMDGPU_TARGETS)
|
||||||
|
find_package(hip REQUIRED)
|
||||||
|
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-hip)
|
||||||
|
|
||||||
|
if (WIN32)
|
||||||
|
target_compile_definitions(ggml-hip PRIVATE GGML_CUDA_NO_PEER_COPY)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
target_compile_definitions(ggml-hip PRIVATE GGML_HIP_NO_VMM)
|
||||||
|
|
||||||
|
install(TARGETS ggml-hip
|
||||||
|
RUNTIME_DEPENDENCY_SET rocm
|
||||||
|
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
|
||||||
|
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
|
||||||
|
)
|
||||||
|
install(RUNTIME_DEPENDENCY_SET rocm
|
||||||
|
DIRECTORIES ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR}
|
||||||
|
PRE_INCLUDE_REGEXES hipblas rocblas amdhip64 rocsolver amd_comgr hsa-runtime64 rocsparse tinfo rocprofiler-register drm drm_amdgpu numa elf
|
||||||
|
PRE_EXCLUDE_REGEXES ".*"
|
||||||
|
POST_EXCLUDE_REGEXES "system32"
|
||||||
|
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
|
||||||
|
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
|
||||||
|
)
|
||||||
|
|
||||||
|
foreach(HIP_LIB_BIN_INSTALL_DIR IN ITEMS ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR})
|
||||||
|
if(EXISTS ${HIP_LIB_BIN_INSTALL_DIR}/rocblas)
|
||||||
|
install(DIRECTORY ${HIP_LIB_BIN_INSTALL_DIR}/rocblas DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP)
|
||||||
|
break()
|
||||||
|
endif()
|
||||||
|
endforeach()
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
|
find_package(Vulkan)
|
||||||
|
if(Vulkan_FOUND)
|
||||||
|
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-vulkan)
|
||||||
|
install(TARGETS ggml-vulkan
|
||||||
|
RUNTIME_DEPENDENCIES
|
||||||
|
PRE_INCLUDE_REGEXES vulkan
|
||||||
|
PRE_EXCLUDE_REGEXES ".*"
|
||||||
|
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT Vulkan
|
||||||
|
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT Vulkan
|
||||||
|
)
|
||||||
|
endif()
|
||||||
136  CMakePresets.json  Normal file
@@ -0,0 +1,136 @@
|
|||||||
|
{
|
||||||
|
"version": 3,
|
||||||
|
"configurePresets": [
|
||||||
|
{
|
||||||
|
"name": "Default",
|
||||||
|
"binaryDir": "${sourceDir}/build",
|
||||||
|
"installDir": "${sourceDir}/dist",
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_BUILD_TYPE": "Release",
|
||||||
|
"CMAKE_MSVC_RUNTIME_LIBRARY": "MultiThreaded"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CPU",
|
||||||
|
"inherits": [ "Default" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA",
|
||||||
|
"inherits": [ "Default" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 11",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_CUDA_ARCHITECTURES": "50-virtual;60-virtual;61-virtual;70-virtual;75-virtual;80-virtual;86-virtual;87-virtual;89-virtual;90-virtual",
|
||||||
|
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets -t 2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 12",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_CUDA_ARCHITECTURES": "50;52;60;61;70;75;80;86;89;90;90a;120",
|
||||||
|
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets -t 2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 13",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_CUDA_ARCHITECTURES": "75-virtual;80-virtual;86-virtual;87-virtual;89-virtual;90-virtual;90a-virtual;100-virtual;103-virtual;110-virtual;120-virtual;121-virtual",
|
||||||
|
"CMAKE_CUDA_FLAGS": "-t 2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "JetPack 5",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_CUDA_ARCHITECTURES": "72;87"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "JetPack 6",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_CUDA_ARCHITECTURES": "87"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ROCm",
|
||||||
|
"inherits": [ "Default" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_HIP_PLATFORM": "amd"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ROCm 6",
|
||||||
|
"inherits": [ "ROCm" ],
|
||||||
|
"cacheVariables": {
|
||||||
|
"CMAKE_HIP_FLAGS": "-parallel-jobs=4",
|
||||||
|
"AMDGPU_TARGETS": "gfx940;gfx941;gfx942;gfx1010;gfx1012;gfx1030;gfx1100;gfx1101;gfx1102;gfx1151;gfx1200;gfx1201;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "Vulkan",
|
||||||
|
"inherits": [ "Default" ]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"buildPresets": [
|
||||||
|
{
|
||||||
|
"name": "Default",
|
||||||
|
"configurePreset": "Default",
|
||||||
|
"configuration": "Release"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CPU",
|
||||||
|
"configurePreset": "Default",
|
||||||
|
"targets": [ "ggml-cpu" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA",
|
||||||
|
"configurePreset": "CUDA",
|
||||||
|
"targets": [ "ggml-cuda" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 11",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"configurePreset": "CUDA 11"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 12",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"configurePreset": "CUDA 12"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "CUDA 13",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"configurePreset": "CUDA 13"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "JetPack 5",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"configurePreset": "JetPack 5"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "JetPack 6",
|
||||||
|
"inherits": [ "CUDA" ],
|
||||||
|
"configurePreset": "JetPack 6"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ROCm",
|
||||||
|
"configurePreset": "ROCm",
|
||||||
|
"targets": [ "ggml-hip" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "ROCm 6",
|
||||||
|
"inherits": [ "ROCm" ],
|
||||||
|
"configurePreset": "ROCm 6"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "Vulkan",
|
||||||
|
"targets": [ "ggml-vulkan" ],
|
||||||
|
"configurePreset": "Vulkan"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
CONTRIBUTING.md
@@ -6,8 +6,6 @@ Thank you for your interest in contributing to Ollama! Here are a few guidelines

 See the [development documentation](./docs/development.md) for instructions on how to build and run Ollama locally.

-## Pull requests

 ### Ideal issues

 * [Bugs](https://github.com/ollama/ollama/issues?q=is%3Aissue+is%3Aopen+label%3Abug): issues where Ollama stops working or where it results in an unexpected error.
@@ -26,11 +24,65 @@ See the [development documentation](./docs/development.md) for instructions on h
 * Changes that add significant friction to the user experience
 * Changes that create a large future maintenance burden for maintainers and contributors

-### Best practices
+## Proposing a (non-trivial) change

-* Commit messages: please leave both a title and a description in your commit messages. The title should be a short summary of the changes, with a leading word that explains the section of the code being changed (e.g. `api: fix parsing of prompt field`) . In the description, leave a short 2-3 sentences that explain more about the change and its impact.
-* Tests: please add test coverage to changes where possible.
-* Minimize dependencies: avoid adding new dependencies unless absolutely necessary.
+> By "non-trivial", we mean a change that is not a bug fix or small
+> documentation update. If you are unsure, please ask us on our [Discord
+> server](https://discord.gg/ollama).
+
+Before opening a non-trivial Pull Request, please open an issue to discuss the change and
+get feedback from the maintainers. This helps us understand the context of the
+change and how it fits into Ollama's roadmap and prevents us from duplicating
+work or you from spending time on a change that we may not be able to accept.
+
+Tips for proposals:
+
+* Explain the problem you are trying to solve, not what you are trying to do.
+* Explain why the change is important.
+* Explain how the change will be used.
+* Explain how the change will be tested.
+
+Additionally, for bonus points: Provide draft documentation you would expect to
+see if the change were accepted.
+
+## Pull requests
+
+**Commit messages**
+
+The title should look like:
+
+    <package>: <short description>
+
+The package is the most affected Go package. If the change does not affect Go
+code, then use the directory name instead. Changes to a single well-known
+file in the root directory may use the file name.
+
+The short description should start with a lowercase letter and be a
+continuation of the sentence:
+
+    "This changes Ollama to..."
+
+Examples:
+
+    llm/backend/mlx: support the llama architecture
+    CONTRIBUTING: provide clarity on good commit messages, and bad
+    docs: simplify manual installation with shorter curl commands
+
+Bad Examples:
+
+    feat: add more emoji
+    fix: was not using famous web framework
+    chore: generify code
+
+**Tests**
+
+Please include tests. Strive to test behavior, not implementation.
+
+**New dependencies**
+
+Dependencies should be added sparingly. If you are adding a new dependency,
+please explain why it is necessary and what other ways you attempted that
+did not work without it.
+
 ## Need help?
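As a worked example of the commit-message format described in the CONTRIBUTING changes above, a commit might be created as follows; the package name and description are invented for illustration only:

# Hypothetical commit following the "<package>: <short description>" convention,
# with a short body that continues "This changes Ollama to...".
git commit \
  -m "server: retry model load when the GPU is busy" \
  -m "This changes Ollama to retry a failed model load once before giving up, which avoids spurious errors when another process briefly holds the GPU."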
368  Dockerfile
@@ -1,219 +1,201 @@
-ARG GOLANG_VERSION=1.22.5
+# vim: filetype=dockerfile
-ARG CMAKE_VERSION=3.22.1
-ARG CUDA_VERSION_11=11.3.1
-ARG CUDA_V11_ARCHITECTURES="50;52;53;60;61;62;70;72;75;80;86"
-ARG CUDA_VERSION_12=12.4.0
-ARG CUDA_V12_ARCHITECTURES="60;61;62;70;72;75;80;86;87;89;90;90a"
-ARG ROCM_VERSION=6.1.2

-# Copy the minimal context we need to run the generate scripts
+ARG FLAVOR=${TARGETARCH}
-FROM scratch AS llm-code
+ARG PARALLEL=8
-COPY .git .git
-COPY .gitmodules .gitmodules
-COPY llm llm

-FROM --platform=linux/amd64 nvidia/cuda:$CUDA_VERSION_11-devel-centos7 AS cuda-11-build-amd64
+ARG ROCMVERSION=6.3.3
-ARG CMAKE_VERSION
+ARG JETPACK5VERSION=r35.4.1
-COPY ./scripts/rh_linux_deps.sh /
+ARG JETPACK6VERSION=r36.4.0
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
+ARG CMAKEVERSION=3.31.2
-ENV PATH /opt/rh/devtoolset-10/root/usr/bin:$PATH
+ARG VULKANVERSION=1.4.321.1
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
+# We require gcc v10 minimum. v10.3 has regressions, so the rockylinux 8.5 AppStream has the latest compatible version
-ARG CGO_CFLAGS
+FROM --platform=linux/amd64 rocm/dev-almalinux-8:${ROCMVERSION}-complete AS base-amd64
-ARG CUDA_V11_ARCHITECTURES
+RUN yum install -y yum-utils \
-ENV GOARCH amd64
+    && yum-config-manager --add-repo https://dl.rockylinux.org/vault/rocky/8.5/AppStream/\$basearch/os/ \
+    && rpm --import https://dl.rockylinux.org/pub/rocky/RPM-GPG-KEY-Rocky-8 \
+    && dnf install -y yum-utils ccache gcc-toolset-10-gcc-10.2.1-8.2.el8 gcc-toolset-10-gcc-c++-10.2.1-8.2.el8 gcc-toolset-10-binutils-2.35-11.el8 \
+    && dnf install -y ccache \
+    && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
+ENV PATH=/opt/rh/gcc-toolset-10/root/usr/bin:$PATH
+ARG VULKANVERSION
+RUN wget https://sdk.lunarg.com/sdk/download/${VULKANVERSION}/linux/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz -O /tmp/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz \
+    && tar xvf /tmp/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz \
+    && dnf -y install ninja-build \
+    && ln -s /usr/bin/python3 /usr/bin/python \
+    && /${VULKANVERSION}/vulkansdk -j 8 vulkan-headers \
+    && /${VULKANVERSION}/vulkansdk -j 8 shaderc
+RUN cp -r /${VULKANVERSION}/x86_64/include/* /usr/local/include/ \
+    && cp -r /${VULKANVERSION}/x86_64/lib/* /usr/local/lib
+ENV PATH=/${VULKANVERSION}/x86_64/bin:$PATH
+
+FROM --platform=linux/arm64 almalinux:8 AS base-arm64
+# install epel-release for ccache
+RUN yum install -y yum-utils epel-release \
+    && dnf install -y clang ccache \
+    && yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-rhel8.repo
+ENV CC=clang CXX=clang++
+
+FROM base-${TARGETARCH} AS base
+ARG CMAKEVERSION
+RUN curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
+COPY CMakeLists.txt CMakePresets.json .
+COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
+ENV LDFLAGS=-s
+
+FROM base AS cpu
+RUN dnf install -y gcc-toolset-11-gcc gcc-toolset-11-gcc-c++
+ENV PATH=/opt/rh/gcc-toolset-11/root/usr/bin:$PATH
+ARG PARALLEL
 RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 \
+    cmake --preset 'CPU' \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
+    && cmake --build --parallel ${PARALLEL} --preset 'CPU' \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V11_ARCHITECTURES}" \
+    && cmake --install build --component CPU --strip --parallel ${PARALLEL}
-    CUDA_VARIANT="_v11" \
-    bash gen_linux.sh

-FROM --platform=linux/amd64 nvidia/cuda:$CUDA_VERSION_12-devel-centos7 AS cuda-12-build-amd64
+FROM base AS cuda-11
-ARG CMAKE_VERSION
+ARG CUDA11VERSION=11.8
-COPY ./scripts/rh_linux_deps.sh /
+RUN dnf install -y cuda-toolkit-${CUDA11VERSION//./-}
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
+ENV PATH=/usr/local/cuda-11/bin:$PATH
-ENV PATH /opt/rh/devtoolset-10/root/usr/bin:$PATH
+ARG PARALLEL
-COPY --from=llm-code / /go/src/github.com/ollama/ollama/
-WORKDIR /go/src/github.com/ollama/ollama/llm/generate
-ARG CGO_CFLAGS
-ARG CUDA_V12_ARCHITECTURES
-ENV GOARCH amd64
 RUN --mount=type=cache,target=/root/.ccache \
-    OLLAMA_SKIP_STATIC_GENERATE=1 \
+    cmake --preset 'CUDA 11' -DOLLAMA_RUNNER_DIR="cuda_v11" \
-    OLLAMA_SKIP_CPU_GENERATE=1 \
+    && cmake --build --parallel ${PARALLEL} --preset 'CUDA 11' \
-    CMAKE_CUDA_ARCHITECTURES="${CUDA_V12_ARCHITECTURES}" \
+    && cmake --install build --component CUDA --strip --parallel ${PARALLEL}
-    CUDA_VARIANT="_v12" \
-    OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_USE_GRAPHS=on" \
-    bash gen_linux.sh

-FROM --platform=linux/arm64 nvidia/cuda:$CUDA_VERSION_11-devel-rockylinux8 AS cuda-11-build-server-arm64
+FROM base AS cuda-12
-ARG CMAKE_VERSION
+ARG CUDA12VERSION=12.8
-COPY ./scripts/rh_linux_deps.sh /
+RUN dnf install -y cuda-toolkit-${CUDA12VERSION//./-}
-RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
+ENV PATH=/usr/local/cuda-12/bin:$PATH
-ENV PATH /opt/rh/gcc-toolset-10/root/usr/bin:$PATH
|
ARG PARALLEL
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ARG CUDA_V11_ARCHITECTURES
|
|
||||||
ENV GOARCH arm64
|
|
||||||
RUN OLLAMA_SKIP_STATIC_GENERATE=1 \
|
|
||||||
OLLAMA_SKIP_CPU_GENERATE=1 \
|
|
||||||
CMAKE_CUDA_ARCHITECTURES="${CUDA_V11_ARCHITECTURES}" \
|
|
||||||
CUDA_VARIANT="_v11" \
|
|
||||||
bash gen_linux.sh
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 nvidia/cuda:$CUDA_VERSION_12-devel-rockylinux8 AS cuda-12-build-server-arm64
|
|
||||||
ARG CMAKE_VERSION
|
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
|
|
||||||
ENV PATH /opt/rh/gcc-toolset-10/root/usr/bin:$PATH
|
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ARG CUDA_V12_ARCHITECTURES
|
|
||||||
ENV GOARCH arm64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 \
|
cmake --preset 'CUDA 12' -DOLLAMA_RUNNER_DIR="cuda_v12"\
|
||||||
OLLAMA_SKIP_CPU_GENERATE=1 \
|
&& cmake --build --parallel ${PARALLEL} --preset 'CUDA 12' \
|
||||||
CMAKE_CUDA_ARCHITECTURES="${CUDA_V12_ARCHITECTURES}" \
|
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
|
||||||
CUDA_VARIANT="_v12" \
|
|
||||||
OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_USE_GRAPHS=on" \
|
|
||||||
bash gen_linux.sh
|
|
||||||
|
|
||||||
|
|
||||||
FROM --platform=linux/amd64 rocm/dev-centos-7:${ROCM_VERSION}-complete AS rocm-build-amd64
|
FROM base AS cuda-13
|
||||||
ARG CMAKE_VERSION
|
ARG CUDA13VERSION=13.0
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
RUN dnf install -y cuda-toolkit-${CUDA13VERSION//./-}
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} sh /rh_linux_deps.sh
|
ENV PATH=/usr/local/cuda-13/bin:$PATH
|
||||||
ENV PATH /opt/rh/devtoolset-10/root/usr/bin:$PATH
|
ARG PARALLEL
|
||||||
ENV LIBRARY_PATH /opt/amdgpu/lib64
|
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ARG AMDGPU_TARGETS
|
|
||||||
ENV GOARCH amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_SKIP_CPU_GENERATE=1 bash gen_linux.sh
|
cmake --preset 'CUDA 13' -DOLLAMA_RUNNER_DIR="cuda_v13" \
|
||||||
RUN mkdir -p ../../dist/linux-amd64-rocm/lib/ollama && \
|
&& cmake --build --parallel ${PARALLEL} --preset 'CUDA 13' \
|
||||||
(cd /opt/rocm/lib && tar cf - rocblas/library) | (cd ../../dist/linux-amd64-rocm/lib/ollama && tar xf - )
|
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
|
||||||
|
|
||||||
FROM --platform=linux/amd64 centos:7 AS cpu-builder-amd64
|
|
||||||
ARG CMAKE_VERSION
|
|
||||||
ARG GOLANG_VERSION
|
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
|
|
||||||
ENV PATH /opt/rh/devtoolset-10/root/usr/bin:$PATH
|
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
ARG OLLAMA_CUSTOM_CPU_DEFS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ENV GOARCH amd64
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
|
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS static-build-amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_CPU_TARGET="static" bash gen_linux.sh
|
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu-build-amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh
|
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu_avx-build-amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu_avx" bash gen_linux.sh
|
|
||||||
FROM --platform=linux/amd64 cpu-builder-amd64 AS cpu_avx2-build-amd64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu_avx2" bash gen_linux.sh
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 rockylinux:8 AS cpu-builder-arm64
|
|
||||||
ARG CMAKE_VERSION
|
|
||||||
ARG GOLANG_VERSION
|
|
||||||
COPY ./scripts/rh_linux_deps.sh /
|
|
||||||
RUN CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh
|
|
||||||
ENV PATH /opt/rh/gcc-toolset-10/root/usr/bin:$PATH
|
|
||||||
COPY --from=llm-code / /go/src/github.com/ollama/ollama/
|
|
||||||
ARG OLLAMA_CUSTOM_CPU_DEFS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
ENV GOARCH arm64
|
|
||||||
WORKDIR /go/src/github.com/ollama/ollama/llm/generate
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 cpu-builder-arm64 AS static-build-arm64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_CPU_TARGET="static" bash gen_linux.sh
|
|
||||||
FROM --platform=linux/arm64 cpu-builder-arm64 AS cpu-build-arm64
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
OLLAMA_SKIP_STATIC_GENERATE=1 OLLAMA_CPU_TARGET="cpu" bash gen_linux.sh
|
|
||||||
|
|
||||||
|
|
||||||
# Intermediate stage used for ./scripts/build_linux.sh
|
FROM base AS rocm-6
|
||||||
FROM --platform=linux/amd64 cpu-build-amd64 AS build-amd64
|
ENV PATH=/opt/rocm/hcc/bin:/opt/rocm/hip/bin:/opt/rocm/bin:/opt/rocm/hcc/bin:$PATH
|
||||||
ENV CGO_ENABLED 1
|
ARG PARALLEL
|
||||||
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
|
cmake --preset 'ROCm 6' -DOLLAMA_RUNNER_DIR="rocm" \
|
||||||
|
&& cmake --build --parallel ${PARALLEL} --preset 'ROCm 6' \
|
||||||
|
&& cmake --install build --component HIP --strip --parallel ${PARALLEL}
|
||||||
|
RUN rm -f dist/lib/ollama/rocm/rocblas/library/*gfx90[06]*
|
||||||
|
|
||||||
|
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK5VERSION} AS jetpack-5
|
||||||
|
ARG CMAKEVERSION
|
||||||
|
RUN apt-get update && apt-get install -y curl ccache \
|
||||||
|
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
|
||||||
|
COPY CMakeLists.txt CMakePresets.json .
|
||||||
|
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
|
||||||
|
ARG PARALLEL
|
||||||
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
|
cmake --preset 'JetPack 5' -DOLLAMA_RUNNER_DIR="cuda_jetpack5" \
|
||||||
|
&& cmake --build --parallel ${PARALLEL} --preset 'JetPack 5' \
|
||||||
|
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
|
||||||
|
|
||||||
|
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK6VERSION} AS jetpack-6
|
||||||
|
ARG CMAKEVERSION
|
||||||
|
RUN apt-get update && apt-get install -y curl ccache \
|
||||||
|
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
|
||||||
|
COPY CMakeLists.txt CMakePresets.json .
|
||||||
|
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
|
||||||
|
ARG PARALLEL
|
||||||
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
|
cmake --preset 'JetPack 6' -DOLLAMA_RUNNER_DIR="cuda_jetpack6" \
|
||||||
|
&& cmake --build --parallel ${PARALLEL} --preset 'JetPack 6' \
|
||||||
|
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
|
||||||
|
|
||||||
|
FROM base AS vulkan
|
||||||
|
RUN --mount=type=cache,target=/root/.ccache \
|
||||||
|
cmake --preset 'Vulkan' -DOLLAMA_RUNNER_DIR="vulkan" \
|
||||||
|
&& cmake --build --parallel --preset 'Vulkan' \
|
||||||
|
&& cmake --install build --component Vulkan --strip --parallel 8
|
||||||
|
|
||||||
|
|
||||||
|
FROM base AS build
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
WORKDIR /go/src/github.com/ollama/ollama
|
||||||
|
COPY go.mod go.sum .
|
||||||
|
RUN curl -fsSL https://golang.org/dl/go$(awk '/^go/ { print $2 }' go.mod).linux-$(case $(uname -m) in x86_64) echo amd64 ;; aarch64) echo arm64 ;; esac).tar.gz | tar xz -C /usr/local
|
||||||
|
ENV PATH=/usr/local/go/bin:$PATH
|
||||||
|
RUN go mod download
|
||||||
COPY . .
|
COPY . .
|
||||||
COPY --from=static-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
ARG GOFLAGS="'-ldflags=-w -s'"
|
||||||
COPY --from=cpu_avx-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
ENV CGO_ENABLED=1
|
||||||
COPY --from=cpu_avx2-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-11-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
COPY --from=cuda-12-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-12-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=rocm-build-amd64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
ARG GOFLAGS
|
|
||||||
ARG CGO_CFLAGS
|
ARG CGO_CFLAGS
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
ARG CGO_CXXFLAGS
|
||||||
go build -trimpath -o dist/linux-amd64/bin/ollama .
|
RUN --mount=type=cache,target=/root/.cache/go-build \
|
||||||
|
go build -trimpath -buildmode=pie -o /bin/ollama .
|
||||||
|
|
||||||
# Intermediate stage used for ./scripts/build_linux.sh
|
FROM --platform=linux/amd64 scratch AS amd64
|
||||||
FROM --platform=linux/arm64 cpu-build-arm64 AS build-arm64
|
# COPY --from=cuda-11 dist/lib/ollama/ /lib/ollama/
|
||||||
ENV CGO_ENABLED 1
|
COPY --from=cuda-12 dist/lib/ollama /lib/ollama/
|
||||||
ARG GOLANG_VERSION
|
COPY --from=cuda-13 dist/lib/ollama /lib/ollama/
|
||||||
WORKDIR /go/src/github.com/ollama/ollama
|
COPY --from=vulkan dist/lib/ollama /lib/ollama/
|
||||||
COPY . .
|
|
||||||
COPY --from=static-build-arm64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
COPY --from=cuda-11-build-server-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-11-build-server-arm64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
COPY --from=cuda-12-build-server-arm64 /go/src/github.com/ollama/ollama/dist/ dist/
|
|
||||||
COPY --from=cuda-12-build-server-arm64 /go/src/github.com/ollama/ollama/llm/build/linux/ llm/build/linux/
|
|
||||||
ARG GOFLAGS
|
|
||||||
ARG CGO_CFLAGS
|
|
||||||
RUN --mount=type=cache,target=/root/.ccache \
|
|
||||||
go build -trimpath -o dist/linux-arm64/bin/ollama .
|
|
||||||
|
|
||||||
# Strip out ROCm dependencies to keep the primary image lean
|
FROM --platform=linux/arm64 scratch AS arm64
|
||||||
FROM --platform=linux/amd64 ubuntu:22.04 as amd64-libs-without-rocm
|
# COPY --from=cuda-11 dist/lib/ollama/ /lib/ollama/
|
||||||
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/lib/ /scratch/
|
COPY --from=cuda-12 dist/lib/ollama /lib/ollama/
|
||||||
RUN cd /scratch/ollama/ && rm -rf rocblas libamd* libdrm* libroc* libhip* libhsa*
|
COPY --from=cuda-13 dist/lib/ollama/ /lib/ollama/
|
||||||
|
COPY --from=jetpack-5 dist/lib/ollama/ /lib/ollama/
|
||||||
|
COPY --from=jetpack-6 dist/lib/ollama/ /lib/ollama/
|
||||||
|
|
||||||
# Runtime stages
|
FROM scratch AS rocm
|
||||||
FROM --platform=linux/amd64 ubuntu:22.04 as runtime-amd64
|
COPY --from=rocm-6 dist/lib/ollama /lib/ollama
|
||||||
COPY --from=amd64-libs-without-rocm /scratch/ /lib/
|
|
||||||
RUN apt-get update && apt-get install -y ca-certificates && \
|
|
||||||
apt-get clean && rm -rf /var/lib/apt/lists/*
|
|
||||||
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
|
|
||||||
|
|
||||||
FROM --platform=linux/arm64 ubuntu:22.04 as runtime-arm64
|
FROM ${FLAVOR} AS archive
|
||||||
COPY --from=build-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/lib/ /lib/
|
ARG VULKANVERSION
|
||||||
RUN apt-get update && apt-get install -y ca-certificates && \
|
COPY --from=cpu dist/lib/ollama /lib/ollama
|
||||||
apt-get clean && rm -rf /var/lib/apt/lists/*
|
COPY --from=build /bin/ollama /bin/ollama
|
||||||
COPY --from=build-arm64 /go/src/github.com/ollama/ollama/dist/linux-arm64/bin/ /bin/
|
|
||||||
|
|
||||||
# Radeon images are much larger so we keep it distinct from the CPU/CUDA image
|
# Temporary opt-out stages for Vulkan
|
||||||
FROM --platform=linux/amd64 rocm/dev-centos-7:${ROCM_VERSION}-complete as runtime-rocm
|
FROM --platform=linux/amd64 scratch AS amd64_novulkan
|
||||||
RUN update-pciids
|
# COPY --from=cuda-11 dist/lib/ollama/ /lib/ollama/
|
||||||
COPY --from=build-amd64 /go/src/github.com/ollama/ollama/dist/linux-amd64/bin/ /bin/
|
COPY --from=cuda-12 dist/lib/ollama /lib/ollama/
|
||||||
RUN ln -s /opt/rocm/lib /lib/ollama
|
COPY --from=cuda-13 dist/lib/ollama /lib/ollama/
|
||||||
EXPOSE 11434
|
FROM arm64 AS arm64_novulkan
|
||||||
ENV OLLAMA_HOST 0.0.0.0
|
FROM ${FLAVOR}_novulkan AS archive_novulkan
|
||||||
|
COPY --from=cpu dist/lib/ollama /lib/ollama
|
||||||
ENTRYPOINT ["/bin/ollama"]
|
COPY --from=build /bin/ollama /bin/ollama
|
||||||
CMD ["serve"]
|
FROM ubuntu:24.04 AS novulkan
|
||||||
|
RUN apt-get update \
|
||||||
FROM runtime-$TARGETARCH
|
&& apt-get install -y ca-certificates \
|
||||||
EXPOSE 11434
|
&& apt-get clean \
|
||||||
ENV OLLAMA_HOST 0.0.0.0
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
COPY --from=archive_novulkan /bin /usr/bin
|
||||||
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
||||||
|
COPY --from=archive_novulkan /lib/ollama /usr/lib/ollama
|
||||||
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
||||||
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
|
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
|
||||||
ENV NVIDIA_VISIBLE_DEVICES=all
|
ENV NVIDIA_VISIBLE_DEVICES=all
|
||||||
|
ENV OLLAMA_HOST=0.0.0.0:11434
|
||||||
|
EXPOSE 11434
|
||||||
|
ENTRYPOINT ["/bin/ollama"]
|
||||||
|
CMD ["serve"]
|
||||||
|
|
||||||
|
FROM ubuntu:24.04 AS default
|
||||||
|
RUN apt-get update \
|
||||||
|
&& apt-get install -y ca-certificates libvulkan1 \
|
||||||
|
&& apt-get clean \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
COPY --from=archive /bin /usr/bin
|
||||||
|
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
||||||
|
COPY --from=archive /lib/ollama /usr/lib/ollama
|
||||||
|
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
|
||||||
|
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
|
||||||
|
ENV NVIDIA_VISIBLE_DEVICES=all
|
||||||
|
ENV OLLAMA_HOST=0.0.0.0:11434
|
||||||
|
EXPOSE 11434
|
||||||
ENTRYPOINT ["/bin/ollama"]
|
ENTRYPOINT ["/bin/ollama"]
|
||||||
CMD ["serve"]
|
CMD ["serve"]
|
||||||
|
|||||||
72
Makefile.sync
Normal file
72
Makefile.sync
Normal file
@@ -0,0 +1,72 @@
|
|||||||
|
UPSTREAM=https://github.com/ggml-org/llama.cpp.git
|
||||||
|
WORKDIR=llama/vendor
|
||||||
|
FETCH_HEAD=7049736b2dd9011bf819e298b844ebbc4b5afdc9
|
||||||
|
|
||||||
|
.PHONY: help
|
||||||
|
help:
|
||||||
|
@echo "Available targets:"
|
||||||
|
@echo " sync Sync with upstream repositories"
|
||||||
|
@echo " checkout Checkout upstream repository"
|
||||||
|
@echo " apply-patches Apply patches to local repository"
|
||||||
|
@echo " format-patches Format patches from local repository"
|
||||||
|
@echo " clean Clean local repository"
|
||||||
|
@echo
|
||||||
|
@echo "Example:"
|
||||||
|
@echo " make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches sync"
|
||||||
|
|
||||||
|
.PHONY: sync
|
||||||
|
sync: llama/build-info.cpp ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
|
||||||
|
|
||||||
|
llama/build-info.cpp: llama/build-info.cpp.in llama/llama.cpp
|
||||||
|
sed -e 's|@FETCH_HEAD@|$(FETCH_HEAD)|' <$< >$@
|
||||||
|
|
||||||
|
ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal: ml/backend/ggml/ggml
|
||||||
|
go generate ./$(@D)
|
||||||
|
|
||||||
|
.PHONY: llama/llama.cpp
|
||||||
|
llama/llama.cpp: llama/vendor
|
||||||
|
rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /) $@
|
||||||
|
|
||||||
|
.PHONY: ml/backend/ggml/ggml
|
||||||
|
ml/backend/ggml/ggml: llama/vendor
|
||||||
|
rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /ggml/) $@
|
||||||
|
|
||||||
|
PATCHES=$(wildcard llama/patches/*.patch)
|
||||||
|
PATCHED=$(join $(dir $(PATCHES)), $(addsuffix ed, $(addprefix ., $(notdir $(PATCHES)))))
|
||||||
|
|
||||||
|
.PHONY: apply-patches
|
||||||
|
.NOTPARALLEL:
|
||||||
|
apply-patches: $(PATCHED)
|
||||||
|
|
||||||
|
llama/patches/.%.patched: llama/patches/%.patch
|
||||||
|
@if git -c user.name=nobody -c 'user.email=<>' -C $(WORKDIR) am -3 $(realpath $<); then \
|
||||||
|
touch $@; \
|
||||||
|
else \
|
||||||
|
echo "Patch failed. Resolve any conflicts then continue."; \
|
||||||
|
echo "1. Run 'git -C $(WORKDIR) am --continue'"; \
|
||||||
|
echo "2. Run 'make -f $(lastword $(MAKEFILE_LIST)) format-patches'"; \
|
||||||
|
echo "3. Run 'make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches'"; \
|
||||||
|
exit 1; \
|
||||||
|
fi
|
||||||
|
|
||||||
|
.PHONY: checkout
|
||||||
|
checkout: $(WORKDIR)
|
||||||
|
git -C $(WORKDIR) fetch
|
||||||
|
git -C $(WORKDIR) checkout -f $(FETCH_HEAD)
|
||||||
|
|
||||||
|
$(WORKDIR):
|
||||||
|
git clone $(UPSTREAM) $(WORKDIR)
|
||||||
|
|
||||||
|
.PHONE: format-patches
|
||||||
|
format-patches: llama/patches
|
||||||
|
git -C $(WORKDIR) format-patch \
|
||||||
|
--no-signature \
|
||||||
|
--no-numbered \
|
||||||
|
--zero-commit \
|
||||||
|
-o $(realpath $<) \
|
||||||
|
$(FETCH_HEAD)
|
||||||
|
|
||||||
|
.PHONE: clean
|
||||||
|
clean: checkout
|
||||||
|
@git -C $(WORKDIR) am --abort || true
|
||||||
|
$(RM) llama/patches/.*.patched
|
||||||
316
README.md
316
README.md
@@ -1,18 +1,18 @@
|
|||||||
<div align="center">
|
<div align="center">
|
||||||
<img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
|
<a href="https://ollama.com">
|
||||||
|
<img alt="ollama" width="240" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
|
||||||
|
</a>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
# Ollama
|
# Ollama
|
||||||
|
|
||||||
[](https://discord.gg/ollama)
|
|
||||||
|
|
||||||
Get up and running with large language models.
|
Get up and running with large language models.
|
||||||
|
|
||||||
### macOS
|
### macOS
|
||||||
|
|
||||||
[Download](https://ollama.com/download/Ollama-darwin.zip)
|
[Download](https://ollama.com/download/Ollama.dmg)
|
||||||
|
|
||||||
### Windows preview
|
### Windows
|
||||||
|
|
||||||
[Download](https://github.com/likelovewant/ollama-for-amd/releases)
|
[Download](https://github.com/likelovewant/ollama-for-amd/releases)
|
||||||
|
|
||||||
@@ -20,28 +20,32 @@ For AMD use or build , please follow the guide on [wiki](https://github.com/like
|
|||||||
|
|
||||||
official support list
|
official support list
|
||||||
```
|
```
|
||||||
"gfx900" "gfx906:xnack-" "gfx908:xnack-" "gfx90a:xnack+" "gfx90a:xnack-" "gfx940" "gfx941" "gfx942" "gfx1010""gfx1012" "gfx1030" "gfx1100""gfx1101" "gfx1102"
|
"gfx900" "gfx940" "gfx941" "gfx942" "gfx1010""gfx1012" "gfx1030" "gfx1100""gfx1101" "gfx1102"
|
||||||
```
|
```
|
||||||
Please download from ollama [official](https://ollama.com/download/OllamaSetup.exe)
|
Please download from ollama [official](https://ollama.com/download/OllamaSetup.exe)
|
||||||
|
|
||||||
Example extra list add on this repo.
|
Example extra list add on this repo.
|
||||||
```
|
```
|
||||||
"gfx803" "gfx902" "gfx90c:xnack-" "gfx904" "gfx940" "gfx941" "gfx942" "gfx1010:xnack-" "gfx1011" "gfx1012:xnack-" "gfx1031" "gfx1032" "gfx1033" "gfx1034" "gfx1035" "gfx1036" "gfx1103"
|
(ROCm5) "gfx803" "gfx900:xnack-" "gfx902" (ROCm6) gfx906:xnack- "gfx1010:xnack-" "gfx1011" "gfx1012:xnack-" "gfx1031" "gfx1032" "gfx1034" "gfx1035" "gfx1036" "gfx1103" "gfx1150" "gfx1201" (expertimental)"...
|
||||||
```
|
```
|
||||||
Please follow the [wiki](https://github.com/likelovewant/ollama-for-amd/wiki) guide to build or use the pre-release version.
|
Please follow the [wiki](https://github.com/likelovewant/ollama-for-amd/wiki) guide to build or use the pre-release version.
|
||||||
|
|
||||||
Note: `gfx803` reported partialy working by the wiki method ,expected a future support
|
Note: **gfx803:** Reported as partially functional in HIP SDK 5.7 using the wiki method, but disabled in HIP SDK 6.1.2.
|
||||||
|
|
||||||
|
Note: **gfx90c (with xnack-):** Reported as partially functional in HIP SDK 5.7, with some testers experiencing partial success while others encountered issues in recent update. removed from
|
||||||
|
support lists. Explore its through self-build as guided on the wiki.
|
||||||
|
|
||||||
|
|
||||||
### Linux
|
### Linux
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl -fsSL https://ollama.com/install.sh | sh
|
curl -fsSL https://ollama.com/install.sh | sh
|
||||||
```
|
```
|
||||||
|
|
||||||
[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)
|
[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)
|
||||||
|
|
||||||
|
[Configuring Environment Variables Tip For Unsupport GPUs](https://github.com/likelovewant/ollama-for-amd/wiki#troubleshooting-amd-gpu-support-in-linux)
|
||||||
|
|
||||||
### Docker
|
### Docker
|
||||||
|
|
||||||
The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
|
The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
|
||||||
@@ -51,12 +55,17 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
|
|||||||
- [ollama-python](https://github.com/ollama/ollama-python)
|
- [ollama-python](https://github.com/ollama/ollama-python)
|
||||||
- [ollama-js](https://github.com/ollama/ollama-js)
|
- [ollama-js](https://github.com/ollama/ollama-js)
|
||||||
|
|
||||||
|
### Community
|
||||||
|
|
||||||
|
- [Discord](https://discord.gg/ollama)
|
||||||
|
- [Reddit](https://reddit.com/r/ollama)
|
||||||
|
|
||||||
## Quickstart
|
## Quickstart
|
||||||
|
|
||||||
To run and chat with [Llama 3.1](https://ollama.com/library/llama3.1):
|
To run and chat with [Gemma 3](https://ollama.com/library/gemma3):
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama run llama3.1
|
ollama run gemma3
|
||||||
```
|
```
|
||||||
|
|
||||||
## Model library
|
## Model library
|
||||||
@@ -66,15 +75,25 @@ Ollama supports a list of models available on [ollama.com/library](https://ollam
|
|||||||
Here are some example models that can be downloaded:
|
Here are some example models that can be downloaded:
|
||||||
|
|
||||||
| Model | Parameters | Size | Download |
|
| Model | Parameters | Size | Download |
|
||||||
| ------------------ | ---------- | ----- | ------------------------------ |
|
| ------------------ | ---------- | ----- | -------------------------------- |
|
||||||
|
| Gemma 3 | 1B | 815MB | `ollama run gemma3:1b` |
|
||||||
|
| Gemma 3 | 4B | 3.3GB | `ollama run gemma3` |
|
||||||
|
| Gemma 3 | 12B | 8.1GB | `ollama run gemma3:12b` |
|
||||||
|
| Gemma 3 | 27B | 17GB | `ollama run gemma3:27b` |
|
||||||
|
| QwQ | 32B | 20GB | `ollama run qwq` |
|
||||||
|
| DeepSeek-R1 | 7B | 4.7GB | `ollama run deepseek-r1` |
|
||||||
|
| DeepSeek-R1 | 671B | 404GB | `ollama run deepseek-r1:671b` |
|
||||||
|
| Llama 4 | 109B | 67GB | `ollama run llama4:scout` |
|
||||||
|
| Llama 4 | 400B | 245GB | `ollama run llama4:maverick` |
|
||||||
|
| Llama 3.3 | 70B | 43GB | `ollama run llama3.3` |
|
||||||
|
| Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` |
|
||||||
|
| Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2:1b` |
|
||||||
|
| Llama 3.2 Vision | 11B | 7.9GB | `ollama run llama3.2-vision` |
|
||||||
|
| Llama 3.2 Vision | 90B | 55GB | `ollama run llama3.2-vision:90b` |
|
||||||
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
|
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` |
|
||||||
| Llama 3.1 | 70B | 40GB | `ollama run llama3.1:70b` |
|
|
||||||
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
|
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` |
|
||||||
| Phi 3 Mini | 3.8B | 2.3GB | `ollama run phi3` |
|
| Phi 4 | 14B | 9.1GB | `ollama run phi4` |
|
||||||
| Phi 3 Medium | 14B | 7.9GB | `ollama run phi3:medium` |
|
| Phi 4 Mini | 3.8B | 2.5GB | `ollama run phi4-mini` |
|
||||||
| Gemma 2 | 2B | 1.6GB | `ollama run gemma2:2b` |
|
|
||||||
| Gemma 2 | 9B | 5.5GB | `ollama run gemma2` |
|
|
||||||
| Gemma 2 | 27B | 16GB | `ollama run gemma2:27b` |
|
|
||||||
| Mistral | 7B | 4.1GB | `ollama run mistral` |
|
| Mistral | 7B | 4.1GB | `ollama run mistral` |
|
||||||
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` |
|
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` |
|
||||||
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
|
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
|
||||||
@@ -82,7 +101,7 @@ Here are some example models that can be downloaded:
|
|||||||
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
|
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
|
||||||
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
|
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
|
||||||
| LLaVA | 7B | 4.5GB | `ollama run llava` |
|
| LLaVA | 7B | 4.5GB | `ollama run llava` |
|
||||||
| Solar | 10.7B | 6.1GB | `ollama run solar` |
|
| Granite-3.3 | 8B | 4.9GB | `ollama run granite3.3` |
|
||||||
|
|
||||||
> [!NOTE]
|
> [!NOTE]
|
||||||
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
|
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
|
||||||
@@ -101,32 +120,32 @@ Ollama supports importing GGUF models in the Modelfile:
|
|||||||
|
|
||||||
2. Create the model in Ollama
|
2. Create the model in Ollama
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama create example -f Modelfile
|
ollama create example -f Modelfile
|
||||||
```
|
```
|
||||||
|
|
||||||
3. Run the model
|
3. Run the model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama run example
|
ollama run example
|
||||||
```
|
```
|
||||||
|
|
||||||
### Import from PyTorch or Safetensors
|
### Import from Safetensors
|
||||||
|
|
||||||
See the [guide](docs/import.md) on importing models for more information.
|
See the [guide](docs/import.md) on importing models for more information.
|
||||||
|
|
||||||
### Customize a prompt
|
### Customize a prompt
|
||||||
|
|
||||||
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.1` model:
|
Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama pull llama3.1
|
ollama pull llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
Create a `Modelfile`:
|
Create a `Modelfile`:
|
||||||
|
|
||||||
```
|
```
|
||||||
FROM llama3.1
|
FROM llama3.2
|
||||||
|
|
||||||
# set the temperature to 1 [higher is more creative, lower is more coherent]
|
# set the temperature to 1 [higher is more creative, lower is more coherent]
|
||||||
PARAMETER temperature 1
|
PARAMETER temperature 1
|
||||||
@@ -146,7 +165,7 @@ ollama run mario
|
|||||||
Hello! It's your friend Mario.
|
Hello! It's your friend Mario.
|
||||||
```
|
```
|
||||||
|
|
||||||
For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
|
For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
|
||||||
|
|
||||||
## CLI Reference
|
## CLI Reference
|
||||||
|
|
||||||
@@ -154,28 +173,28 @@ For more examples, see the [examples](examples) directory. For more information
|
|||||||
|
|
||||||
`ollama create` is used to create a model from a Modelfile.
|
`ollama create` is used to create a model from a Modelfile.
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama create mymodel -f ./Modelfile
|
ollama create mymodel -f ./Modelfile
|
||||||
```
|
```
|
||||||
|
|
||||||
### Pull a model
|
### Pull a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama pull llama3.1
|
ollama pull llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
> This command can also be used to update a local model. Only the diff will be pulled.
|
> This command can also be used to update a local model. Only the diff will be pulled.
|
||||||
|
|
||||||
### Remove a model
|
### Remove a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama rm llama3.1
|
ollama rm llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
### Copy a model
|
### Copy a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama cp llama3.1 my-model
|
ollama cp llama3.2 my-model
|
||||||
```
|
```
|
||||||
|
|
||||||
### Multiline input
|
### Multiline input
|
||||||
@@ -193,28 +212,42 @@ I'm a basic program that prints the famous "Hello, world!" message to the consol
|
|||||||
|
|
||||||
```
|
```
|
||||||
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
|
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
|
||||||
The image features a yellow smiley face, which is likely the central focus of the picture.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
> **Output**: The image features a yellow smiley face, which is likely the central focus of the picture.
|
||||||
|
|
||||||
### Pass the prompt as an argument
|
### Pass the prompt as an argument
|
||||||
|
|
||||||
|
```shell
|
||||||
|
ollama run llama3.2 "Summarize this file: $(cat README.md)"
|
||||||
```
|
```
|
||||||
$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
|
|
||||||
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
|
> **Output**: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
|
||||||
```
|
|
||||||
|
|
||||||
### Show model information
|
### Show model information
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama show llama3.1
|
ollama show llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
### List models on your computer
|
### List models on your computer
|
||||||
|
|
||||||
```
|
```shell
|
||||||
ollama list
|
ollama list
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### List which models are currently loaded
|
||||||
|
|
||||||
|
```shell
|
||||||
|
ollama ps
|
||||||
|
```
|
||||||
|
|
||||||
|
### Stop a model which is currently running
|
||||||
|
|
||||||
|
```shell
|
||||||
|
ollama stop llama3.2
|
||||||
|
```
|
||||||
|
|
||||||
### Start Ollama
|
### Start Ollama
|
||||||
|
|
||||||
`ollama serve` is used when you want to start ollama without running the desktop application.
|
`ollama serve` is used when you want to start ollama without running the desktop application.
|
||||||
@@ -227,14 +260,14 @@ See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/develo
|
|||||||
|
|
||||||
Next, start the server:
|
Next, start the server:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
./ollama serve
|
./ollama serve
|
||||||
```
|
```
|
||||||
|
|
||||||
Finally, in a separate shell, run a model:
|
Finally, in a separate shell, run a model:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
./ollama run llama3.1
|
./ollama run llama3.2
|
||||||
```
|
```
|
||||||
|
|
||||||
## REST API
|
## REST API
|
||||||
@@ -243,18 +276,18 @@ Ollama has a REST API for running and managing models.
|
|||||||
|
|
||||||
### Generate a response
|
### Generate a response
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/generate -d '{
|
curl http://localhost:11434/api/generate -d '{
|
||||||
"model": "llama3.1",
|
"model": "llama3.2",
|
||||||
"prompt":"Why is the sky blue?"
|
"prompt":"Why is the sky blue?"
|
||||||
}'
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
### Chat with a model
|
### Chat with a model
|
||||||
|
|
||||||
```
|
```shell
|
||||||
curl http://localhost:11434/api/chat -d '{
|
curl http://localhost:11434/api/chat -d '{
|
||||||
"model": "llama3.1",
|
"model": "llama3.2",
|
||||||
"messages": [
|
"messages": [
|
||||||
{ "role": "user", "content": "why is the sky blue?" }
|
{ "role": "user", "content": "why is the sky blue?" }
|
||||||
]
|
]
|
||||||
@@ -268,6 +301,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
### Web & Desktop
|
### Web & Desktop
|
||||||
|
|
||||||
- [Open WebUI](https://github.com/open-webui/open-webui)
|
- [Open WebUI](https://github.com/open-webui/open-webui)
|
||||||
|
- [SwiftChat (macOS with ReactNative)](https://github.com/aws-samples/swift-chat)
|
||||||
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
|
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
|
||||||
- [Hollama](https://github.com/fmaclen/hollama)
|
- [Hollama](https://github.com/fmaclen/hollama)
|
||||||
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
|
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
|
||||||
@@ -275,12 +309,13 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
|
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
|
||||||
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
|
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
|
||||||
- [Saddle](https://github.com/jikkuatwork/saddle)
|
- [Saddle](https://github.com/jikkuatwork/saddle)
|
||||||
|
- [TagSpaces](https://www.tagspaces.org) (A platform for file-based apps, [utilizing Ollama](https://docs.tagspaces.org/ai/) for the generation of tags and descriptions)
|
||||||
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
|
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
|
||||||
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
|
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
|
||||||
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
|
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
|
||||||
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
|
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
|
||||||
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
|
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
|
||||||
- [big-AGI](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-ollama.md)
|
- [big-AGI](https://github.com/enricoros/big-AGI)
|
||||||
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
|
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
|
||||||
- [Amica](https://github.com/semperai/amica)
|
- [Amica](https://github.com/semperai/amica)
|
||||||
- [chatd](https://github.com/BruceMacD/chatd)
|
- [chatd](https://github.com/BruceMacD/chatd)
|
||||||
@@ -300,7 +335,10 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
|
- [AnythingLLM (Docker + MacOs/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
|
||||||
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
|
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
|
||||||
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
|
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
|
||||||
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Chat with Code Repository)
|
- [IntelliBar](https://intellibar.app/) (AI-powered assistant for macOS)
|
||||||
|
- [Jirapt](https://github.com/AliAhmedNada/jirapt) (Jira Integration to generate issues, tasks, epics)
|
||||||
|
- [ojira](https://github.com/AliAhmedNada/ojira) (Jira chrome plugin to easily generate descriptions for tasks)
|
||||||
|
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Interactive chat tool that can leverage Ollama models for rapid understanding and navigation of GitHub code repositories)
|
||||||
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
|
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
|
||||||
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
|
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
|
||||||
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
|
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
|
||||||
@@ -310,11 +348,18 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
|
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
|
||||||
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
|
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
|
||||||
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
|
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
|
||||||
|
- [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) (RWKV offline LLM deployment tool, also usable as a client for ChatGPT and Ollama)
|
||||||
|
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) (app to evaluate and compare models)
|
||||||
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
|
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
|
||||||
|
- [Casibase](https://casibase.org) (An open source AI knowledge base and dialogue system combining the latest RAG, SSO, ollama support, and multiple large language models.)
|
||||||
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
|
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
|
||||||
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
|
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
|
||||||
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in discord )
|
- [Shinkai Desktop](https://github.com/dcSpark/shinkai-apps) (Two click install Local AI using Ollama + Files + RAG)
|
||||||
|
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord User App that allows you to interact with Ollama anywhere in Discord)
|
||||||
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
|
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
|
||||||
|
- [R2R](https://github.com/SciPhi-AI/R2R) (Open-source RAG engine)
|
||||||
|
- [Ollama-Kis](https://github.com/elearningshow/ollama-kis) (A simple easy-to-use GUI with sample custom LLM for Drivers Education)
|
||||||
|
- [OpenGPA](https://opengpa.org) (Open-source offline-first Enterprise Agentic Application)
|
||||||
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
|
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
|
||||||
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
|
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
|
||||||
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
|
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
|
||||||
@@ -322,21 +367,89 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
|
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
|
||||||
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
|
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
|
||||||
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
|
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
|
||||||
|
- [PyGPT](https://github.com/szczyglis-dev/py-gpt) (AI desktop assistant for Linux, Windows, and Mac)
|
||||||
|
- [Alpaca](https://github.com/Jeffser/Alpaca) (An Ollama client application for Linux and macOS made with GTK4 and Adwaita)
|
||||||
|
- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT/blob/master/docs/content/platform/ollama.md) (AutoGPT Ollama integration)
|
||||||
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
|
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
|
||||||
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
|
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
|
||||||
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
|
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot, and Ollama4j
|
||||||
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
|
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
|
||||||
- [Claude Dev](https://github.com/saoudrizwan/claude-dev) - VSCode extension for multi-file/whole-repo coding
|
- [Cline](https://github.com/cline/cline) - Formerly known as Claude Dev is a VSCode extension for multi-file/whole-repo coding
|
||||||
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
|
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
|
||||||
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
|
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
|
||||||
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
|
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
|
||||||
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
|
- [crewAI with Mesop](https://github.com/rapidarchitect/ollama-crew-mesop) (Mesop Web Interface to run crewAI with Ollama)
|
||||||
|
- [Tkinter-based client](https://github.com/chyok/ollama-gui) (Python tkinter-based Client for Ollama)
|
||||||
|
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
|
||||||
|
- [Local Multimodal AI Chat](https://github.com/Leon-Sander/Local-Multimodal-AI-Chat) (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
|
||||||
|
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG and deep research on Mac/Windows/Linux)
|
||||||
|
- [OrionChat](https://github.com/EliasPereirah/OrionChat) - OrionChat is a web interface for chatting with different AI providers
|
||||||
|
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
|
||||||
|
- [Web management](https://github.com/lemonit-eric-mao/ollama-web-management) (Web management page)
|
||||||
|
- [Promptery](https://github.com/promptery/promptery) (desktop client for Ollama.)
|
||||||
|
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
|
||||||
|
- [chat-ollama](https://github.com/annilq/chat-ollama) (a React Native client for Ollama)
|
||||||
|
- [SpaceLlama](https://github.com/tcsenpai/spacellama) (Firefox and Chrome extension to quickly summarize web pages with ollama in a sidebar)
|
||||||
|
- [YouLama](https://github.com/tcsenpai/youlama) (Webapp to quickly summarize any YouTube video, supporting Invidious as well)
|
||||||
|
- [DualMind](https://github.com/tcsenpai/dualmind) (Experimental app allowing two models to talk to each other in the terminal or in a web interface)
|
||||||
|
- [ollamarama-matrix](https://github.com/h1ddenpr0cess20/ollamarama-matrix) (Ollama chatbot for the Matrix chat protocol)
|
||||||
|
- [ollama-chat-app](https://github.com/anan1213095357/ollama-chat-app) (Flutter-based chat app)
|
||||||
|
- [Perfect Memory AI](https://www.perfectmemory.ai/) (Productivity AI assists personalized by what you have seen on your screen, heard, and said in the meetings)
|
||||||
|
- [Hexabot](https://github.com/hexastack/hexabot) (A conversational AI builder)
|
||||||
|
- [Reddit Rate](https://github.com/rapidarchitect/reddit_analyzer) (Search and Rate Reddit topics with a weighted summation)
|
||||||
|
- [OpenTalkGpt](https://github.com/adarshM84/OpenTalkGpt) (Chrome Extension to manage open-source models supported by Ollama, create custom models, and chat with models from a user-friendly UI)
|
||||||
|
- [VT](https://github.com/vinhnx/vt.ai) (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
|
||||||
|
- [Nosia](https://github.com/nosia-ai/nosia) (Easy to install and use RAG platform based on Ollama)
|
||||||
|
- [Witsy](https://github.com/nbonamy/witsy) (An AI Desktop application available for Mac/Windows/Linux)
|
||||||
|
- [Abbey](https://github.com/US-Artificial-Intelligence/abbey) (A configurable AI interface server with notebooks, document storage, and YouTube support)
|
||||||
|
- [Minima](https://github.com/dmayboroda/minima) (RAG with on-premises or fully local workflow)
|
||||||
|
- [aidful-ollama-model-delete](https://github.com/AidfulAI/aidful-ollama-model-delete) (User interface for simplified model cleanup)
|
||||||
|
- [Perplexica](https://github.com/ItzCrazyKns/Perplexica) (An AI-powered search engine & an open-source alternative to Perplexity AI)
|
||||||
|
- [Ollama Chat WebUI for Docker ](https://github.com/oslook/ollama-webui) (Support for local docker deployment, lightweight ollama webui)
|
||||||
|
- [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
|
||||||
|
- [MinimalNextOllamaChat](https://github.com/anilkay/MinimalNextOllamaChat) (Minimal Web UI for Chat and Model Control)
|
||||||
|
- [Chipper](https://github.com/TilmanGriesel/chipper) AI interface for tinkerers (Ollama, Haystack RAG, Python)
|
||||||
|
- [ChibiChat](https://github.com/CosmicEventHorizon/ChibiChat) (Kotlin-based Android app to chat with Ollama and Koboldcpp API endpoints)
|
||||||
|
- [LocalLLM](https://github.com/qusaismael/localllm) (Minimal Web-App to run ollama models on it with a GUI)
|
||||||
|
- [Ollamazing](https://github.com/buiducnhat/ollamazing) (Web extension to run Ollama models)
|
||||||
|
- [OpenDeepResearcher-via-searxng](https://github.com/benhaotang/OpenDeepResearcher-via-searxng) (A Deep Research equivalent endpoint with Ollama support for running locally)
|
||||||
|
- [AntSK](https://github.com/AIDotNet/AntSK) (Out-of-the-box & Adaptable RAG Chatbot)
|
||||||
|
- [MaxKB](https://github.com/1Panel-dev/MaxKB/) (Ready-to-use & flexible RAG Chatbot)
|
||||||
|
- [yla](https://github.com/danielekp/yla) (Web interface to freely interact with your customized models)
|
||||||
|
- [LangBot](https://github.com/RockChinQ/LangBot) (LLM-based instant messaging bots platform, with Agents, RAG features, supports multiple platforms)
|
||||||
|
- [1Panel](https://github.com/1Panel-dev/1Panel/) (Web-based Linux Server Management Tool)
|
||||||
|
- [AstrBot](https://github.com/Soulter/AstrBot/) (User-friendly LLM-based multi-platform chatbot with a WebUI, supporting RAG, LLM agents, and plugins integration)
|
||||||
|
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
|
||||||
|
- [Flufy](https://github.com/Aharon-Bensadoun/Flufy) (A beautiful chat interface for interacting with Ollama's API. Built with React, TypeScript, and Material-UI.)
|
||||||
|
- [Ellama](https://github.com/zeozeozeo/ellama) (Friendly native app to chat with an Ollama instance)
|
||||||
|
- [screenpipe](https://github.com/mediar-ai/screenpipe) Build agents powered by your screen history
|
||||||
|
- [Ollamb](https://github.com/hengkysteen/ollamb) (Simple yet rich in features, cross-platform built with Flutter and designed for Ollama. Try the [web demo](https://hengkysteen.github.io/demo/ollamb/).)
|
||||||
|
- [Writeopia](https://github.com/Writeopia/Writeopia) (Text editor with integration with Ollama)
|
||||||
|
- [AppFlowy](https://github.com/AppFlowy-IO/AppFlowy) (AI collaborative workspace with Ollama, cross-platform and self-hostable)
|
||||||
|
- [Lumina](https://github.com/cushydigit/lumina.git) (A lightweight, minimal React.js frontend for interacting with Ollama servers)
|
||||||
|
- [Tiny Notepad](https://pypi.org/project/tiny-notepad) (A lightweight, notepad-like interface to chat with ollama available on PyPI)
|
||||||
|
- [macLlama (macOS native)](https://github.com/hellotunamayo/macLlama) (A native macOS GUI application for interacting with Ollama models, featuring a chat interface.)
|
||||||
|
- [GPTranslate](https://github.com/philberndt/GPTranslate) (A fast and lightweight, AI powered desktop translation application written with Rust and Tauri. Features real-time translation with OpenAI/Azure/Ollama.)
|
||||||
|
- [ollama launcher](https://github.com/NGC13009/ollama-launcher) (A launcher for Ollama, aiming to provide users with convenient functions such as ollama server launching, management, or configuration.)
|
||||||
|
- [ai-hub](https://github.com/Aj-Seven/ai-hub) (AI Hub supports multiple models via API keys and Chat support via Ollama API.)
|
||||||
|
- [Mayan EDMS](https://gitlab.com/mayan-edms/mayan-edms) (Open source document management system to organize, tag, search, and automate your files with powerful Ollama driven workflows.)
|
||||||
|
- [Serene Pub](https://github.com/doolijb/serene-pub) (Beginner friendly, open source AI Roleplaying App for Windows, Mac OS and Linux. Search, download and use models with Ollama all inside the app.)
|
||||||
|
- [Andes](https://github.com/aqerd/andes) (A Visual Studio Code extension that provides a local UI interface for Ollama models)
|
||||||
|
- [Clueless](https://github.com/KashyapTan/clueless) (Open Source & Local Cluely: A desktop application LLM assistant to help you talk to anything on your screen using locally served Ollama models. Also undetectable to screenshare)
|
||||||
|
- [ollama-co2](https://github.com/carbonatedWaterOrg/ollama-co2) (FastAPI web interface for monitoring and managing local and remote Ollama servers with real-time model monitoring and concurrent downloads)
|
||||||
|
|
||||||
|
### Cloud
|
||||||
|
|
||||||
|
- [Google Cloud](https://cloud.google.com/run/docs/tutorials/gpu-gemma2-with-ollama)
|
||||||
|
- [Fly.io](https://fly.io/docs/python/do-more/add-ollama/)
|
||||||
|
- [Koyeb](https://www.koyeb.com/deploy/ollama)
|
||||||
|
|
||||||
### Terminal
|
### Terminal
|
||||||
|
|
||||||
- [oterm](https://github.com/ggozad/oterm)
|
- [oterm](https://github.com/ggozad/oterm)
|
||||||
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
|
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
|
||||||
- [Emacs client](https://github.com/zweifisch/ollama)
|
- [Emacs client](https://github.com/zweifisch/ollama)
|
||||||
|
- [neollama](https://github.com/paradoxical-dev/neollama) UI client for interacting with models from within Neovim
|
||||||
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
|
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
|
||||||
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
|
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
|
||||||
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
|
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
|
||||||
@@ -346,7 +459,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
|
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
|
||||||
- [cmdh](https://github.com/pgibler/cmdh)
|
- [cmdh](https://github.com/pgibler/cmdh)
|
||||||
- [ooo](https://github.com/npahlfer/ooo)
|
- [ooo](https://github.com/npahlfer/ooo)
|
||||||
- [shell-pilot](https://github.com/reid41/shell-pilot)
|
- [shell-pilot](https://github.com/reid41/shell-pilot)(Interact with models via pure shell scripts on Linux or macOS)
|
||||||
- [tenere](https://github.com/pythops/tenere)
|
- [tenere](https://github.com/pythops/tenere)
|
||||||
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
|
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/).
|
||||||
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
|
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
|
||||||
@@ -354,34 +467,59 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [tlm](https://github.com/yusufcanb/tlm)
|
- [tlm](https://github.com/yusufcanb/tlm)
|
||||||
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
|
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
|
||||||
- [gollama](https://github.com/sammcj/gollama)
|
- [gollama](https://github.com/sammcj/gollama)
|
||||||
|
- [ParLlama](https://github.com/paulrobello/parllama)
|
||||||
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
|
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)
|
||||||
|
- [Ollama Mixture of Experts (MOE) in 50 lines of code](https://github.com/rapidarchitect/ollama_moe)
|
||||||
|
- [vim-intelligence-bridge](https://github.com/pepo-ec/vim-intelligence-bridge) Simple interaction of "Ollama" with the Vim editor
|
||||||
|
- [x-cmd ollama](https://x-cmd.com/mod/ollama)
|
||||||
|
- [bb7](https://github.com/drunkwcodes/bb7)
|
||||||
|
- [SwollamaCLI](https://github.com/marcusziade/Swollama) bundled with the Swollama Swift package. [Demo](https://github.com/marcusziade/Swollama?tab=readme-ov-file#cli-usage)
|
||||||
|
- [aichat](https://github.com/sigoden/aichat) All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
|
||||||
|
- [PowershAI](https://github.com/rrg92/powershai) PowerShell module that brings AI to terminal on Windows, including support for Ollama
|
||||||
|
- [DeepShell](https://github.com/Abyss-c0re/deepshell) Your self-hosted AI assistant. Interactive Shell, Files and Folders analysis.
|
||||||
|
- [orbiton](https://github.com/xyproto/orbiton) Configuration-free text editor and IDE with support for tab completion with Ollama.
|
||||||
|
- [orca-cli](https://github.com/molbal/orca-cli) Ollama Registry CLI Application - Browse, pull, and download models from Ollama Registry in your terminal.
|
||||||
|
- [GGUF-to-Ollama](https://github.com/jonathanhecl/gguf-to-ollama) - Importing GGUF to Ollama made easy (multiplatform)
|
||||||
|
- [AWS-Strands-With-Ollama](https://github.com/rapidarchitect/ollama_strands) - AWS Strands Agents with Ollama Examples
|
||||||
|
- [ollama-multirun](https://github.com/attogram/ollama-multirun) - A bash shell script to run a single prompt against any or all of your locally installed ollama models, saving the output and performance statistics as easily navigable web pages. ([Demo](https://attogram.github.io/ai_test_zone/))
|
||||||
|
- [ollama-bash-toolshed](https://github.com/attogram/ollama-bash-toolshed) - Bash scripts to chat with tool using models. Add new tools to your shed with ease. Runs on Ollama.
|
||||||
|
|
||||||
### Apple Vision Pro
|
### Apple Vision Pro
|
||||||
|
|
||||||
|
- [SwiftChat](https://github.com/aws-samples/swift-chat) (Cross-platform AI chat app supporting Apple Vision Pro via "Designed for iPad")
|
||||||
- [Enchanted](https://github.com/AugustDev/enchanted)
|
- [Enchanted](https://github.com/AugustDev/enchanted)
|
||||||
|
|
||||||
### Database
|
### Database
|
||||||
|
|
||||||
|
- [pgai](https://github.com/timescale/pgai) - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
|
||||||
|
- [Get started guide](https://github.com/timescale/pgai/blob/main/docs/vectorizer-quick-start.md)
|
||||||
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
|
- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
|
||||||
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)
|
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)
|
||||||
|
- [Kangaroo](https://github.com/dbkangaroo/kangaroo) (AI-powered SQL client and admin tool for popular databases)
|
||||||
|
|
||||||
### Package managers
|
### Package managers
|
||||||
|
|
||||||
- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
|
- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
|
||||||
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
|
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
|
||||||
|
- [Homebrew](https://formulae.brew.sh/formula/ollama)
|
||||||
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
|
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
|
||||||
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
|
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
|
||||||
- [Nix package](https://search.nixos.org/packages?channel=24.05&show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
|
- [Nix package](https://search.nixos.org/packages?show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
|
||||||
- [Flox](https://flox.dev/blog/ollama-part-one)
|
- [Flox](https://flox.dev/blog/ollama-part-one)
|
||||||
|
|
||||||
### Libraries
|
### Libraries
|
||||||
|
|
||||||
- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
|
- [LangChain](https://python.langchain.com/docs/integrations/chat/ollama/) and [LangChain.js](https://js.langchain.com/docs/integrations/chat/ollama/) with [example](https://js.langchain.com/docs/tutorials/local_rag/)
|
||||||
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
|
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
|
||||||
- [crewAI](https://github.com/crewAIInc/crewAI)
|
- [crewAI](https://github.com/crewAIInc/crewAI)
|
||||||
|
- [Yacana](https://remembersoftwares.github.io/yacana/) (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
|
||||||
|
- [Spring AI](https://github.com/spring-projects/spring-ai) with [reference](https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html) and [example](https://github.com/tzolov/ollama-tools)
|
||||||
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
|
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
|
||||||
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
|
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
|
||||||
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
|
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
|
||||||
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
|
- [LangChain for .NET](https://github.com/tryAGI/LangChain) with [example](https://github.com/tryAGI/LangChain/blob/main/examples/LangChain.Samples.OpenAI/Program.cs)
|
||||||
|
- [LLPhant](https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama)
|
||||||
|
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/ollama/) and [LlamaIndexTS](https://ts.llamaindex.ai/modules/llms/available_llms/ollama)
|
||||||
- [LiteLLM](https://github.com/BerriAI/litellm)
|
- [LiteLLM](https://github.com/BerriAI/litellm)
|
||||||
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
|
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
|
||||||
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
|
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
|
||||||
@@ -405,22 +543,47 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
|
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
|
||||||
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
|
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
|
||||||
- [LlamaScript](https://github.com/Project-Llama/llamascript)
|
- [LlamaScript](https://github.com/Project-Llama/llamascript)
|
||||||
|
- [llm-axe](https://github.com/emirsahin1/llm-axe) (Python Toolkit for Building LLM Powered Apps)
|
||||||
- [Gollm](https://docs.gollm.co/examples/ollama-example)
|
- [Gollm](https://docs.gollm.co/examples/ollama-example)
|
||||||
|
- [Gollama for Golang](https://github.com/jonathanhecl/gollama)
|
||||||
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
|
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
|
||||||
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
|
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
|
||||||
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)
|
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)
|
||||||
|
- [Agents-Flex for Java](https://github.com/agents-flex/agents-flex) with [example](https://github.com/agents-flex/agents-flex/tree/main/agents-flex-llm/agents-flex-llm-ollama/src/test/java/com/agentsflex/llm/ollama)
|
||||||
|
- [Parakeet](https://github.com/parakeet-nest/parakeet) is a GoLang library, made to simplify the development of small generative AI applications with Ollama.
|
||||||
|
- [Haverscript](https://github.com/andygill/haverscript) with [examples](https://github.com/andygill/haverscript/tree/main/examples)
|
||||||
|
- [Ollama for Swift](https://github.com/mattt/ollama-swift)
|
||||||
|
- [Swollama for Swift](https://github.com/marcusziade/Swollama) with [DocC](https://marcusziade.github.io/Swollama/documentation/swollama/)
|
||||||
|
- [GoLamify](https://github.com/prasad89/golamify)
|
||||||
|
- [Ollama for Haskell](https://github.com/tusharad/ollama-haskell)
|
||||||
|
- [multi-llm-ts](https://github.com/nbonamy/multi-llm-ts) (A Typescript/JavaScript library allowing access to different LLM in a unified API)
|
||||||
|
- [LlmTornado](https://github.com/lofcz/llmtornado) (C# library providing a unified interface for major FOSS & Commercial inference APIs)
|
||||||
|
- [Ollama for Zig](https://github.com/dravenk/ollama-zig)
|
||||||
|
- [Abso](https://github.com/lunary-ai/abso) (OpenAI-compatible TypeScript SDK for any LLM provider)
|
||||||
|
- [Nichey](https://github.com/goodreasonai/nichey) is a Python package for generating custom wikis for your research topic
|
||||||
|
- [Ollama for D](https://github.com/kassane/ollama-d)
|
||||||
|
- [OllamaPlusPlus](https://github.com/HardCodeDev777/OllamaPlusPlus) (Very simple C++ library for Ollama)
|
||||||
|
- [any-llm](https://github.com/mozilla-ai/any-llm) (A single interface to use different llm providers by [mozilla.ai](https://www.mozilla.ai/))
|
||||||
|
- [any-agent](https://github.com/mozilla-ai/any-agent) (A single interface to use and evaluate different agent frameworks by [mozilla.ai](https://www.mozilla.ai/))
|
||||||
|
- [Neuro SAN](https://github.com/cognizant-ai-lab/neuro-san-studio) (Data-driven multi-agent orchestration framework) with [example](https://github.com/cognizant-ai-lab/neuro-san-studio/blob/main/docs/user_guide.md#ollama)
|
||||||
|
- [achatbot-go](https://github.com/ai-bot-pro/achatbot-go) a multimodal(text/audio/image) chatbot.
|
||||||
|
|
||||||
### Mobile
|
### Mobile
|
||||||
|
|
||||||
|
- [SwiftChat](https://github.com/aws-samples/swift-chat) (Lightning-fast Cross-platform AI chat app with native UI for Android, iOS, and iPad)
|
||||||
- [Enchanted](https://github.com/AugustDev/enchanted)
|
- [Enchanted](https://github.com/AugustDev/enchanted)
|
||||||
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
|
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
|
||||||
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)
|
- [Ollama App](https://github.com/JHubi1/ollama-app) (Modern and easy-to-use multi-platform client for Ollama)
|
||||||
|
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
|
||||||
|
- [Ollama Android Chat](https://github.com/sunshine0523/OllamaServer) (No need for Termux, start the Ollama service with one click on an Android device)
|
||||||
|
- [Reins](https://github.com/ibrahimcetin/reins) (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
|
||||||
|
|
||||||
### Extensions & Plugins
|
### Extensions & Plugins
|
||||||
|
|
||||||
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
|
- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
|
||||||
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
|
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
|
||||||
- [Continue](https://github.com/continuedev/continue)
|
- [Continue](https://github.com/continuedev/continue)
|
||||||
|
- [Vibe](https://github.com/thewh1teagle/vibe) (Transcribe and analyze meetings with Ollama)
|
||||||
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
|
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
|
||||||
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
|
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
|
||||||
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
|
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
|
||||||
@@ -435,7 +598,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
|
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
|
||||||
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
|
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
|
||||||
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
|
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
|
||||||
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use ollama as a copilot like Github copilot)
|
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
|
||||||
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
|
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
|
||||||
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
|
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
|
||||||
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
|
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
|
||||||
@@ -443,12 +606,37 @@ See the [API documentation](./docs/api.md) for all endpoints.
|
|||||||
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in backend)
|
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in backend)
|
||||||
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
|
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
|
||||||
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
|
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
|
||||||
|
- [ChatGPTBox: All in one browser extension](https://github.com/josStorer/chatGPTBox) with [Integrating Tutorial](https://github.com/josStorer/chatGPTBox/issues/616#issuecomment-1975186467)
|
||||||
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) Chat/moderation bot written in python. Uses Ollama to create personalities.
|
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) Chat/moderation bot written in python. Uses Ollama to create personalities.
|
||||||
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install ollama client & models on any OS for apps that depends on ollama server)
|
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install ollama client & models on any OS for apps that depend on ollama server)
|
||||||
- [vnc-lm](https://github.com/jk011ru/vnc-lm) (A containerized Discord bot with support for attachments and web links)
|
- [Terraform AWS Ollama & Open WebUI](https://github.com/xuyangbocn/terraform-aws-self-host-llm) (A Terraform module to deploy on AWS a ready-to-use Ollama service, together with its front-end Open WebUI service.)
|
||||||
|
- [node-red-contrib-ollama](https://github.com/jakubburkiewicz/node-red-contrib-ollama)
|
||||||
|
- [Local AI Helper](https://github.com/ivostoykov/localAI) (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints. Includes secure storage for user prompts.)
|
||||||
|
- [vnc-lm](https://github.com/jake83741/vnc-lm) (Discord bot for messaging with LLMs through Ollama and LiteLLM. Seamlessly move between local and flagship models.)
|
||||||
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
|
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
|
||||||
|
- [QodeAssist](https://github.com/Palm1r/QodeAssist) (AI-powered coding assistant plugin for Qt Creator)
|
||||||
|
- [Obsidian Quiz Generator plugin](https://github.com/ECuiDev/obsidian-quiz-generator)
|
||||||
|
- [AI Summmary Helper plugin](https://github.com/philffm/ai-summary-helper)
|
||||||
|
- [TextCraft](https://github.com/suncloudsmoon/TextCraft) (Copilot in Word alternative using Ollama)
|
||||||
|
- [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) (Alfred Workflow)
|
||||||
|
- [TextLLaMA](https://github.com/adarshM84/TextLLaMA) A Chrome Extension that helps you write emails, correct grammar, and translate into any language
|
||||||
|
- [Simple-Discord-AI](https://github.com/zyphixor/simple-discord-ai)
|
||||||
|
- [LLM Telegram Bot](https://github.com/innightwolfsleep/llm_telegram_bot) (telegram bot, primary for RP. Oobabooga-like buttons, [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) API integration e.t.c)
|
||||||
|
- [mcp-llm](https://github.com/sammcj/mcp-llm) (MCP Server to allow LLMs to call other LLMs)
|
||||||
|
- [SimpleOllamaUnity](https://github.com/HardCodeDev777/SimpleOllamaUnity) (Unity Engine extension for communicating with Ollama in a few lines of code. Also works at runtime)
|
||||||
|
- [UnityCodeLama](https://github.com/HardCodeDev777/UnityCodeLama) (Unity Edtior tool to analyze scripts via Ollama)
|
||||||
|
- [NativeMind](https://github.com/NativeMindBrowser/NativeMindExtension) (Private, on-device AI Assistant, no cloud dependencies)
|
||||||
|
- [GMAI - Gradle Managed AI](https://gmai.premex.se/) (Gradle plugin for automated Ollama lifecycle management during build phases)
|
||||||
|
- [NOMYO Router](https://github.com/nomyo-ai/nomyo-router) (A transparent Ollama proxy with model deployment aware routing which auto-manages multiple Ollama instances in a given network)
|
||||||
|
|
||||||
### Supported backends
|
### Supported backends
|
||||||
|
|
||||||
- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.
|
- [llama.cpp](https://github.com/ggml-org/llama.cpp) project founded by Georgi Gerganov.
|
||||||
|
|
||||||
|
### Observability
|
||||||
|
- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native intergration to Ollama.
|
||||||
|
- [Lunary](https://lunary.ai/docs/integrations/ollama) is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt templates management, PII masking, and comprehensive agent tracing.
|
||||||
|
- [OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
|
||||||
|
- [HoneyHive](https://docs.honeyhive.ai/integrations/ollama) is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
|
||||||
|
- [Langfuse](https://langfuse.com/docs/integrations/ollama) is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.
|
||||||
|
- [MLflow Tracing](https://mlflow.org/docs/latest/llms/tracing/index.html#automatic-tracing) is an open source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.
|
||||||
|
|||||||
@@ -10,7 +10,7 @@
|
|||||||
// repository].
|
// repository].
|
||||||
//
|
//
|
||||||
// [the API documentation]: https://github.com/ollama/ollama/blob/main/docs/api.md
|
// [the API documentation]: https://github.com/ollama/ollama/blob/main/docs/api.md
|
||||||
// [in the GitHub repository]: https://github.com/ollama/ollama/tree/main/examples
|
// [in the GitHub repository]: https://github.com/ollama/ollama/tree/main/api/examples
|
||||||
package api
|
package api
|
||||||
|
|
||||||
import (
|
import (
|
||||||
@@ -24,7 +24,10 @@ import (
|
|||||||
"net/http"
|
"net/http"
|
||||||
"net/url"
|
"net/url"
|
||||||
"runtime"
|
"runtime"
|
||||||
|
"strconv"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/auth"
|
||||||
"github.com/ollama/ollama/envconfig"
|
"github.com/ollama/ollama/envconfig"
|
||||||
"github.com/ollama/ollama/format"
|
"github.com/ollama/ollama/format"
|
||||||
"github.com/ollama/ollama/version"
|
"github.com/ollama/ollama/version"
|
||||||
@@ -42,6 +45,12 @@ func checkError(resp *http.Response, body []byte) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if resp.StatusCode == http.StatusUnauthorized {
|
||||||
|
authError := AuthorizationError{StatusCode: resp.StatusCode}
|
||||||
|
json.Unmarshal(body, &authError)
|
||||||
|
return authError
|
||||||
|
}
|
||||||
|
|
||||||
apiError := StatusError{StatusCode: resp.StatusCode}
|
apiError := StatusError{StatusCode: resp.StatusCode}
|
||||||
|
|
||||||
err := json.Unmarshal(body, &apiError)
|
err := json.Unmarshal(body, &apiError)
|
||||||
@@ -55,7 +64,7 @@ func checkError(resp *http.Response, body []byte) error {
|
|||||||
|
|
||||||
// ClientFromEnvironment creates a new [Client] using configuration from the
|
// ClientFromEnvironment creates a new [Client] using configuration from the
|
||||||
// environment variable OLLAMA_HOST, which points to the network host and
|
// environment variable OLLAMA_HOST, which points to the network host and
|
||||||
// port on which the ollama service is listenting. The format of this variable
|
// port on which the ollama service is listening. The format of this variable
|
||||||
// is:
|
// is:
|
||||||
//
|
//
|
||||||
// <scheme>://<host>:<port>
|
// <scheme>://<host>:<port>
|
||||||
@@ -76,6 +85,14 @@ func NewClient(base *url.URL, http *http.Client) *Client {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func getAuthorizationToken(ctx context.Context, challenge string) (string, error) {
|
||||||
|
token, err := auth.Sign(ctx, []byte(challenge))
|
||||||
|
if err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
return token, nil
|
||||||
|
}
|
||||||
|
|
||||||
func (c *Client) do(ctx context.Context, method, path string, reqData, respData any) error {
|
func (c *Client) do(ctx context.Context, method, path string, reqData, respData any) error {
|
||||||
var reqBody io.Reader
|
var reqBody io.Reader
|
||||||
var data []byte
|
var data []byte
|
||||||
@@ -97,6 +114,21 @@ func (c *Client) do(ctx context.Context, method, path string, reqData, respData
|
|||||||
}
|
}
|
||||||
|
|
||||||
requestURL := c.base.JoinPath(path)
|
requestURL := c.base.JoinPath(path)
|
||||||
|
|
||||||
|
var token string
|
||||||
|
if envconfig.UseAuth() || c.base.Hostname() == "ollama.com" {
|
||||||
|
now := strconv.FormatInt(time.Now().Unix(), 10)
|
||||||
|
chal := fmt.Sprintf("%s,%s?ts=%s", method, path, now)
|
||||||
|
token, err = getAuthorizationToken(ctx, chal)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
q := requestURL.Query()
|
||||||
|
q.Set("ts", now)
|
||||||
|
requestURL.RawQuery = q.Encode()
|
||||||
|
}
|
||||||
|
|
||||||
request, err := http.NewRequestWithContext(ctx, method, requestURL.String(), reqBody)
|
request, err := http.NewRequestWithContext(ctx, method, requestURL.String(), reqBody)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -106,6 +138,10 @@ func (c *Client) do(ctx context.Context, method, path string, reqData, respData
|
|||||||
request.Header.Set("Accept", "application/json")
|
request.Header.Set("Accept", "application/json")
|
||||||
request.Header.Set("User-Agent", fmt.Sprintf("ollama/%s (%s %s) Go/%s", version.Version, runtime.GOARCH, runtime.GOOS, runtime.Version()))
|
request.Header.Set("User-Agent", fmt.Sprintf("ollama/%s (%s %s) Go/%s", version.Version, runtime.GOARCH, runtime.GOOS, runtime.Version()))
|
||||||
|
|
||||||
|
if token != "" {
|
||||||
|
request.Header.Set("Authorization", token)
|
||||||
|
}
|
||||||
|
|
||||||
respObj, err := c.http.Do(request)
|
respObj, err := c.http.Do(request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -132,7 +168,7 @@ func (c *Client) do(ctx context.Context, method, path string, reqData, respData
|
|||||||
const maxBufferSize = 512 * format.KiloByte
|
const maxBufferSize = 512 * format.KiloByte
|
||||||
|
|
||||||
func (c *Client) stream(ctx context.Context, method, path string, data any, fn func([]byte) error) error {
|
func (c *Client) stream(ctx context.Context, method, path string, data any, fn func([]byte) error) error {
|
||||||
var buf *bytes.Buffer
|
var buf io.Reader
|
||||||
if data != nil {
|
if data != nil {
|
||||||
bts, err := json.Marshal(data)
|
bts, err := json.Marshal(data)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -143,6 +179,22 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
|
|||||||
}
|
}
|
||||||
|
|
||||||
requestURL := c.base.JoinPath(path)
|
requestURL := c.base.JoinPath(path)
|
||||||
|
|
||||||
|
var token string
|
||||||
|
if envconfig.UseAuth() || c.base.Hostname() == "ollama.com" {
|
||||||
|
var err error
|
||||||
|
now := strconv.FormatInt(time.Now().Unix(), 10)
|
||||||
|
chal := fmt.Sprintf("%s,%s?ts=%s", method, path, now)
|
||||||
|
token, err = getAuthorizationToken(ctx, chal)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
q := requestURL.Query()
|
||||||
|
q.Set("ts", now)
|
||||||
|
requestURL.RawQuery = q.Encode()
|
||||||
|
}
|
||||||
|
|
||||||
request, err := http.NewRequestWithContext(ctx, method, requestURL.String(), buf)
|
request, err := http.NewRequestWithContext(ctx, method, requestURL.String(), buf)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -152,6 +204,10 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
|
|||||||
request.Header.Set("Accept", "application/x-ndjson")
|
request.Header.Set("Accept", "application/x-ndjson")
|
||||||
request.Header.Set("User-Agent", fmt.Sprintf("ollama/%s (%s %s) Go/%s", version.Version, runtime.GOARCH, runtime.GOOS, runtime.Version()))
|
request.Header.Set("User-Agent", fmt.Sprintf("ollama/%s (%s %s) Go/%s", version.Version, runtime.GOARCH, runtime.GOOS, runtime.Version()))
|
||||||
|
|
||||||
|
if token != "" {
|
||||||
|
request.Header.Set("Authorization", token)
|
||||||
|
}
|
||||||
|
|
||||||
response, err := c.http.Do(request)
|
response, err := c.http.Do(request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@@ -165,6 +221,7 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
|
|||||||
for scanner.Scan() {
|
for scanner.Scan() {
|
||||||
var errorResponse struct {
|
var errorResponse struct {
|
||||||
Error string `json:"error,omitempty"`
|
Error string `json:"error,omitempty"`
|
||||||
|
SigninURL string `json:"signin_url,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
bts := scanner.Bytes()
|
bts := scanner.Bytes()
|
||||||
@@ -172,11 +229,13 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
|
|||||||
return fmt.Errorf("unmarshal: %w", err)
|
return fmt.Errorf("unmarshal: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
if errorResponse.Error != "" {
|
if response.StatusCode == http.StatusUnauthorized {
|
||||||
return errors.New(errorResponse.Error)
|
return AuthorizationError{
|
||||||
|
StatusCode: response.StatusCode,
|
||||||
|
Status: response.Status,
|
||||||
|
SigninURL: errorResponse.SigninURL,
|
||||||
}
|
}
|
||||||
|
} else if response.StatusCode >= http.StatusBadRequest {
|
||||||
if response.StatusCode >= http.StatusBadRequest {
|
|
||||||
return StatusError{
|
return StatusError{
|
||||||
StatusCode: response.StatusCode,
|
StatusCode: response.StatusCode,
|
||||||
Status: response.Status,
|
Status: response.Status,
|
||||||
@@ -184,6 +243,10 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if errorResponse.Error != "" {
|
||||||
|
return errors.New(errorResponse.Error)
|
||||||
|
}
|
||||||
|
|
||||||
if err := fn(bts); err != nil {
|
if err := fn(bts); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -378,3 +441,21 @@ func (c *Client) Version(ctx context.Context) (string, error) {
|
|||||||
|
|
||||||
return version.Version, nil
|
return version.Version, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Signout will signout a client for a local ollama server.
|
||||||
|
func (c *Client) Signout(ctx context.Context) error {
|
||||||
|
return c.do(ctx, http.MethodPost, "/api/signout", nil, nil)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Disconnect will disconnect an ollama instance from ollama.com.
|
||||||
|
func (c *Client) Disconnect(ctx context.Context, encodedKey string) error {
|
||||||
|
return c.do(ctx, http.MethodDelete, fmt.Sprintf("/api/user/keys/%s", encodedKey), nil, nil)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Client) Whoami(ctx context.Context) (*UserResponse, error) {
|
||||||
|
var resp UserResponse
|
||||||
|
if err := c.do(ctx, http.MethodPost, "/api/me", nil, &resp); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return &resp, nil
|
||||||
|
}
|
||||||
|
|||||||
@@ -1,6 +1,12 @@
|
|||||||
package api
|
package api
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"encoding/json"
|
||||||
|
"fmt"
|
||||||
|
"net/http"
|
||||||
|
"net/http/httptest"
|
||||||
|
"net/url"
|
||||||
|
"strings"
|
||||||
"testing"
|
"testing"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -43,3 +49,216 @@ func TestClientFromEnvironment(t *testing.T) {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// testError represents an internal error type with status code and message
|
||||||
|
// this is used since the error response from the server is not a standard error struct
|
||||||
|
type testError struct {
|
||||||
|
message string
|
||||||
|
statusCode int
|
||||||
|
}
|
||||||
|
|
||||||
|
func (e testError) Error() string {
|
||||||
|
return e.message
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestClientStream(t *testing.T) {
|
||||||
|
testCases := []struct {
|
||||||
|
name string
|
||||||
|
responses []any
|
||||||
|
wantErr string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "immediate error response",
|
||||||
|
responses: []any{
|
||||||
|
testError{
|
||||||
|
message: "test error message",
|
||||||
|
statusCode: http.StatusBadRequest,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
wantErr: "test error message",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "error after successful chunks, ok response",
|
||||||
|
responses: []any{
|
||||||
|
ChatResponse{Message: Message{Content: "partial response 1"}},
|
||||||
|
ChatResponse{Message: Message{Content: "partial response 2"}},
|
||||||
|
testError{
|
||||||
|
message: "mid-stream error",
|
||||||
|
statusCode: http.StatusOK,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
wantErr: "mid-stream error",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "http status error takes precedence over general error",
|
||||||
|
responses: []any{
|
||||||
|
testError{
|
||||||
|
message: "custom error message",
|
||||||
|
statusCode: http.StatusInternalServerError,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
wantErr: "500",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "successful stream completion",
|
||||||
|
responses: []any{
|
||||||
|
ChatResponse{Message: Message{Content: "chunk 1"}},
|
||||||
|
ChatResponse{Message: Message{Content: "chunk 2"}},
|
||||||
|
ChatResponse{
|
||||||
|
Message: Message{Content: "final chunk"},
|
||||||
|
Done: true,
|
||||||
|
DoneReason: "stop",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tc := range testCases {
|
||||||
|
t.Run(tc.name, func(t *testing.T) {
|
||||||
|
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
flusher, ok := w.(http.Flusher)
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected http.Flusher")
|
||||||
|
}
|
||||||
|
|
||||||
|
w.Header().Set("Content-Type", "application/x-ndjson")
|
||||||
|
|
||||||
|
for _, resp := range tc.responses {
|
||||||
|
if errResp, ok := resp.(testError); ok {
|
||||||
|
w.WriteHeader(errResp.statusCode)
|
||||||
|
err := json.NewEncoder(w).Encode(map[string]string{
|
||||||
|
"error": errResp.message,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal("failed to encode error response:", err)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := json.NewEncoder(w).Encode(resp); err != nil {
|
||||||
|
t.Fatalf("failed to encode response: %v", err)
|
||||||
|
}
|
||||||
|
flusher.Flush()
|
||||||
|
}
|
||||||
|
}))
|
||||||
|
defer ts.Close()
|
||||||
|
|
||||||
|
client := NewClient(&url.URL{Scheme: "http", Host: ts.Listener.Addr().String()}, http.DefaultClient)
|
||||||
|
|
||||||
|
var receivedChunks []ChatResponse
|
||||||
|
err := client.stream(t.Context(), http.MethodPost, "/v1/chat", nil, func(chunk []byte) error {
|
||||||
|
var resp ChatResponse
|
||||||
|
if err := json.Unmarshal(chunk, &resp); err != nil {
|
||||||
|
return fmt.Errorf("failed to unmarshal chunk: %w", err)
|
||||||
|
}
|
||||||
|
receivedChunks = append(receivedChunks, resp)
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
|
||||||
|
if tc.wantErr != "" {
|
||||||
|
if err == nil {
|
||||||
|
t.Fatal("expected error but got nil")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), tc.wantErr) {
|
||||||
|
t.Errorf("expected error containing %q, got %v", tc.wantErr, err)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
if err != nil {
|
||||||
|
t.Errorf("unexpected error: %v", err)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestClientDo(t *testing.T) {
|
||||||
|
testCases := []struct {
|
||||||
|
name string
|
||||||
|
response any
|
||||||
|
wantErr string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "immediate error response",
|
||||||
|
response: testError{
|
||||||
|
message: "test error message",
|
||||||
|
statusCode: http.StatusBadRequest,
|
||||||
|
},
|
||||||
|
wantErr: "test error message",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "server error response",
|
||||||
|
response: testError{
|
||||||
|
message: "internal error",
|
||||||
|
statusCode: http.StatusInternalServerError,
|
||||||
|
},
|
||||||
|
wantErr: "internal error",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "successful response",
|
||||||
|
response: struct {
|
||||||
|
ID string `json:"id"`
|
||||||
|
Success bool `json:"success"`
|
||||||
|
}{
|
||||||
|
ID: "msg_123",
|
||||||
|
Success: true,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tc := range testCases {
|
||||||
|
t.Run(tc.name, func(t *testing.T) {
|
||||||
|
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||||
|
if errResp, ok := tc.response.(testError); ok {
|
||||||
|
w.WriteHeader(errResp.statusCode)
|
||||||
|
err := json.NewEncoder(w).Encode(map[string]string{
|
||||||
|
"error": errResp.message,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal("failed to encode error response:", err)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
w.Header().Set("Content-Type", "application/json")
|
||||||
|
if err := json.NewEncoder(w).Encode(tc.response); err != nil {
|
||||||
|
t.Fatalf("failed to encode response: %v", err)
|
||||||
|
}
|
||||||
|
}))
|
||||||
|
defer ts.Close()
|
||||||
|
|
||||||
|
client := NewClient(&url.URL{Scheme: "http", Host: ts.Listener.Addr().String()}, http.DefaultClient)
|
||||||
|
|
||||||
|
var resp struct {
|
||||||
|
ID string `json:"id"`
|
||||||
|
Success bool `json:"success"`
|
||||||
|
}
|
||||||
|
err := client.do(t.Context(), http.MethodPost, "/v1/messages", nil, &resp)
|
||||||
|
|
||||||
|
if tc.wantErr != "" {
|
||||||
|
if err == nil {
|
||||||
|
t.Fatalf("got nil, want error %q", tc.wantErr)
|
||||||
|
}
|
||||||
|
if err.Error() != tc.wantErr {
|
||||||
|
t.Errorf("error message mismatch: got %q, want %q", err.Error(), tc.wantErr)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("got error %q, want nil", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if expectedResp, ok := tc.response.(struct {
|
||||||
|
ID string `json:"id"`
|
||||||
|
Success bool `json:"success"`
|
||||||
|
}); ok {
|
||||||
|
if resp.ID != expectedResp.ID {
|
||||||
|
t.Errorf("response ID mismatch: got %q, want %q", resp.ID, expectedResp.ID)
|
||||||
|
}
|
||||||
|
if resp.Success != expectedResp.Success {
|
||||||
|
t.Errorf("response Success mismatch: got %v, want %v", resp.Success, expectedResp.Success)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|||||||
18
api/examples/README.md
Normal file
18
api/examples/README.md
Normal file
@@ -0,0 +1,18 @@
|
|||||||
|
# Ollama API Examples
|
||||||
|
|
||||||
|
Run the examples in this directory with:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
go run example_name/main.go
|
||||||
|
```
|
||||||
|
|
||||||
|
## Chat - Chat with a model
|
||||||
|
- [chat/main.go](chat/main.go)
|
||||||
|
|
||||||
|
## Generate - Generate text from a model
|
||||||
|
- [generate/main.go](generate/main.go)
|
||||||
|
- [generate-streaming/main.go](generate-streaming/main.go)
|
||||||
|
|
||||||
|
## Pull - Pull a model
|
||||||
|
- [pull-progress/main.go](pull-progress/main.go)
|
||||||
|
|
||||||
@@ -35,7 +35,7 @@ func main() {
|
|||||||
|
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
req := &api.ChatRequest{
|
req := &api.ChatRequest{
|
||||||
Model: "llama3.1",
|
Model: "llama3.2",
|
||||||
Messages: messages,
|
Messages: messages,
|
||||||
}
|
}
|
||||||
|
|
||||||
479
api/types.go
479
api/types.go
@@ -10,9 +10,14 @@ import (
|
|||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
"github.com/google/uuid"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/envconfig"
|
||||||
|
"github.com/ollama/ollama/types/model"
|
||||||
)
|
)
|
||||||
|
|
||||||
// StatusError is an error with and HTTP status code.
|
// StatusError is an error with an HTTP status code and message.
|
||||||
type StatusError struct {
|
type StatusError struct {
|
||||||
StatusCode int
|
StatusCode int
|
||||||
Status string
|
Status string
|
||||||
@@ -33,6 +38,19 @@ func (e StatusError) Error() string {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
type AuthorizationError struct {
|
||||||
|
StatusCode int
|
||||||
|
Status string
|
||||||
|
SigninURL string `json:"signin_url"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (e AuthorizationError) Error() string {
|
||||||
|
if e.Status != "" {
|
||||||
|
return e.Status
|
||||||
|
}
|
||||||
|
return "something went wrong, please see the ollama server logs for details"
|
||||||
|
}
|
||||||
|
|
||||||
// ImageData represents the raw binary data of an image file.
|
// ImageData represents the raw binary data of an image file.
|
||||||
type ImageData []byte
|
type ImageData []byte
|
||||||
|
|
||||||
@@ -57,7 +75,7 @@ type GenerateRequest struct {
|
|||||||
Template string `json:"template"`
|
Template string `json:"template"`
|
||||||
|
|
||||||
// Context is the context parameter returned from a previous call to
|
// Context is the context parameter returned from a previous call to
|
||||||
// Generate call. It can be used to keep a short conversational memory.
|
// [Client.Generate]. It can be used to keep a short conversational memory.
|
||||||
Context []int `json:"context,omitempty"`
|
Context []int `json:"context,omitempty"`
|
||||||
|
|
||||||
// Stream specifies whether the response is streaming; it is true by default.
|
// Stream specifies whether the response is streaming; it is true by default.
|
||||||
@@ -67,19 +85,38 @@ type GenerateRequest struct {
|
|||||||
Raw bool `json:"raw,omitempty"`
|
Raw bool `json:"raw,omitempty"`
|
||||||
|
|
||||||
// Format specifies the format to return a response in.
|
// Format specifies the format to return a response in.
|
||||||
Format string `json:"format"`
|
Format json.RawMessage `json:"format,omitempty"`
|
||||||
|
|
||||||
// KeepAlive controls how long the model will stay loaded in memory following
|
// KeepAlive controls how long the model will stay loaded in memory following
|
||||||
// this request.
|
// this request.
|
||||||
KeepAlive *Duration `json:"keep_alive,omitempty"`
|
KeepAlive *Duration `json:"keep_alive,omitempty"`
|
||||||
|
|
||||||
// Images is an optional list of base64-encoded images accompanying this
|
// Images is an optional list of raw image bytes accompanying this
|
||||||
// request, for multimodal models.
|
// request, for multimodal models.
|
||||||
Images []ImageData `json:"images,omitempty"`
|
Images []ImageData `json:"images,omitempty"`
|
||||||
|
|
||||||
// Options lists model-specific options. For example, temperature can be
|
// Options lists model-specific options. For example, temperature can be
|
||||||
// set through this field, if the model supports it.
|
// set through this field, if the model supports it.
|
||||||
Options map[string]interface{} `json:"options"`
|
Options map[string]any `json:"options"`
|
||||||
|
|
||||||
|
// Think controls whether thinking/reasoning models will think before
|
||||||
|
// responding. Can be a boolean (true/false) or a string ("high", "medium", "low")
|
||||||
|
// for supported models. Needs to be a pointer so we can distinguish between false
|
||||||
|
// (request that thinking _not_ be used) and unset (use the old behavior
|
||||||
|
// before this option was introduced)
|
||||||
|
Think *ThinkValue `json:"think,omitempty"`
|
||||||
|
|
||||||
|
// Truncate is a boolean that, when set to true, truncates the chat history messages
|
||||||
|
// if the rendered prompt exceeds the context length limit.
|
||||||
|
Truncate *bool `json:"truncate,omitempty"`
|
||||||
|
|
||||||
|
// Shift is a boolean that, when set to true, shifts the chat history
|
||||||
|
// when hitting the context length limit instead of erroring.
|
||||||
|
Shift *bool `json:"shift,omitempty"`
|
||||||
|
|
||||||
|
// DebugRenderOnly is a debug option that, when set to true, returns the rendered
|
||||||
|
// template instead of calling the model.
|
||||||
|
DebugRenderOnly bool `json:"_debug_render_only,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ChatRequest describes a request sent by [Client.Chat].
|
// ChatRequest describes a request sent by [Client.Chat].
|
||||||
@@ -90,21 +127,38 @@ type ChatRequest struct {
|
|||||||
// Messages is the messages of the chat - can be used to keep a chat memory.
|
// Messages is the messages of the chat - can be used to keep a chat memory.
|
||||||
Messages []Message `json:"messages"`
|
Messages []Message `json:"messages"`
|
||||||
|
|
||||||
// Stream enable streaming of returned response; true by default.
|
// Stream enables streaming of returned responses; true by default.
|
||||||
Stream *bool `json:"stream,omitempty"`
|
Stream *bool `json:"stream,omitempty"`
|
||||||
|
|
||||||
// Format is the format to return the response in (e.g. "json").
|
// Format is the format to return the response in (e.g. "json").
|
||||||
Format string `json:"format"`
|
Format json.RawMessage `json:"format,omitempty"`
|
||||||
|
|
||||||
// KeepAlive controls how long the model will stay loaded into memory
|
// KeepAlive controls how long the model will stay loaded into memory
|
||||||
// followin the request.
|
// following the request.
|
||||||
KeepAlive *Duration `json:"keep_alive,omitempty"`
|
KeepAlive *Duration `json:"keep_alive,omitempty"`
|
||||||
|
|
||||||
// Tools is an optional list of tools the model has access to.
|
// Tools is an optional list of tools the model has access to.
|
||||||
Tools `json:"tools,omitempty"`
|
Tools `json:"tools,omitempty"`
|
||||||
|
|
||||||
// Options lists model-specific options.
|
// Options lists model-specific options.
|
||||||
Options map[string]interface{} `json:"options"`
|
Options map[string]any `json:"options"`
|
||||||
|
|
||||||
|
// Think controls whether thinking/reasoning models will think before
|
||||||
|
// responding. Can be a boolean (true/false) or a string ("high", "medium", "low")
|
||||||
|
// for supported models.
|
||||||
|
Think *ThinkValue `json:"think,omitempty"`
|
||||||
|
|
||||||
|
// Truncate is a boolean that, when set to true, truncates the chat history messages
|
||||||
|
// if the rendered prompt exceeds the context length limit.
|
||||||
|
Truncate *bool `json:"truncate,omitempty"`
|
||||||
|
|
||||||
|
// Shift is a boolean that, when set to true, shifts the chat history
|
||||||
|
// when hitting the context length limit instead of erroring.
|
||||||
|
Shift *bool `json:"shift,omitempty"`
|
||||||
|
|
||||||
|
// DebugRenderOnly is a debug option that, when set to true, returns the rendered
|
||||||
|
// template instead of calling the model.
|
||||||
|
DebugRenderOnly bool `json:"_debug_render_only,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type Tools []Tool
|
type Tools []Tool
|
||||||
@@ -125,8 +179,12 @@ func (t Tool) String() string {
|
|||||||
type Message struct {
|
type Message struct {
|
||||||
Role string `json:"role"`
|
Role string `json:"role"`
|
||||||
Content string `json:"content"`
|
Content string `json:"content"`
|
||||||
|
// Thinking contains the text that was inside thinking tags in the
|
||||||
|
// original model output when ChatRequest.Think is enabled.
|
||||||
|
Thinking string `json:"thinking,omitempty"`
|
||||||
Images []ImageData `json:"images,omitempty"`
|
Images []ImageData `json:"images,omitempty"`
|
||||||
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
|
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
|
||||||
|
ToolName string `json:"tool_name,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func (m *Message) UnmarshalJSON(b []byte) error {
|
func (m *Message) UnmarshalJSON(b []byte) error {
|
||||||
@@ -146,6 +204,7 @@ type ToolCall struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
type ToolCallFunction struct {
|
type ToolCallFunction struct {
|
||||||
|
Index int `json:"index"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Arguments ToolCallFunctionArguments `json:"arguments"`
|
Arguments ToolCallFunctionArguments `json:"arguments"`
|
||||||
}
|
}
|
||||||
@@ -159,21 +218,122 @@ func (t *ToolCallFunctionArguments) String() string {
|
|||||||
|
|
||||||
type Tool struct {
|
type Tool struct {
|
||||||
Type string `json:"type"`
|
Type string `json:"type"`
|
||||||
|
Items any `json:"items,omitempty"`
|
||||||
Function ToolFunction `json:"function"`
|
Function ToolFunction `json:"function"`
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// PropertyType can be either a string or an array of strings
|
||||||
|
type PropertyType []string
|
||||||
|
|
||||||
|
// UnmarshalJSON implements the json.Unmarshaler interface
|
||||||
|
func (pt *PropertyType) UnmarshalJSON(data []byte) error {
|
||||||
|
// Try to unmarshal as a string first
|
||||||
|
var s string
|
||||||
|
if err := json.Unmarshal(data, &s); err == nil {
|
||||||
|
*pt = []string{s}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// If that fails, try to unmarshal as an array of strings
|
||||||
|
var a []string
|
||||||
|
if err := json.Unmarshal(data, &a); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
*pt = a
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// MarshalJSON implements the json.Marshaler interface
|
||||||
|
func (pt PropertyType) MarshalJSON() ([]byte, error) {
|
||||||
|
if len(pt) == 1 {
|
||||||
|
// If there's only one type, marshal as a string
|
||||||
|
return json.Marshal(pt[0])
|
||||||
|
}
|
||||||
|
// Otherwise marshal as an array
|
||||||
|
return json.Marshal([]string(pt))
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns a string representation of the PropertyType
|
||||||
|
func (pt PropertyType) String() string {
|
||||||
|
if len(pt) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
if len(pt) == 1 {
|
||||||
|
return pt[0]
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%v", []string(pt))
|
||||||
|
}
|
||||||
|
|
||||||
|
type ToolProperty struct {
|
||||||
|
AnyOf []ToolProperty `json:"anyOf,omitempty"`
|
||||||
|
Type PropertyType `json:"type,omitempty"`
|
||||||
|
Items any `json:"items,omitempty"`
|
||||||
|
Description string `json:"description,omitempty"`
|
||||||
|
Enum []any `json:"enum,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// ToTypeScriptType converts a ToolProperty to a TypeScript type string
|
||||||
|
func (tp ToolProperty) ToTypeScriptType() string {
|
||||||
|
if len(tp.AnyOf) > 0 {
|
||||||
|
var types []string
|
||||||
|
for _, anyOf := range tp.AnyOf {
|
||||||
|
types = append(types, anyOf.ToTypeScriptType())
|
||||||
|
}
|
||||||
|
return strings.Join(types, " | ")
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(tp.Type) == 0 {
|
||||||
|
return "any"
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(tp.Type) == 1 {
|
||||||
|
return mapToTypeScriptType(tp.Type[0])
|
||||||
|
}
|
||||||
|
|
||||||
|
var types []string
|
||||||
|
for _, t := range tp.Type {
|
||||||
|
types = append(types, mapToTypeScriptType(t))
|
||||||
|
}
|
||||||
|
return strings.Join(types, " | ")
|
||||||
|
}
|
||||||
|
|
||||||
|
// mapToTypeScriptType maps JSON Schema types to TypeScript types
|
||||||
|
func mapToTypeScriptType(jsonType string) string {
|
||||||
|
switch jsonType {
|
||||||
|
case "string":
|
||||||
|
return "string"
|
||||||
|
case "number", "integer":
|
||||||
|
return "number"
|
||||||
|
case "boolean":
|
||||||
|
return "boolean"
|
||||||
|
case "array":
|
||||||
|
return "any[]"
|
||||||
|
case "object":
|
||||||
|
return "Record<string, any>"
|
||||||
|
case "null":
|
||||||
|
return "null"
|
||||||
|
default:
|
||||||
|
return "any"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type ToolFunctionParameters struct {
|
||||||
|
Type string `json:"type"`
|
||||||
|
Defs any `json:"$defs,omitempty"`
|
||||||
|
Items any `json:"items,omitempty"`
|
||||||
|
Required []string `json:"required"`
|
||||||
|
Properties map[string]ToolProperty `json:"properties"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (t *ToolFunctionParameters) String() string {
|
||||||
|
bts, _ := json.Marshal(t)
|
||||||
|
return string(bts)
|
||||||
|
}
|
||||||
|
|
||||||
type ToolFunction struct {
|
type ToolFunction struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description,omitempty"`
|
||||||
Parameters struct {
|
Parameters ToolFunctionParameters `json:"parameters"`
|
||||||
Type string `json:"type"`
|
|
||||||
Required []string `json:"required"`
|
|
||||||
Properties map[string]struct {
|
|
||||||
Type string `json:"type"`
|
|
||||||
Description string `json:"description"`
|
|
||||||
Enum []string `json:"enum,omitempty"`
|
|
||||||
} `json:"properties"`
|
|
||||||
} `json:"parameters"`
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (t *ToolFunction) String() string {
|
func (t *ToolFunction) String() string {
|
||||||
@@ -184,16 +344,38 @@ func (t *ToolFunction) String() string {
|
|||||||
 // ChatResponse is the response returned by [Client.Chat]. Its fields are
 // similar to [GenerateResponse].
 type ChatResponse struct {
+    // Model is the model name that generated the response.
     Model string `json:"model"`
-    CreatedAt  time.Time `json:"created_at"`
-    Message    Message   `json:"message"`
-    DoneReason string    `json:"done_reason,omitempty"`
+
+    // RemoteModel is the name of the upstream model that generated the response.
+    RemoteModel string `json:"remote_model,omitempty"`
+
+    // RemoteHost is the URL of the upstream Ollama host that generated the response.
+    RemoteHost string `json:"remote_host,omitempty"`
+
+    // CreatedAt is the timestamp of the response.
+    CreatedAt time.Time `json:"created_at"`
+
+    // Message contains the message or part of a message from the model.
+    Message Message `json:"message"`
 
+    // Done specifies if the response is complete.
     Done bool `json:"done"`
+
+    // DoneReason is the reason the model stopped generating text.
+    DoneReason string `json:"done_reason,omitempty"`
+
+    DebugInfo *DebugInfo `json:"_debug_info,omitempty"`
 
     Metrics
 }
 
+// DebugInfo contains debug information for template rendering
+type DebugInfo struct {
+    RenderedTemplate string `json:"rendered_template"`
+    ImageCount       int    `json:"image_count,omitempty"`
+}
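When streaming, these fields arrive incrementally and Done/DoneReason close the stream. A hedged sketch of how a caller might consume them; the client, context, and message slice are assumed, and the callback signature reflects the existing api.Client.Chat API rather than anything introduced by this change:

    // Sketch only; client from api.ClientFromEnvironment(), msgs is a []api.Message.
    err := client.Chat(ctx, &api.ChatRequest{Model: "llama3.2", Messages: msgs},
        func(r api.ChatResponse) error {
            fmt.Print(r.Message.Content) // partial content per chunk
            if r.Done {
                fmt.Println("\nstopped:", r.DoneReason)
            }
            return nil
        })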
 type Metrics struct {
     TotalDuration time.Duration `json:"total_duration,omitempty"`
     LoadDuration  time.Duration `json:"load_duration,omitempty"`

@@ -203,8 +385,8 @@ type Metrics struct {
     EvalDuration time.Duration `json:"eval_duration,omitempty"`
 }
 
-// Options specified in [GenerateRequest], if you add a new option here add it
-// to the API docs also.
+// Options specified in [GenerateRequest]. If you add a new option here, also
+// add it to the API docs.
 type Options struct {
     Runner
 
@@ -215,17 +397,12 @@ type Options struct {
     TopK             int      `json:"top_k,omitempty"`
     TopP             float32  `json:"top_p,omitempty"`
     MinP             float32  `json:"min_p,omitempty"`
-    TFSZ             float32  `json:"tfs_z,omitempty"`
     TypicalP         float32  `json:"typical_p,omitempty"`
     RepeatLastN      int      `json:"repeat_last_n,omitempty"`
     Temperature      float32  `json:"temperature,omitempty"`
     RepeatPenalty    float32  `json:"repeat_penalty,omitempty"`
     PresencePenalty  float32  `json:"presence_penalty,omitempty"`
     FrequencyPenalty float32  `json:"frequency_penalty,omitempty"`
-    Mirostat         int      `json:"mirostat,omitempty"`
-    MirostatTau      float32  `json:"mirostat_tau,omitempty"`
-    MirostatEta      float32  `json:"mirostat_eta,omitempty"`
-    PenalizeNewline  bool     `json:"penalize_newline,omitempty"`
     Stop             []string `json:"stop,omitempty"`
 }
@@ -235,12 +412,7 @@ type Runner struct {
     NumBatch  int   `json:"num_batch,omitempty"`
     NumGPU    int   `json:"num_gpu,omitempty"`
     MainGPU   int   `json:"main_gpu,omitempty"`
-    LowVRAM   bool  `json:"low_vram,omitempty"`
-    F16KV     bool  `json:"f16_kv,omitempty"`
-    LogitsAll bool  `json:"logits_all,omitempty"`
-    VocabOnly bool  `json:"vocab_only,omitempty"`
     UseMMap   *bool `json:"use_mmap,omitempty"`
-    UseMLock  bool  `json:"use_mlock,omitempty"`
     NumThread int   `json:"num_thread,omitempty"`
 }
@@ -256,10 +428,14 @@ type EmbedRequest struct {
     // this request.
     KeepAlive *Duration `json:"keep_alive,omitempty"`
 
+    // Truncate truncates the input to fit the model's max sequence length.
     Truncate *bool `json:"truncate,omitempty"`
 
+    // Dimensions truncates the output embedding to the specified dimension.
+    Dimensions int `json:"dimensions,omitempty"`
+
     // Options lists model-specific options.
-    Options map[string]interface{} `json:"options"`
+    Options map[string]any `json:"options"`
 }
 
 // EmbedResponse is the response from [Client.Embed].

@@ -285,7 +461,7 @@ type EmbeddingRequest struct {
     KeepAlive *Duration `json:"keep_alive,omitempty"`
 
     // Options lists model-specific options.
-    Options map[string]interface{} `json:"options"`
+    Options map[string]any `json:"options"`
 }
 
 // EmbeddingResponse is the response from [Client.Embeddings].
@@ -295,17 +471,50 @@ type EmbeddingResponse struct {
 
 // CreateRequest is the request passed to [Client.Create].
 type CreateRequest struct {
+    // Model is the model name to create.
     Model string `json:"model"`
-    Modelfile string `json:"modelfile"`
 
+    // Stream specifies whether the response is streaming; it is true by default.
     Stream *bool `json:"stream,omitempty"`
 
+    // Quantize is the quantization format for the model; leave blank to not change the quantization level.
     Quantize string `json:"quantize,omitempty"`
+
+    // From is the name of the model or file to use as the source.
+    From string `json:"from,omitempty"`
+
+    // RemoteHost is the URL of the upstream ollama API for the model (if any).
+    RemoteHost string `json:"remote_host,omitempty"`
+
+    // Files is a map of files include when creating the model.
+    Files map[string]string `json:"files,omitempty"`
+
+    // Adapters is a map of LoRA adapters to include when creating the model.
+    Adapters map[string]string `json:"adapters,omitempty"`
+
+    // Template is the template used when constructing a request to the model.
+    Template string `json:"template,omitempty"`
+
+    // License is a string or list of strings for licenses.
+    License any `json:"license,omitempty"`
+
+    // System is the system prompt for the model.
+    System string `json:"system,omitempty"`
+
+    // Parameters is a map of hyper-parameters which are applied to the model.
+    Parameters map[string]any `json:"parameters,omitempty"`
+
+    // Messages is a list of messages added to the model before chat and generation requests.
+    Messages []Message `json:"messages,omitempty"`
+
+    Renderer string `json:"renderer,omitempty"`
+    Parser   string `json:"parser,omitempty"`
+
+    // Info is a map of additional information for the model
+    Info map[string]any `json:"info,omitempty"`
 
     // Deprecated: set the model name with Model instead
     Name string `json:"name"`
 
-    // Deprecated: set the file content with Modelfile instead
-    Path string `json:"path"`
-
     // Deprecated: use Quantize instead
     Quantization string `json:"quantization,omitempty"`
 }
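With the Modelfile and Path fields gone, a model is described through the structured fields above. A hedged sketch of a create call under these assumptions; the model names, prompt, and parameter values are illustrative only, and the progress-callback style mirrors the existing api.Client.Create API:

    // Sketch only; client and ctx are assumed to exist.
    req := &api.CreateRequest{
        Model:      "my-model",                        // name to create (illustrative)
        From:       "llama3.2",                        // source model (illustrative)
        System:     "You are a terse assistant.",
        Parameters: map[string]any{"temperature": 0.2},
    }
    err := client.Create(ctx, req, func(p api.ProgressResponse) error {
        fmt.Println(p.Status) // report create/quantize progress
        return nil
    })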
@@ -327,7 +536,7 @@ type ShowRequest struct {
     Template string `json:"template"`
     Verbose  bool   `json:"verbose"`
 
-    Options map[string]interface{} `json:"options"`
+    Options map[string]any `json:"options"`
 
     // Deprecated: set the model name with Model instead
     Name string `json:"name"`
@@ -340,10 +549,16 @@ type ShowResponse struct {
     Parameters    string             `json:"parameters,omitempty"`
     Template      string             `json:"template,omitempty"`
     System        string             `json:"system,omitempty"`
+    Renderer      string             `json:"renderer,omitempty"`
+    Parser        string             `json:"parser,omitempty"`
     Details       ModelDetails       `json:"details,omitempty"`
     Messages      []Message          `json:"messages,omitempty"`
+    RemoteModel   string             `json:"remote_model,omitempty"`
+    RemoteHost    string             `json:"remote_host,omitempty"`
     ModelInfo     map[string]any     `json:"model_info,omitempty"`
     ProjectorInfo map[string]any     `json:"projector_info,omitempty"`
+    Tensors       []Tensor           `json:"tensors,omitempty"`
+    Capabilities  []model.Capability `json:"capabilities,omitempty"`
     ModifiedAt    time.Time          `json:"modified_at,omitempty"`
 }
@@ -356,9 +571,9 @@ type CopyRequest struct {
 
 // PullRequest is the request passed to [Client.Pull].
 type PullRequest struct {
     Model    string `json:"model"`
-    Insecure bool   `json:"insecure,omitempty"`
-    Username string `json:"username"`
-    Password string `json:"password"`
+    Insecure bool   `json:"insecure,omitempty"` // Deprecated: ignored
+    Username string `json:"username"`           // Deprecated: ignored
+    Password string `json:"password"`           // Deprecated: ignored
     Stream   *bool  `json:"stream,omitempty"`
 
     // Deprecated: set the model name with Model instead
@@ -400,6 +615,8 @@ type ProcessResponse struct {
 type ListModelResponse struct {
     Name        string    `json:"name"`
     Model       string    `json:"model"`
+    RemoteModel string    `json:"remote_model,omitempty"`
+    RemoteHost  string    `json:"remote_host,omitempty"`
     ModifiedAt  time.Time `json:"modified_at"`
     Size        int64     `json:"size"`
     Digest      string    `json:"digest"`
@@ -415,13 +632,7 @@ type ProcessModelResponse struct {
     Details       ModelDetails `json:"details,omitempty"`
     ExpiresAt     time.Time    `json:"expires_at"`
     SizeVRAM      int64        `json:"size_vram"`
-}
-
-type RetrieveModelResponse struct {
-    Id      string `json:"id"`
-    Object  string `json:"object"`
-    Created int64  `json:"created"`
-    OwnedBy string `json:"owned_by"`
+    ContextLength int          `json:"context_length"`
 }
 
 type TokenResponse struct {
@@ -433,12 +644,22 @@ type GenerateResponse struct {
     // Model is the model name that generated the response.
     Model string `json:"model"`
 
+    // RemoteModel is the name of the upstream model that generated the response.
+    RemoteModel string `json:"remote_model,omitempty"`
+
+    // RemoteHost is the URL of the upstream Ollama host that generated the response.
+    RemoteHost string `json:"remote_host,omitempty"`
+
     // CreatedAt is the timestamp of the response.
     CreatedAt time.Time `json:"created_at"`
 
     // Response is the textual response itself.
     Response string `json:"response"`
 
+    // Thinking contains the text that was inside thinking tags in the
+    // original model output when ChatRequest.Think is enabled.
+    Thinking string `json:"thinking,omitempty"`
+
     // Done specifies if the response is complete.
     Done bool `json:"done"`
@@ -450,6 +671,10 @@ type GenerateResponse struct {
     Context []int `json:"context,omitempty"`
 
     Metrics
+
+    ToolCalls []ToolCall `json:"tool_calls,omitempty"`
+
+    DebugInfo *DebugInfo `json:"_debug_info,omitempty"`
 }
 
 // ModelDetails provides details about a model.
@@ -462,6 +687,25 @@ type ModelDetails struct {
     QuantizationLevel string `json:"quantization_level"`
 }
 
+// UserResponse provides information about a user.
+type UserResponse struct {
+    ID        uuid.UUID `json:"id"`
+    Email     string    `json:"email"`
+    Name      string    `json:"name"`
+    Bio       string    `json:"bio,omitempty"`
+    AvatarURL string    `json:"avatarurl,omitempty"`
+    FirstName string    `json:"firstname,omitempty"`
+    LastName  string    `json:"lastname,omitempty"`
+    Plan      string    `json:"plan,omitempty"`
+}
+
+// Tensor describes the metadata for a given tensor.
+type Tensor struct {
+    Name  string   `json:"name"`
+    Type  string   `json:"type"`
+    Shape []uint64 `json:"shape"`
+}
+
 func (m *Metrics) Summary() {
     if m.TotalDuration > 0 {
         fmt.Fprintf(os.Stderr, "total duration: %v\n", m.TotalDuration)
@@ -490,7 +734,7 @@ func (m *Metrics) Summary() {
     }
 }
 
-func (opts *Options) FromMap(m map[string]interface{}) error {
+func (opts *Options) FromMap(m map[string]any) error {
     valueOpts := reflect.ValueOf(opts).Elem() // names of the fields in the options struct
     typeOpts := reflect.TypeOf(opts).Elem()   // types of the fields in the options struct
 
@@ -547,12 +791,12 @@ func (opts *Options) FromMap(m map[string]interface{}) error {
             }
             field.SetString(val)
         case reflect.Slice:
-            // JSON unmarshals to []interface{}, not []string
-            val, ok := val.([]interface{})
+            // JSON unmarshals to []any, not []string
+            val, ok := val.([]any)
             if !ok {
                 return fmt.Errorf("option %q must be of type array", key)
             }
-            // convert []interface{} to []string
+            // convert []any to []string
             slice := make([]string, len(val))
             for i, item := range val {
                 str, ok := item.(string)
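FromMap copies JSON-decoded option values onto an Options struct by json tag; as the hunk above shows, arrays arrive as []any and are converted to []string. A minimal sketch of the intended round trip (the option values are illustrative):

    // Sketch only.
    opts := api.DefaultOptions()
    err := opts.FromMap(map[string]any{
        "temperature": 0.1,
        "stop":        []any{"\n", "User:"}, // JSON arrays decode to []any
    })
    // opts.Stop is now []string{"\n", "User:"} when err == nil.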
@@ -594,32 +838,131 @@ func DefaultOptions() Options {
         Temperature:      0.8,
         TopK:             40,
         TopP:             0.9,
-        TFSZ:             1.0,
         TypicalP:         1.0,
         RepeatLastN:      64,
         RepeatPenalty:    1.1,
         PresencePenalty:  0.0,
         FrequencyPenalty: 0.0,
-        Mirostat:         0,
-        MirostatTau:      5.0,
-        MirostatEta:      0.1,
-        PenalizeNewline:  true,
         Seed:             -1,
 
         Runner: Runner{
             // options set when the model is loaded
-            NumCtx:    2048,
+            NumCtx:    int(envconfig.ContextLength()),
             NumBatch:  512,
             NumGPU:    -1, // -1 here indicates that NumGPU should be set dynamically
             NumThread: 0,  // let the runtime decide
-            LowVRAM:   false,
-            F16KV:     true,
-            UseMLock:  false,
             UseMMap:   nil,
         },
     }
 }
 
+// ThinkValue represents a value that can be a boolean or a string ("high", "medium", "low")
+type ThinkValue struct {
+    // Value can be a bool or string
+    Value interface{}
+}
+
+// IsValid checks if the ThinkValue is valid
+func (t *ThinkValue) IsValid() bool {
+    if t == nil || t.Value == nil {
+        return true // nil is valid (means not set)
+    }
+
+    switch v := t.Value.(type) {
+    case bool:
+        return true
+    case string:
+        return v == "high" || v == "medium" || v == "low"
+    default:
+        return false
+    }
+}
+
+// IsBool returns true if the value is a boolean
+func (t *ThinkValue) IsBool() bool {
+    if t == nil || t.Value == nil {
+        return false
+    }
+    _, ok := t.Value.(bool)
+    return ok
+}
+
+// IsString returns true if the value is a string
+func (t *ThinkValue) IsString() bool {
+    if t == nil || t.Value == nil {
+        return false
+    }
+    _, ok := t.Value.(string)
+    return ok
+}
+
+// Bool returns the value as a bool (true if enabled in any way)
+func (t *ThinkValue) Bool() bool {
+    if t == nil || t.Value == nil {
+        return false
+    }
+
+    switch v := t.Value.(type) {
+    case bool:
+        return v
+    case string:
+        // Any string value ("high", "medium", "low") means thinking is enabled
+        return v == "high" || v == "medium" || v == "low"
+    default:
+        return false
+    }
+}
+
+// String returns the value as a string
+func (t *ThinkValue) String() string {
+    if t == nil || t.Value == nil {
+        return ""
+    }
+
+    switch v := t.Value.(type) {
+    case string:
+        return v
+    case bool:
+        if v {
+            return "medium" // Default level when just true
+        }
+        return ""
+    default:
+        return ""
+    }
+}
+
+// UnmarshalJSON implements json.Unmarshaler
+func (t *ThinkValue) UnmarshalJSON(data []byte) error {
+    // Try to unmarshal as bool first
+    var b bool
+    if err := json.Unmarshal(data, &b); err == nil {
+        t.Value = b
+        return nil
+    }
+
+    // Try to unmarshal as string
+    var s string
+    if err := json.Unmarshal(data, &s); err == nil {
+        // Validate string values
+        if s != "high" && s != "medium" && s != "low" {
+            return fmt.Errorf("invalid think value: %q (must be \"high\", \"medium\", \"low\", true, or false)", s)
+        }
+        t.Value = s
+        return nil
+    }
+
+    return fmt.Errorf("think must be a boolean or string (\"high\", \"medium\", \"low\", true, or false)")
+}
+
+// MarshalJSON implements json.Marshaler
+func (t *ThinkValue) MarshalJSON() ([]byte, error) {
+    if t == nil || t.Value == nil {
+        return []byte("null"), nil
+    }
+    return json.Marshal(t.Value)
+}
 
 type Duration struct {
     time.Duration
 }
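ThinkValue accepts either a boolean or one of the level strings, so all of the request bodies below decode into the same field. A short sketch, relying only on the GenerateRequest.Think field exercised by the tests further down (the JSON payloads are illustrative):

    // Sketch only.
    var req api.GenerateRequest
    _ = json.Unmarshal([]byte(`{"think": true}`), &req)  // req.Think.Bool() == true, String() == "medium"
    _ = json.Unmarshal([]byte(`{"think": "low"}`), &req) // req.Think.IsString() == true
    err := json.Unmarshal([]byte(`{"think": "later"}`), &req)
    // err != nil: only "high", "medium", "low", true, or false are accepted.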
@@ -644,7 +987,7 @@ func (d *Duration) UnmarshalJSON(b []byte) (err error) {
         if t < 0 {
             d.Duration = time.Duration(math.MaxInt64)
         } else {
-            d.Duration = time.Duration(int(t) * int(time.Second))
+            d.Duration = time.Duration(t * float64(time.Second))
         }
     case string:
         d.Duration, err = time.ParseDuration(t)
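The change above keeps fractional keep-alive values instead of truncating them to whole seconds, which is what the updated test expectation below (42.5 becoming 42500ms) checks. A minimal sketch:

    // Sketch only.
    var d api.Duration
    _ = json.Unmarshal([]byte(`42.5`), &d)
    fmt.Println(d.Duration) // 42.5s (previously truncated to 42s)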
@@ -662,7 +1005,7 @@ func (d *Duration) UnmarshalJSON(b []byte) (err error) {
 }
 
 // FormatParams converts specified parameter options to their correct types
-func FormatParams(params map[string][]string) (map[string]interface{}, error) {
+func FormatParams(params map[string][]string) (map[string]any, error) {
     opts := Options{}
     valueOpts := reflect.ValueOf(&opts).Elem() // names of the fields in the options struct
     typeOpts := reflect.TypeOf(opts)           // types of the fields in the options struct

@@ -676,7 +1019,7 @@ func FormatParams(params map[string][]string) (map[string]interface{}, error) {
     }
 }
 
-    out := make(map[string]interface{})
+    out := make(map[string]any)
     // iterate params and set values based on json struct tags
     for key, vals := range params {
         if opt, ok := jsonOpts[key]; !ok {
@@ -17,6 +17,11 @@ func TestKeepAliveParsingFromJSON(t *testing.T) {
         req  string
         exp  *Duration
     }{
+        {
+            name: "Unset",
+            req:  `{ }`,
+            exp:  nil,
+        },
         {
             name: "Positive Integer",
             req:  `{ "keep_alive": 42 }`,

@@ -25,7 +30,7 @@ func TestKeepAliveParsingFromJSON(t *testing.T) {
         {
             name: "Positive Float",
             req:  `{ "keep_alive": 42.5 }`,
-            exp:  &Duration{42 * time.Second},
+            exp:  &Duration{42500 * time.Millisecond},
         },
         {
             name: "Positive Integer String",

@@ -134,7 +139,7 @@ func TestUseMmapParsingFromJSON(t *testing.T) {
 
     for _, test := range tests {
         t.Run(test.name, func(t *testing.T) {
-            var oMap map[string]interface{}
+            var oMap map[string]any
             err := json.Unmarshal([]byte(test.req), &oMap)
             require.NoError(t, err)
             opts := DefaultOptions()

@@ -231,3 +236,279 @@ func TestMessage_UnmarshalJSON(t *testing.T) {
         }
     }
 }
+
+func TestToolFunction_UnmarshalJSON(t *testing.T) {
+    tests := []struct {
+        name    string
+        input   string
+        wantErr string
+    }{
+        {
+            name: "valid enum with same types",
+            input: `{
+                "name": "test", "description": "test function",
+                "parameters": {
+                    "type": "object", "required": ["test"],
+                    "properties": {"test": {"type": "string", "description": "test prop", "enum": ["a", "b", "c"]}}
+                }}`,
+            wantErr: "",
+        },
+        {
+            name: "empty enum array",
+            input: `{
+                "name": "test", "description": "test function",
+                "parameters": {
+                    "type": "object", "required": ["test"],
+                    "properties": {"test": {"type": "string", "description": "test prop", "enum": []}}
+                }}`,
+            wantErr: "",
+        },
+    }
+
+    for _, tt := range tests {
+        t.Run(tt.name, func(t *testing.T) {
+            var tf ToolFunction
+            err := json.Unmarshal([]byte(tt.input), &tf)
+
+            if tt.wantErr != "" {
+                require.Error(t, err)
+                assert.Contains(t, err.Error(), tt.wantErr)
+            } else {
+                require.NoError(t, err)
+            }
+        })
+    }
+}
+
+func TestToolCallFunction_IndexAlwaysMarshals(t *testing.T) {
+    fn := ToolCallFunction{
+        Name:      "echo",
+        Arguments: ToolCallFunctionArguments{"message": "hi"},
+    }
+
+    data, err := json.Marshal(fn)
+    require.NoError(t, err)
+
+    raw := map[string]any{}
+    require.NoError(t, json.Unmarshal(data, &raw))
+    require.Contains(t, raw, "index")
+    assert.Equal(t, float64(0), raw["index"])
+
+    fn.Index = 3
+    data, err = json.Marshal(fn)
+    require.NoError(t, err)
+
+    raw = map[string]any{}
+    require.NoError(t, json.Unmarshal(data, &raw))
+    require.Contains(t, raw, "index")
+    assert.Equal(t, float64(3), raw["index"])
+}
+
+func TestPropertyType_UnmarshalJSON(t *testing.T) {
+    tests := []struct {
+        name     string
+        input    string
+        expected PropertyType
+    }{
+        {name: "string type", input: `"string"`, expected: PropertyType{"string"}},
+        {name: "array of types", input: `["string", "number"]`, expected: PropertyType{"string", "number"}},
+        {name: "array with single type", input: `["string"]`, expected: PropertyType{"string"}},
+    }
+
+    for _, test := range tests {
+        t.Run(test.name, func(t *testing.T) {
+            var pt PropertyType
+            if err := json.Unmarshal([]byte(test.input), &pt); err != nil {
+                t.Errorf("Unexpected error: %v", err)
+            }
+
+            if len(pt) != len(test.expected) {
+                t.Errorf("Length mismatch: got %v, expected %v", len(pt), len(test.expected))
+            }
+
+            for i, v := range pt {
+                if v != test.expected[i] {
+                    t.Errorf("Value mismatch at index %d: got %v, expected %v", i, v, test.expected[i])
+                }
+            }
+        })
+    }
+}
+
+func TestPropertyType_MarshalJSON(t *testing.T) {
+    tests := []struct {
+        name     string
+        input    PropertyType
+        expected string
+    }{
+        {name: "single type", input: PropertyType{"string"}, expected: `"string"`},
+        {name: "multiple types", input: PropertyType{"string", "number"}, expected: `["string","number"]`},
+        {name: "empty type", input: PropertyType{}, expected: `[]`},
+    }
+
+    for _, test := range tests {
+        t.Run(test.name, func(t *testing.T) {
+            data, err := json.Marshal(test.input)
+            if err != nil {
+                t.Errorf("Unexpected error: %v", err)
+            }
+
+            if string(data) != test.expected {
+                t.Errorf("Marshaled data mismatch: got %v, expected %v", string(data), test.expected)
+            }
+        })
+    }
+}
+
+func TestThinking_UnmarshalJSON(t *testing.T) {
+    tests := []struct {
+        name             string
+        input            string
+        expectedThinking *ThinkValue
+        expectedError    bool
+    }{
+        {name: "true", input: `{ "think": true }`, expectedThinking: &ThinkValue{Value: true}},
+        {name: "false", input: `{ "think": false }`, expectedThinking: &ThinkValue{Value: false}},
+        {name: "unset", input: `{ }`, expectedThinking: nil},
+        {name: "string_high", input: `{ "think": "high" }`, expectedThinking: &ThinkValue{Value: "high"}},
+        {name: "string_medium", input: `{ "think": "medium" }`, expectedThinking: &ThinkValue{Value: "medium"}},
+        {name: "string_low", input: `{ "think": "low" }`, expectedThinking: &ThinkValue{Value: "low"}},
+        {name: "invalid_string", input: `{ "think": "invalid" }`, expectedThinking: nil, expectedError: true},
+    }
+
+    for _, test := range tests {
+        t.Run(test.name, func(t *testing.T) {
+            var req GenerateRequest
+            err := json.Unmarshal([]byte(test.input), &req)
+            if test.expectedError {
+                require.Error(t, err)
+            } else {
+                require.NoError(t, err)
+                if test.expectedThinking == nil {
+                    assert.Nil(t, req.Think)
+                } else {
+                    require.NotNil(t, req.Think)
+                    assert.Equal(t, test.expectedThinking.Value, req.Think.Value)
+                }
+            }
+        })
+    }
+}
+
+func TestToolFunctionParameters_String(t *testing.T) {
+    tests := []struct {
+        name     string
+        params   ToolFunctionParameters
+        expected string
+    }{
+        {
+            name: "simple object with string property",
+            params: ToolFunctionParameters{
+                Type:     "object",
+                Required: []string{"name"},
+                Properties: map[string]ToolProperty{
+                    "name": {Type: PropertyType{"string"}, Description: "The name of the person"},
+                },
+            },
+            expected: `{"type":"object","required":["name"],"properties":{"name":{"type":"string","description":"The name of the person"}}}`,
+        },
+        {
+            name: "marshal failure returns empty string",
+            params: ToolFunctionParameters{
+                Type: "object",
+                Defs: func() any {
+                    // Create a cycle that will cause json.Marshal to fail
+                    type selfRef struct {
+                        Self *selfRef
+                    }
+                    s := &selfRef{}
+                    s.Self = s
+                    return s
+                }(),
+                Properties: map[string]ToolProperty{},
+            },
+            expected: "",
+        },
+    }
+
+    for _, test := range tests {
+        t.Run(test.name, func(t *testing.T) {
+            result := test.params.String()
+            assert.Equal(t, test.expected, result)
+        })
+    }
+}
api/types_typescript_test.go (new file, 142 lines)
@@ -0,0 +1,142 @@
+package api
+
+import (
+    "testing"
+)
+
+func TestToolParameterToTypeScriptType(t *testing.T) {
+    tests := []struct {
+        name     string
+        param    ToolProperty
+        expected string
+    }{
+        {name: "single string type", param: ToolProperty{Type: PropertyType{"string"}}, expected: "string"},
+        {name: "single number type", param: ToolProperty{Type: PropertyType{"number"}}, expected: "number"},
+        {name: "integer maps to number", param: ToolProperty{Type: PropertyType{"integer"}}, expected: "number"},
+        {name: "boolean type", param: ToolProperty{Type: PropertyType{"boolean"}}, expected: "boolean"},
+        {name: "array type", param: ToolProperty{Type: PropertyType{"array"}}, expected: "any[]"},
+        {name: "object type", param: ToolProperty{Type: PropertyType{"object"}}, expected: "Record<string, any>"},
+        {name: "null type", param: ToolProperty{Type: PropertyType{"null"}}, expected: "null"},
+        {name: "multiple types as union", param: ToolProperty{Type: PropertyType{"string", "number"}}, expected: "string | number"},
+        {name: "string or null union", param: ToolProperty{Type: PropertyType{"string", "null"}}, expected: "string | null"},
+        {
+            name: "anyOf with single types",
+            param: ToolProperty{AnyOf: []ToolProperty{
+                {Type: PropertyType{"string"}},
+                {Type: PropertyType{"number"}},
+            }},
+            expected: "string | number",
+        },
+        {
+            name: "anyOf with multiple types in each branch",
+            param: ToolProperty{AnyOf: []ToolProperty{
+                {Type: PropertyType{"string", "null"}},
+                {Type: PropertyType{"number"}},
+            }},
+            expected: "string | null | number",
+        },
+        {
+            name: "nested anyOf",
+            param: ToolProperty{AnyOf: []ToolProperty{
+                {Type: PropertyType{"boolean"}},
+                {AnyOf: []ToolProperty{
+                    {Type: PropertyType{"string"}},
+                    {Type: PropertyType{"number"}},
+                }},
+            }},
+            expected: "boolean | string | number",
+        },
+        {name: "empty type returns any", param: ToolProperty{Type: PropertyType{}}, expected: "any"},
+        {name: "unknown type maps to any", param: ToolProperty{Type: PropertyType{"unknown_type"}}, expected: "any"},
+        {name: "multiple types including array", param: ToolProperty{Type: PropertyType{"string", "array", "null"}}, expected: "string | any[] | null"},
+    }
+
+    for _, tt := range tests {
+        t.Run(tt.name, func(t *testing.T) {
+            result := tt.param.ToTypeScriptType()
+            if result != tt.expected {
+                t.Errorf("ToTypeScriptType() = %q, want %q", result, tt.expected)
+            }
+        })
+    }
+}
@@ -17,6 +17,6 @@ If you want to build the installer, you'll need to install
 In the top directory of this repo, run the following powershell script
 to build the ollama CLI, ollama app, and ollama installer.
 
-```
+```powershell
 powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1
 ```
@@ -11,10 +11,12 @@ import (
 
     "github.com/ollama/ollama/app/store"
     "github.com/ollama/ollama/app/tray"
+    "github.com/ollama/ollama/envconfig"
 )
 
 func Run() {
     InitLogging()
+    slog.Info("app config", "env", envconfig.Values())
 
     ctx, cancel := context.WithCancel(context.Background())
     var done chan int
@@ -4,20 +4,14 @@ import (
     "fmt"
     "log/slog"
     "os"
-    "path/filepath"
     "strconv"
     "strings"
 
     "github.com/ollama/ollama/envconfig"
+    "github.com/ollama/ollama/logutil"
 )
 
 func InitLogging() {
-    level := slog.LevelInfo
-
-    if envconfig.Debug() {
-        level = slog.LevelDebug
-    }
-
     var logFile *os.File
     var err error
     // Detect if we're a GUI app on windows, and if not, send logs to console

@@ -33,20 +27,8 @@ func InitLogging() {
             return
         }
     }
-    handler := slog.NewTextHandler(logFile, &slog.HandlerOptions{
-        Level:     level,
-        AddSource: true,
-        ReplaceAttr: func(_ []string, attr slog.Attr) slog.Attr {
-            if attr.Key == slog.SourceKey {
-                source := attr.Value.Any().(*slog.Source)
-                source.File = filepath.Base(source.File)
-            }
-            return attr
-        },
-    })
-
-    slog.SetDefault(slog.New(handler))
-
+    slog.SetDefault(logutil.NewLogger(logFile, envconfig.LogLevel()))
     slog.Info("ollama app started")
 }
@@ -36,8 +36,13 @@ func init() {
     ServerLogFile = filepath.Join(AppDataDir, "server.log")
     UpgradeLogFile = filepath.Join(AppDataDir, "upgrade.log")
 
-    // Executables are stored in APPDATA
+    exe, err := os.Executable()
+    if err != nil {
+        slog.Warn("error discovering executable directory", "error", err)
         AppDir = filepath.Join(localAppData, "Programs", "Ollama")
+    } else {
+        AppDir = filepath.Dir(exe)
+    }
 
     // Make sure we have PATH set correctly for any spawned children
     paths := strings.Split(os.Getenv("PATH"), ";")

@@ -64,7 +69,7 @@ func init() {
     }
 
     // Make sure our logging dir exists
-    _, err := os.Stat(AppDataDir)
+    _, err = os.Stat(AppDataDir)
     if errors.Is(err, os.ErrNotExist) {
         if err := os.MkdirAll(AppDataDir, 0o755); err != nil {
             slog.Error(fmt.Sprintf("create ollama dir %s: %v", AppDataDir, err))
@@ -18,11 +18,17 @@ func getCLIFullPath(command string) string {
     var cmdPath string
     appExe, err := os.Executable()
     if err == nil {
+        // Check both the same location as the tray app, as well as ./bin
         cmdPath = filepath.Join(filepath.Dir(appExe), command)
         _, err := os.Stat(cmdPath)
         if err == nil {
             return cmdPath
         }
+        cmdPath = filepath.Join(filepath.Dir(appExe), "bin", command)
+        _, err = os.Stat(cmdPath)
+        if err == nil {
+            return cmdPath
+        }
     }
     cmdPath, err = exec.LookPath(command)
     if err == nil {
@@ -26,19 +26,15 @@ func DoUpgrade(cancel context.CancelFunc, done chan int) error {
     slog.Info("starting upgrade with " + installerExe)
     slog.Info("upgrade log file " + UpgradeLogFile)
 
-    // When running in debug mode, we'll be "verbose" and let the installer pop up and prompt
+    // make the upgrade show progress, but non interactive
     installArgs := []string{
         "/CLOSEAPPLICATIONS",                    // Quit the tray app if it's still running
         "/LOG=" + filepath.Base(UpgradeLogFile), // Only relative seems reliable, so set pwd
         "/FORCECLOSEAPPLICATIONS",               // Force close the tray app - might be needed
-    }
-    // make the upgrade as quiet as possible (no GUI, no prompts)
-    installArgs = append(installArgs,
         "/SP", // Skip the "This will install... Do you wish to continue" prompt
-        "/SUPPRESSMSGBOXES",
+        "/NOCANCEL", // Disable the ability to cancel upgrade mid-flight to avoid partially installed upgrades
         "/SILENT",
-        "/VERYSILENT",
-    )
+    }
 
     // Safeguard in case we have requests in flight that need to drain...
     slog.Info("Waiting for server to shutdown")
@@ -28,8 +28,8 @@ AppPublisher={#MyAppPublisher}
 AppPublisherURL={#MyAppURL}
 AppSupportURL={#MyAppURL}
 AppUpdatesURL={#MyAppURL}
-ArchitecturesAllowed=x64 arm64
-ArchitecturesInstallIn64BitMode=x64 arm64
+;ArchitecturesAllowed=x64compatible arm64
+;ArchitecturesInstallIn64BitMode=x64compatible arm64
 DefaultDirName={localappdata}\Programs\{#MyAppName}
 DefaultGroupName={#MyAppName}
 DisableProgramGroupPage=yes

@@ -48,12 +48,13 @@ OutputDir=..\dist\
 SetupLogging=yes
 CloseApplications=yes
 RestartApplications=no
+RestartIfNeededByRun=no
 
 ; https://jrsoftware.org/ishelp/index.php?topic=setup_wizardimagefile
 WizardSmallImageFile=.\assets\setup.bmp
 
-; TODO verifty actual min windows version...
-; OG Win 10
+; Ollama requires Windows 10 22H2 or newer for proper unicode rendering
+; TODO: consider setting this to 10.0.19045
 MinVersion=10.0.10240
 
 ; First release that supports WinRT UI Composition for win32 apps

@@ -86,12 +87,20 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
 DialogFontSize=12
 
 [Files]
-Source: ".\app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ; Flags: ignoreversion 64bit
-Source: "..\ollama.exe"; DestDir: "{app}"; Flags: ignoreversion 64bit
-Source: "..\dist\windows-{#ARCH}\lib\ollama\runners\*"; DestDir: "{app}\lib\ollama\runners"; Flags: ignoreversion 64bit recursesubdirs
+#if DirExists("..\dist\windows-amd64")
+Source: "..\dist\windows-amd64-app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: not IsArm64(); Flags: ignoreversion 64bit
+Source: "..\dist\windows-amd64\ollama.exe"; DestDir: "{app}"; Check: not IsArm64(); Flags: ignoreversion 64bit
+Source: "..\dist\windows-amd64\lib\ollama\*"; DestDir: "{app}\lib\ollama\"; Check: not IsArm64(); Flags: ignoreversion 64bit recursesubdirs
+#endif
+
+#if DirExists("..\dist\windows-arm64")
+Source: "..\dist\windows-arm64\vc_redist.arm64.exe"; DestDir: "{tmp}"; Check: IsArm64() and vc_redist_needed(); Flags: deleteafterinstall
+Source: "..\dist\windows-arm64-app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: IsArm64(); Flags: ignoreversion 64bit
+Source: "..\dist\windows-arm64\ollama.exe"; DestDir: "{app}"; Check: IsArm64(); Flags: ignoreversion 64bit
+#endif
+
 Source: "..\dist\ollama_welcome.ps1"; DestDir: "{app}"; Flags: ignoreversion
 Source: ".\assets\app.ico"; DestDir: "{app}"; Flags: ignoreversion
-Source: "..\dist\windows-amd64\lib\ollama\*"; DestDir: "{app}\lib\ollama\"; Flags: ignoreversion recursesubdirs
 
 [Icons]
 Name: "{group}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"

@@ -99,6 +108,9 @@ Name: "{userstartup}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilen
 Name: "{userprograms}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"
 
 [Run]
+#if DirExists("..\dist\windows-arm64")
+Filename: "{tmp}\vc_redist.arm64.exe"; Parameters: "/install /passive /norestart"; Check: IsArm64() and vc_redist_needed(); StatusMsg: "Installing VC++ Redistributables..."; Flags: waituntilterminated
+#endif
 Filename: "{cmd}"; Parameters: "/C set PATH={app};%PATH% & ""{app}\{#MyAppExeName}"""; Flags: postinstall nowait runhidden
 
 [UninstallRun]

@@ -123,13 +135,13 @@ Type: filesandordirs; Name: "{%TEMP}\ollama*"
 Type: filesandordirs; Name: "{%LOCALAPPDATA}\Programs\Ollama"
 
 [Messages]
-WizardReady=Ollama Windows Preview
+WizardReady=Ollama
 ReadyLabel1=%nLet's get you up and running with your own large language models.
 SetupAppRunningError=Another Ollama installer is running.%n%nPlease cancel or finish the other installer, then click OK to continue with this install, or Cancel to exit.
 
 
 ;FinishedHeadingLabel=Run your first model
-;FinishedLabel=%nRun this command in a PowerShell or cmd terminal.%n%n%n    ollama run llama3.1
+;FinishedLabel=%nRun this command in a PowerShell or cmd terminal.%n%n%n    ollama run llama3.2
 ;ClickFinish=%n
 
 [Registry]

@@ -154,3 +166,39 @@ begin
     { Pos() returns 0 if not found }
     Result := Pos(';' + ExpandConstant(Param) + ';', ';' + OrigPath + ';') = 0;
 end;
+
+{ --- VC Runtime libraries discovery code - Only install vc_redist if it isn't already installed ----- }
+const VCRTL_MIN_V1 = 14;
+const VCRTL_MIN_V2 = 40;
+const VCRTL_MIN_V3 = 33807;
+const VCRTL_MIN_V4 = 0;
+
+// check if the minimum required vc redist is installed (by looking the registry)
+function vc_redist_needed (): Boolean;
+var
+  sRegKey: string;
+  v1: Cardinal;
+  v2: Cardinal;
+  v3: Cardinal;
+  v4: Cardinal;
+begin
+  sRegKey := 'SOFTWARE\WOW6432Node\Microsoft\VisualStudio\14.0\VC\Runtimes\arm64';
+  if (RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Major', v1) and
+      RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Minor', v2) and
+      RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Bld', v3) and
+      RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'RBld', v4)) then
+  begin
+    Log ('VC Redist version: ' + IntToStr (v1) +
+        '.' + IntToStr (v2) + '.' + IntToStr (v3) +
+        '.' + IntToStr (v4));
+    { Version info was found. Return true if later or equal to our
+      minimal required version RTL_MIN_Vx }
+    Result := not (
+      (v1 > VCRTL_MIN_V1) or ((v1 = VCRTL_MIN_V1) and
+       ((v2 > VCRTL_MIN_V2) or ((v2 = VCRTL_MIN_V2) and
+        ((v3 > VCRTL_MIN_V3) or ((v3 = VCRTL_MIN_V3) and
+         (v4 >= VCRTL_MIN_V4)))))));
+  end
+  else
+    Result := TRUE;
+end;
@@ -4,5 +4,5 @@ write-host "Welcome to Ollama!"
 write-host ""
 write-host "Run your first model:"
 write-host ""
-write-host "`tollama run llama3.1"
+write-host "`tollama run llama3.2"
 write-host ""
@@ -64,7 +64,7 @@ func initStore() {
         slog.Debug(fmt.Sprintf("unexpected error searching for store: %s", err))
     }
     slog.Debug("initializing new store")
-    store.ID = uuid.New().String()
+    store.ID = uuid.NewString()
     writeStore(getStorePath())
 }
@@ -98,7 +98,7 @@ func (t *winTray) wndProc(hWnd windows.Handle, message uint32, wParam, lParam ui
         }
         err = t.wcex.unregister()
         if err != nil {
-            slog.Error(fmt.Sprintf("failed to uregister windo %s", err))
+            slog.Error(fmt.Sprintf("failed to unregister window %s", err))
         }
     case WM_DESTROY:
         // same as WM_ENDSESSION, but throws 0 exit code after all
@@ -11,12 +11,13 @@ import (
 )
 
 const (
-    updateAvailableMenuID = 1
-    updateMenuID          = updateAvailableMenuID + 1
-    separatorMenuID       = updateMenuID + 1
-    diagLogsMenuID        = separatorMenuID + 1
-    diagSeparatorMenuID   = diagLogsMenuID + 1
-    quitMenuID            = diagSeparatorMenuID + 1
+    _ = iota
+    updateAvailableMenuID
+    updateMenuID
+    separatorMenuID
+    diagLogsMenuID
+    diagSeparatorMenuID
+    quitMenuID
 )
 
 func (t *winTray) initMenus() error {
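The iota form assigns the same values as the explicit chain it replaces; the leading blank identifier keeps updateAvailableMenuID at 1. A quick sketch, mirroring the new declaration outside the original file just to show the numbering:

    // Sketch only.
    const (
        _ = iota
        updateAvailableMenuID // 1
        updateMenuID          // 2
        separatorMenuID       // 3
        diagLogsMenuID        // 4
        diagSeparatorMenuID   // 5
        quitMenuID            // 6
    )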
@@ -38,7 +39,7 @@ func (t *winTray) UpdateAvailable(ver string) error {
     if err := t.addOrUpdateMenuItem(updateAvailableMenuID, 0, updateAvailableMenuTitle, true); err != nil {
         return fmt.Errorf("unable to create menu entries %w", err)
     }
-    if err := t.addOrUpdateMenuItem(updateMenuID, 0, updateMenutTitle, false); err != nil {
+    if err := t.addOrUpdateMenuItem(updateMenuID, 0, updateMenuTitle, false); err != nil {
         return fmt.Errorf("unable to create menu entries %w", err)
     }
     if err := t.addSeparatorMenuItem(separatorMenuID, 0); err != nil {
@@ -10,6 +10,6 @@ const (
 
     quitMenuTitle            = "Quit Ollama"
     updateAvailableMenuTitle = "An update is available"
-    updateMenutTitle         = "Restart to update"
+    updateMenuTitle          = "Restart to update"
     diagLogsMenuTitle        = "View logs"
 )
@@ -361,7 +361,7 @@ func (t *winTray) showMenu() error {
 
     boolRet, _, err = pTrackPopupMenu.Call(
         uintptr(t.menus[0]),
-        TPM_BOTTOMALIGN|TPM_LEFTALIGN,
+        TPM_BOTTOMALIGN|TPM_LEFTALIGN|TPM_RIGHTBUTTON,
         uintptr(p.X),
         uintptr(p.Y),
         0,
@@ -67,6 +67,7 @@ const (
     SW_HIDE         = 0
     TPM_BOTTOMALIGN = 0x0020
     TPM_LEFTALIGN   = 0x0000
+    TPM_RIGHTBUTTON = 0x0002
     WM_CLOSE        = 0x0010
     WM_USER         = 0x0400
     WS_CAPTION      = 0x00C00000
auth/auth.go (15 lines changed)
@@ -18,21 +18,13 @@ import (

const defaultPrivateKey = "id_ed25519"

-func keyPath() (string, error) {
+func GetPublicKey() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}

-	return filepath.Join(home, ".ollama", defaultPrivateKey), nil
-}
-
-func GetPublicKey() (string, error) {
-	keyPath, err := keyPath()
-	if err != nil {
-		return "", err
-	}
-
+	keyPath := filepath.Join(home, ".ollama", defaultPrivateKey)
	privateKeyFile, err := os.ReadFile(keyPath)
	if err != nil {
		slog.Info(fmt.Sprintf("Failed to load private key: %v", err))
@@ -59,11 +51,12 @@ func NewNonce(r io.Reader, length int) (string, error) {
}

func Sign(ctx context.Context, bts []byte) (string, error) {
-	keyPath, err := keyPath()
+	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}

+	keyPath := filepath.Join(home, ".ollama", defaultPrivateKey)
	privateKeyFile, err := os.ReadFile(keyPath)
	if err != nil {
		slog.Info(fmt.Sprintf("Failed to load private key: %v", err))
cmd/cmd.go (1220 lines changed; diff too large to display)
cmd/cmd_test.go (new file, 1239 lines; diff too large to display)
@@ -13,14 +13,12 @@ import (
	"strings"

	"github.com/spf13/cobra"
-	"golang.org/x/exp/maps"

	"github.com/ollama/ollama/api"
	"github.com/ollama/ollama/envconfig"
-	"github.com/ollama/ollama/parser"
-	"github.com/ollama/ollama/progress"
	"github.com/ollama/ollama/readline"
	"github.com/ollama/ollama/types/errtypes"
+	"github.com/ollama/ollama/types/model"
)

type MultilineState int
@@ -31,26 +29,6 @@ const (
	MultilineSystem
)

-func loadModel(cmd *cobra.Command, opts *runOptions) error {
-	p := progress.NewProgress(os.Stderr)
-	defer p.StopAndClear()
-
-	spinner := progress.NewSpinner("")
-	p.Add("", spinner)
-
-	client, err := api.ClientFromEnvironment()
-	if err != nil {
-		return err
-	}
-
-	chatReq := &api.ChatRequest{
-		Model: opts.Model,
-		KeepAlive: opts.KeepAlive,
-	}
-
-	return client.Chat(cmd.Context(), chatReq, func(api.ChatResponse) error { return nil })
-}
-
func generateInteractive(cmd *cobra.Command, opts runOptions) error {
	usage := func() {
		fmt.Fprintln(os.Stderr, "Available Commands:")
@@ -66,7 +44,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
		fmt.Fprintln(os.Stderr, "Use \"\"\" to begin a multi-line message.")

		if opts.MultiModal {
-			fmt.Fprintf(os.Stderr, "Use %s to include .jpg or .png images.\n", filepath.FromSlash("/path/to/file"))
+			fmt.Fprintf(os.Stderr, "Use %s to include .jpg, .png, or .webp images.\n", filepath.FromSlash("/path/to/file"))
		}

		fmt.Fprintln(os.Stderr, "")
@@ -84,6 +62,8 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
		fmt.Fprintln(os.Stderr, " /set noformat Disable formatting")
		fmt.Fprintln(os.Stderr, " /set verbose Show LLM stats")
		fmt.Fprintln(os.Stderr, " /set quiet Disable LLM stats")
+		fmt.Fprintln(os.Stderr, " /set think Enable thinking")
+		fmt.Fprintln(os.Stderr, " /set nothink Disable thinking")
		fmt.Fprintln(os.Stderr, "")
	}

@@ -150,6 +130,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {

	var sb strings.Builder
	var multiline MultilineState
+	var thinkExplicitlySet bool = opts.Think != nil

	for {
		line, err := scanner.Readline()
@@ -214,10 +195,30 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
				fmt.Println("Usage:\n /load <modelname>")
				continue
			}
+			origOpts := opts.Copy()
+
			opts.Model = args[1]
			opts.Messages = []api.Message{}
			fmt.Printf("Loading model '%s'\n", opts.Model)
-			if err := loadModel(cmd, &opts); err != nil {
+			opts.Think, err = inferThinkingOption(nil, &opts, thinkExplicitlySet)
+			if err != nil {
+				if strings.Contains(err.Error(), "not found") {
+					fmt.Printf("Couldn't find model '%s'\n", opts.Model)
+					opts = origOpts.Copy()
+					continue
+				}
+				return err
+			}
+			if err := loadOrUnloadModel(cmd, &opts); err != nil {
+				if strings.Contains(err.Error(), "not found") {
+					fmt.Printf("Couldn't find model '%s'\n", opts.Model)
+					opts = origOpts.Copy()
+					continue
+				}
+				if strings.Contains(err.Error(), "does not support thinking") {
+					fmt.Printf("error: %v\n", err)
+					continue
+				}
				return err
			}
			continue
@@ -234,10 +235,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
				return err
			}

-			req := &api.CreateRequest{
-				Name: args[1],
-				Modelfile: buildModelfile(opts),
-			}
+			req := NewCreateRequest(args[1], opts)
			fn := func(resp api.ProgressResponse) error { return nil }
			err = client.Create(cmd.Context(), req, fn)
			if err != nil {
@@ -281,6 +279,35 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
					return err
				}
				fmt.Println("Set 'quiet' mode.")
+			case "think":
+				thinkValue := api.ThinkValue{Value: true}
+				var maybeLevel string
+				if len(args) > 2 {
+					maybeLevel = args[2]
+				}
+				if maybeLevel != "" {
+					// TODO(drifkin): validate the level, could be model dependent
+					// though... It will also be validated on the server once a call is
+					// made.
+					thinkValue.Value = maybeLevel
+				}
+				opts.Think = &thinkValue
+				thinkExplicitlySet = true
+				if client, err := api.ClientFromEnvironment(); err == nil {
+					ensureThinkingSupport(cmd.Context(), client, opts.Model)
+				}
+				if maybeLevel != "" {
+					fmt.Printf("Set 'think' mode to '%s'.\n", maybeLevel)
+				} else {
+					fmt.Println("Set 'think' mode.")
+				}
+			case "nothink":
+				opts.Think = &api.ThinkValue{Value: false}
+				thinkExplicitlySet = true
+				if client, err := api.ClientFromEnvironment(); err == nil {
+					ensureThinkingSupport(cmd.Context(), client, opts.Model)
+				}
+				fmt.Println("Set 'nothink' mode.")
			case "format":
				if len(args) < 3 || args[2] != "json" {
					fmt.Println("Invalid or missing format. For 'json' mode use '/set format json'")
@@ -340,8 +367,6 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
					opts.Messages = append(opts.Messages, newMessage)
				}
				fmt.Println("Set system message.")
-				sb.Reset()
-
				sb.Reset()
				continue
			default:
@@ -371,7 +396,7 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {

			switch args[1] {
			case "info":
-				showInfo(resp)
+				_ = showInfo(resp, false, os.Stderr)
			case "license":
				if resp.License == "" {
					fmt.Println("No license was specified for this model.")
@@ -381,19 +406,22 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
			case "modelfile":
				fmt.Println(resp.Modelfile)
			case "parameters":
+				fmt.Println("Model defined parameters:")
				if resp.Parameters == "" {
-					fmt.Println("No parameters were specified for this model.")
+					fmt.Println(" No additional parameters were specified for this model.")
				} else {
+					for _, l := range strings.Split(resp.Parameters, "\n") {
+						fmt.Printf(" %s\n", l)
+					}
+				}
+				fmt.Println()
				if len(opts.Options) > 0 {
					fmt.Println("User defined parameters:")
					for k, v := range opts.Options {
-						fmt.Printf("%-*s %v\n", 30, k, v)
+						fmt.Printf(" %-*s %v\n", 30, k, v)
					}
					fmt.Println()
				}
-				fmt.Println("Model defined parameters:")
-				fmt.Println(resp.Parameters)
-			}
			case "system":
				switch {
				case opts.System != "":
@@ -463,13 +491,6 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
			return err
		}

-		// clear all previous images for better responses
-		if len(images) > 0 {
-			for i := range opts.Messages {
-				opts.Messages[i].Images = nil
-			}
-		}
-
		newMessage.Content = msg
		newMessage.Images = images
	}
@@ -478,6 +499,12 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {

		assistant, err := chat(cmd, opts)
		if err != nil {
+			if strings.Contains(err.Error(), "does not support thinking") ||
+				strings.Contains(err.Error(), "invalid think value") {
+				fmt.Printf("error: %v\n", err)
+				sb.Reset()
+				continue
+			}
			return err
		}
		if assistant != nil {
@@ -489,68 +516,59 @@ func generateInteractive(cmd *cobra.Command, opts runOptions) error {
	}
}

-func buildModelfile(opts runOptions) string {
-	var f parser.File
-	f.Commands = append(f.Commands, parser.Command{Name: "model", Args: cmp.Or(opts.ParentModel, opts.Model)})
+func NewCreateRequest(name string, opts runOptions) *api.CreateRequest {
+	parentModel := opts.ParentModel
+
+	modelName := model.ParseName(parentModel)
+	if !modelName.IsValid() {
+		parentModel = ""
+	}
+
+	req := &api.CreateRequest{
+		Model: name,
+		From: cmp.Or(parentModel, opts.Model),
+	}

	if opts.System != "" {
-		f.Commands = append(f.Commands, parser.Command{Name: "system", Args: opts.System})
+		req.System = opts.System
	}

-	keys := maps.Keys(opts.Options)
-	slices.Sort(keys)
-	for _, k := range keys {
-		v := opts.Options[k]
-		var cmds []parser.Command
-		switch t := v.(type) {
-		case []string:
-			for _, s := range t {
-				cmds = append(cmds, parser.Command{Name: k, Args: s})
-			}
-		default:
-			cmds = append(cmds, parser.Command{Name: k, Args: fmt.Sprintf("%v", t)})
-		}
-
-		f.Commands = append(f.Commands, cmds...)
+	if len(opts.Options) > 0 {
+		req.Parameters = opts.Options
	}

-	for _, msg := range opts.Messages {
-		f.Commands = append(f.Commands, parser.Command{Name: "message", Args: fmt.Sprintf("%s: %s", msg.Role, msg.Content)})
+	if len(opts.Messages) > 0 {
+		req.Messages = opts.Messages
	}

-	return f.String()
+	return req
}

func normalizeFilePath(fp string) string {
-	// Define a map of escaped characters and their replacements
-	replacements := map[string]string{
-		"\\ ": " ", // Escaped space
-		"\\(": "(", // Escaped left parenthesis
-		"\\)": ")", // Escaped right parenthesis
-		"\\[": "[", // Escaped left square bracket
-		"\\]": "]", // Escaped right square bracket
-		"\\{": "{", // Escaped left curly brace
-		"\\}": "}", // Escaped right curly brace
-		"\\$": "$", // Escaped dollar sign
-		"\\&": "&", // Escaped ampersand
-		"\\;": ";", // Escaped semicolon
-		"\\'": "'", // Escaped single quote
-		"\\\\": "\\", // Escaped backslash
-		"\\*": "*", // Escaped asterisk
-		"\\?": "?", // Escaped question mark
-	}
-
-	for escaped, actual := range replacements {
-		fp = strings.ReplaceAll(fp, escaped, actual)
-	}
-	return fp
+	return strings.NewReplacer(
+		"\\ ", " ", // Escaped space
+		"\\(", "(", // Escaped left parenthesis
+		"\\)", ")", // Escaped right parenthesis
+		"\\[", "[", // Escaped left square bracket
+		"\\]", "]", // Escaped right square bracket
+		"\\{", "{", // Escaped left curly brace
+		"\\}", "}", // Escaped right curly brace
+		"\\$", "$", // Escaped dollar sign
+		"\\&", "&", // Escaped ampersand
+		"\\;", ";", // Escaped semicolon
+		"\\'", "'", // Escaped single quote
+		"\\\\", "\\", // Escaped backslash
+		"\\*", "*", // Escaped asterisk
+		"\\?", "?", // Escaped question mark
+		"\\~", "~", // Escaped tilde
+	).Replace(fp)
}

func extractFileNames(input string) []string {
	// Regex to match file paths starting with optional drive letter, / ./ \ or .\ and include escaped or unescaped spaces (\ or %20)
	// and followed by more characters and a file extension
	// This will capture non filename strings, but we'll check for file existence to remove mismatches
-	regexPattern := `(?:[a-zA-Z]:)?(?:\./|/|\\)[\S\\ ]+?\.(?i:jpg|jpeg|png|svg)\b`
+	regexPattern := `(?:[a-zA-Z]:)?(?:\./|/|\\)[\S\\ ]+?\.(?i:jpg|jpeg|png|webp)\b`
	re := regexp.MustCompile(regexPattern)

	return re.FindAllString(input, -1)
@@ -563,18 +581,19 @@ func extractFileData(input string) (string, []api.ImageData, error) {
	for _, fp := range filePaths {
		nfp := normalizeFilePath(fp)
		data, err := getImageData(nfp)
-		if err != nil {
-			if os.IsNotExist(err) {
+		if errors.Is(err, os.ErrNotExist) {
			continue
-		}
+		} else if err != nil {
			fmt.Fprintf(os.Stderr, "Couldn't process image: %q\n", err)
			return "", imgs, err
		}
		fmt.Fprintf(os.Stderr, "Added image '%s'\n", nfp)
+		input = strings.ReplaceAll(input, "'"+nfp+"'", "")
+		input = strings.ReplaceAll(input, "'"+fp+"'", "")
		input = strings.ReplaceAll(input, fp, "")
		imgs = append(imgs, data)
	}
-	return input, imgs, nil
+	return strings.TrimSpace(input), imgs, nil
}

func getImageData(filePath string) ([]byte, error) {
@@ -591,7 +610,7 @@ func getImageData(filePath string) ([]byte, error) {
	}

	contentType := http.DetectContentType(buf)
-	allowedTypes := []string{"image/jpeg", "image/jpg", "image/png"}
+	allowedTypes := []string{"image/jpeg", "image/jpg", "image/png", "image/webp"}
	if !slices.Contains(allowedTypes, contentType) {
		return nil, fmt.Errorf("invalid image type: %s", contentType)
	}
@@ -1,107 +1,86 @@
package cmd

import (
+	"os"
+	"path/filepath"
	"testing"

-	"github.com/google/go-cmp/cmp"
	"github.com/stretchr/testify/assert"
-
-	"github.com/ollama/ollama/api"
)

func TestExtractFilenames(t *testing.T) {
	// Unix style paths
	input := ` some preamble
-./relative\ path/one.png inbetween1 ./not a valid two.jpg inbetween2
-/unescaped space /three.jpeg inbetween3 /valid\ path/dir/four.png "./quoted with spaces/five.svg`
+./relative\ path/one.png inbetween1 ./not a valid two.jpg inbetween2 ./1.svg
+/unescaped space /three.jpeg inbetween3 /valid\ path/dir/four.png "./quoted with spaces/five.JPG
+/unescaped space /six.webp inbetween6 /valid\ path/dir/seven.WEBP`
	res := extractFileNames(input)
-	assert.Len(t, res, 5)
+	assert.Len(t, res, 7)
	assert.Contains(t, res[0], "one.png")
	assert.Contains(t, res[1], "two.jpg")
	assert.Contains(t, res[2], "three.jpeg")
	assert.Contains(t, res[3], "four.png")
-	assert.Contains(t, res[4], "five.svg")
+	assert.Contains(t, res[4], "five.JPG")
+	assert.Contains(t, res[5], "six.webp")
+	assert.Contains(t, res[6], "seven.WEBP")
	assert.NotContains(t, res[4], '"')
-	assert.NotContains(t, res, "inbtween")
+	assert.NotContains(t, res, "inbetween1")
+	assert.NotContains(t, res, "./1.svg")

	// Windows style paths
	input = ` some preamble
c:/users/jdoe/one.png inbetween1 c:/program files/someplace/two.jpg inbetween2
/absolute/nospace/three.jpeg inbetween3 /absolute/with space/four.png inbetween4
-./relative\ path/five.svg inbetween5 "./relative with/spaces/six.png inbetween6
-d:\path with\spaces\seven.svg inbetween7 c:\users\jdoe\eight.png inbetween8
-d:\program files\someplace\nine.png inbetween9 "E:\program files\someplace\ten.svg some ending
+./relative\ path/five.JPG inbetween5 "./relative with/spaces/six.png inbetween6
+d:\path with\spaces\seven.JPEG inbetween7 c:\users\jdoe\eight.png inbetween8
+d:\program files\someplace\nine.png inbetween9 "E:\program files\someplace\ten.PNG
+c:/users/jdoe/eleven.webp inbetween11 c:/program files/someplace/twelve.WebP inbetween12
+d:\path with\spaces\thirteen.WEBP some ending
`
	res = extractFileNames(input)
-	assert.Len(t, res, 10)
-	assert.NotContains(t, res, "inbtween")
+	assert.Len(t, res, 13)
+	assert.NotContains(t, res, "inbetween2")
	assert.Contains(t, res[0], "one.png")
	assert.Contains(t, res[0], "c:")
	assert.Contains(t, res[1], "two.jpg")
	assert.Contains(t, res[1], "c:")
	assert.Contains(t, res[2], "three.jpeg")
	assert.Contains(t, res[3], "four.png")
-	assert.Contains(t, res[4], "five.svg")
+	assert.Contains(t, res[4], "five.JPG")
	assert.Contains(t, res[5], "six.png")
-	assert.Contains(t, res[6], "seven.svg")
+	assert.Contains(t, res[6], "seven.JPEG")
	assert.Contains(t, res[6], "d:")
	assert.Contains(t, res[7], "eight.png")
	assert.Contains(t, res[7], "c:")
	assert.Contains(t, res[8], "nine.png")
	assert.Contains(t, res[8], "d:")
-	assert.Contains(t, res[9], "ten.svg")
+	assert.Contains(t, res[9], "ten.PNG")
	assert.Contains(t, res[9], "E:")
+	assert.Contains(t, res[10], "eleven.webp")
+	assert.Contains(t, res[10], "c:")
+	assert.Contains(t, res[11], "twelve.WebP")
+	assert.Contains(t, res[11], "c:")
+	assert.Contains(t, res[12], "thirteen.WEBP")
+	assert.Contains(t, res[12], "d:")
}

-func TestModelfileBuilder(t *testing.T) {
-	opts := runOptions{
-		Model: "hork",
-		System: "You are part horse and part shark, but all hork. Do horklike things",
-		Messages: []api.Message{
-			{Role: "user", Content: "Hey there hork!"},
-			{Role: "assistant", Content: "Yes it is true, I am half horse, half shark."},
-		},
-		Options: map[string]any{
-			"temperature": 0.9,
-			"seed": 42,
-			"penalize_newline": false,
-			"stop": []string{"hi", "there"},
-		},
-	}
-
-	t.Run("model", func(t *testing.T) {
-		expect := `FROM hork
-SYSTEM You are part horse and part shark, but all hork. Do horklike things
-PARAMETER penalize_newline false
-PARAMETER seed 42
-PARAMETER stop hi
-PARAMETER stop there
-PARAMETER temperature 0.9
-MESSAGE user Hey there hork!
-MESSAGE assistant Yes it is true, I am half horse, half shark.
-`
-
-		actual := buildModelfile(opts)
-		if diff := cmp.Diff(expect, actual); diff != "" {
-			t.Errorf("mismatch (-want +got):\n%s", diff)
-		}
-	})
-
-	t.Run("parent model", func(t *testing.T) {
-		opts.ParentModel = "horseshark"
-		expect := `FROM horseshark
-SYSTEM You are part horse and part shark, but all hork. Do horklike things
-PARAMETER penalize_newline false
-PARAMETER seed 42
-PARAMETER stop hi
-PARAMETER stop there
-PARAMETER temperature 0.9
-MESSAGE user Hey there hork!
-MESSAGE assistant Yes it is true, I am half horse, half shark.
-`
-		actual := buildModelfile(opts)
-		if diff := cmp.Diff(expect, actual); diff != "" {
-			t.Errorf("mismatch (-want +got):\n%s", diff)
-		}
-	})
+// Ensure that file paths wrapped in single quotes are removed with the quotes.
+func TestExtractFileDataRemovesQuotedFilepath(t *testing.T) {
+	dir := t.TempDir()
+	fp := filepath.Join(dir, "img.jpg")
+	data := make([]byte, 600)
+	copy(data, []byte{
+		0xff, 0xd8, 0xff, 0xe0, 0x00, 0x10, 'J', 'F', 'I', 'F',
+		0x00, 0x01, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0xff, 0xd9,
+	})
+	if err := os.WriteFile(fp, data, 0o600); err != nil {
+		t.Fatalf("failed to write test image: %v", err)
+	}
+
+	input := "before '" + fp + "' after"
+	cleaned, imgs, err := extractFileData(input)
+	assert.NoError(t, err)
+	assert.Len(t, imgs, 1)
+	assert.Equal(t, cleaned, "before after")
}
cmd/runner/main.go (new file, 15 lines)
@@ -0,0 +1,15 @@
+package main
+
+import (
+	"fmt"
+	"os"
+
+	"github.com/ollama/ollama/runner"
+)
+
+func main() {
+	if err := runner.Execute(os.Args[1:]); err != nil {
+		fmt.Fprintf(os.Stderr, "error: %s\n", err)
+		os.Exit(1)
+	}
+}
@@ -5,7 +5,7 @@ import (
	"errors"
	"os"
	"os/exec"
-	"strings"
+	"regexp"

	"github.com/ollama/ollama/api"
)
@@ -19,11 +19,12 @@ func startApp(ctx context.Context, client *api.Client) error {
	if err != nil {
		return err
	}
-	if !strings.Contains(link, "Ollama.app") {
+	r := regexp.MustCompile(`^.*/Ollama\s?\d*.app`)
+	m := r.FindStringSubmatch(link)
+	if len(m) != 1 {
		return errors.New("could not find ollama app")
	}
-	path := strings.Split(link, "Ollama.app")
-	if err := exec.Command("/usr/bin/open", "-a", path[0]+"Ollama.app").Run(); err != nil {
+	if err := exec.Command("/usr/bin/open", "-j", "-a", m[0], "--args", "--fast-startup").Run(); err != nil {
		return err
	}
	return waitForServer(ctx, client)
@@ -4,17 +4,27 @@ import (
	"context"
	"errors"
	"fmt"
+	"log/slog"
	"os"
	"os/exec"
+	"path"
	"path/filepath"
	"strings"
	"syscall"
+	"unsafe"

	"github.com/ollama/ollama/api"
+	"golang.org/x/sys/windows"
+)
+
+const (
+	Installer = "OllamaSetup.exe"
)

func startApp(ctx context.Context, client *api.Client) error {
-	// log.Printf("XXX Attempting to find and start ollama app")
+	if len(isProcRunning(Installer)) > 0 {
+		return fmt.Errorf("upgrade in progress...")
+	}
	AppName := "ollama app.exe"
	exe, err := os.Executable()
	if err != nil {
@@ -35,14 +45,11 @@ func startApp(ctx context.Context, client *api.Client) error {
			}
		}
	}
-	// log.Printf("XXX attempting to start app %s", appExe)

	cmd_path := "c:\\Windows\\system32\\cmd.exe"
-	cmd := exec.Command(cmd_path, "/c", appExe)
-	// TODO - these hide flags aren't working - still pops up a command window for some reason
+	cmd := exec.Command(cmd_path, "/c", appExe, "--hide", "--fast-startup")
	cmd.SysProcAttr = &syscall.SysProcAttr{CreationFlags: 0x08000000, HideWindow: true}

-	// TODO this didn't help either...
	cmd.Stdin = strings.NewReader("")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
@@ -56,3 +63,50 @@ func startApp(ctx context.Context, client *api.Client) error {
	}
	return waitForServer(ctx, client)
}
+
+func isProcRunning(procName string) []uint32 {
+	pids := make([]uint32, 2048)
+	var ret uint32
+	if err := windows.EnumProcesses(pids, &ret); err != nil || ret == 0 {
+		slog.Debug("failed to check for running installers", "error", err)
+		return nil
+	}
+	if ret > uint32(len(pids)) {
+		pids = make([]uint32, ret+10)
+		if err := windows.EnumProcesses(pids, &ret); err != nil || ret == 0 {
+			slog.Debug("failed to check for running installers", "error", err)
+			return nil
+		}
+	}
+	if ret < uint32(len(pids)) {
+		pids = pids[:ret]
+	}
+	var matches []uint32
+	for _, pid := range pids {
+		if pid == 0 {
+			continue
+		}
+		hProcess, err := windows.OpenProcess(windows.PROCESS_QUERY_INFORMATION|windows.PROCESS_VM_READ, false, pid)
+		if err != nil {
+			continue
+		}
+		defer windows.CloseHandle(hProcess)
+		var module windows.Handle
+		var cbNeeded uint32
+		cb := (uint32)(unsafe.Sizeof(module))
+		if err := windows.EnumProcessModules(hProcess, &module, cb, &cbNeeded); err != nil {
+			continue
+		}
+		var sz uint32 = 1024 * 8
+		moduleName := make([]uint16, sz)
+		cb = uint32(len(moduleName)) * (uint32)(unsafe.Sizeof(uint16(0)))
+		if err := windows.GetModuleBaseName(hProcess, module, &moduleName[0], cb); err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER {
+			continue
+		}
+		exeFile := path.Base(strings.ToLower(syscall.UTF16ToString(moduleName)))
+		if strings.EqualFold(exeFile, procName) {
+			matches = append(matches, pid)
+		}
+	}
+	return matches
+}
cmd/warn_thinking_test.go (new file, 63 lines)
@@ -0,0 +1,63 @@
+package cmd
+
+import (
+	"encoding/json"
+	"io"
+	"net/http"
+	"net/http/httptest"
+	"os"
+	"strings"
+	"testing"
+
+	"github.com/ollama/ollama/api"
+	"github.com/ollama/ollama/types/model"
+)
+
+// Test that a warning is printed when thinking is requested but not supported.
+func TestWarnMissingThinking(t *testing.T) {
+	cases := []struct {
+		capabilities []model.Capability
+		expectWarn bool
+	}{
+		{capabilities: []model.Capability{model.CapabilityThinking}, expectWarn: false},
+		{capabilities: []model.Capability{}, expectWarn: true},
+	}
+
+	for _, tc := range cases {
+		srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if r.URL.Path != "/api/show" || r.Method != http.MethodPost {
+				t.Fatalf("unexpected request to %s %s", r.URL.Path, r.Method)
+			}
+			var req api.ShowRequest
+			if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
+				t.Fatalf("decode request: %v", err)
+			}
+			resp := api.ShowResponse{Capabilities: tc.capabilities}
+			if err := json.NewEncoder(w).Encode(resp); err != nil {
+				t.Fatalf("encode response: %v", err)
+			}
+		}))
+		defer srv.Close()
+
+		t.Setenv("OLLAMA_HOST", srv.URL)
+		client, err := api.ClientFromEnvironment()
+		if err != nil {
+			t.Fatal(err)
+		}
+		oldStderr := os.Stderr
+		r, w, _ := os.Pipe()
+		os.Stderr = w
+		ensureThinkingSupport(t.Context(), client, "m")
+		w.Close()
+		os.Stderr = oldStderr
+		out, _ := io.ReadAll(r)
+
+		warned := strings.Contains(string(out), "warning:")
+		if tc.expectWarn && !warned {
+			t.Errorf("expected warning, got none")
+		}
+		if !tc.expectWarn && warned {
+			t.Errorf("did not expect warning, got: %s", string(out))
+		}
+	}
+}
@@ -1,20 +1,26 @@
package convert

import (
+	"cmp"
	"encoding/json"
	"errors"
	"fmt"
-	"io"
	"io/fs"
	"log/slog"
+	"os"
+	"slices"
	"strings"

-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
)

type ModelParameters struct {
	Architectures []string `json:"architectures"`
	VocabSize uint32 `json:"vocab_size"`
+
+	TextModel struct {
+		VocabSize uint32 `json:"vocab_size"`
+	} `json:"text_config"`
}

type AdapterParameters struct {
@@ -27,8 +33,8 @@ type AdapterParameters struct {
	} `json:"lora_parameters"`
}

-func (ModelParameters) KV(t *Tokenizer) llm.KV {
-	kv := llm.KV{
+func (ModelParameters) KV(t *Tokenizer) ggml.KV {
+	kv := ggml.KV{
		"general.file_type": uint32(1),
		"general.quantization_version": uint32(2),
		"tokenizer.ggml.pre": t.Pre,
@@ -47,14 +53,17 @@ func (ModelParameters) KV(t *Tokenizer) llm.KV {
	}

	for _, sv := range t.SpecialVocabulary {
-		kv[fmt.Sprintf("tokenizer.ggml.%s_token_id", sv.Key())] = uint32(sv.ID)
		kv[fmt.Sprintf("tokenizer.ggml.add_%s_token", sv.Key())] = sv.AddToken
+		kv[fmt.Sprintf("tokenizer.ggml.%s_token_id", sv.Key())] = uint32(sv.ID)
+		if len(sv.IDs) > 0 {
+			kv[fmt.Sprintf("tokenizer.ggml.%s_token_ids", sv.Key())] = sv.IDs
+		}
	}

	return kv
}

-func (p AdapterParameters) KV() llm.KV {
+func (p AdapterParameters) KV() ggml.KV {
	var alpha float32
	if p.LoraParameters.Alpha == 0 {
		alpha = float32(p.Alpha)
@@ -62,7 +71,7 @@ func (p AdapterParameters) KV() llm.KV {
		alpha = p.LoraParameters.Alpha
	}

-	kv := llm.KV{
+	kv := ggml.KV{
		"adapter.lora.alpha": alpha,
		"adapter.type": "lora",
		"general.file_type": uint32(1),
@@ -79,27 +88,17 @@ func (ModelParameters) specialTokenTypes() []string {
	}
}

-func (ModelParameters) writeFile(ws io.WriteSeeker, kv llm.KV, ts []llm.Tensor) error {
-	return llm.WriteGGUF(ws, kv, ts)
-}
-
-func (AdapterParameters) writeFile(ws io.WriteSeeker, kv llm.KV, ts []llm.Tensor) error {
-	return llm.WriteGGUF(ws, kv, ts)
-}
-
type ModelConverter interface {
	// KV maps parameters to LLM key-values
-	KV(*Tokenizer) llm.KV
+	KV(*Tokenizer) ggml.KV
	// Tensors maps input tensors to LLM tensors. Model specific modifications can be done here.
-	Tensors([]Tensor) []llm.Tensor
+	Tensors([]Tensor) []*ggml.Tensor
	// Replacements returns a list of string pairs to replace in tensor names.
	// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
	Replacements() []string

	// specialTokenTypes returns any special token types the model uses
	specialTokenTypes() []string
-	// writeFile writes the model to the provided io.WriteSeeker
-	writeFile(io.WriteSeeker, llm.KV, []llm.Tensor) error
}

type moreParser interface {
@@ -108,17 +107,15 @@ type moreParser interface {

type AdapterConverter interface {
	// KV maps parameters to LLM key-values
-	KV(llm.KV) llm.KV
+	KV(ggml.KV) ggml.KV
	// Tensors maps input tensors to LLM tensors. Adapter specific modifications can be done here.
-	Tensors([]Tensor) []llm.Tensor
+	Tensors([]Tensor) []*ggml.Tensor
	// Replacements returns a list of string pairs to replace in tensor names.
	// See [strings.Replacer](https://pkg.go.dev/strings#Replacer) for details
	Replacements() []string
-
-	writeFile(io.WriteSeeker, llm.KV, []llm.Tensor) error
}

-func ConvertAdapter(fsys fs.FS, ws io.WriteSeeker, baseKV llm.KV) error {
+func ConvertAdapter(fsys fs.FS, f *os.File, baseKV ggml.KV) error {
	bts, err := fs.ReadFile(fsys, "adapter_config.json")
	if err != nil {
		return err
@@ -153,14 +150,14 @@ func ConvertAdapter(fsys fs.FS, ws io.WriteSeeker, baseKV llm.KV) error {
		return err
	}

-	return conv.writeFile(ws, conv.KV(baseKV), conv.Tensors(ts))
+	return writeFile(f, conv.KV(baseKV), conv.Tensors(ts))
}

// Convert writes an Ollama compatible model to the provided io.WriteSeeker based on configurations
// and files it finds in the input path.
// Supported input model formats include safetensors.
// Supported input tokenizers files include tokenizer.json (preferred) and tokenizer.model.
-func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {
+func ConvertModel(fsys fs.FS, f *os.File) error {
	bts, err := fs.ReadFile(fsys, "config.json")
	if err != nil {
		return err
@@ -177,20 +174,38 @@ func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {

	var conv ModelConverter
	switch p.Architectures[0] {
-	case "LlamaForCausalLM", "MistralForCausalLM":
+	case "LlamaForCausalLM":
		conv = &llamaModel{}
+	case "MllamaForConditionalGeneration":
+		conv = &mllamaModel{}
+	case "Llama4ForConditionalGeneration":
+		conv = &llama4Model{}
+	case "Mistral3ForConditionalGeneration":
+		conv = &mistral3Model{}
	case "MixtralForCausalLM":
		conv = &mixtralModel{}
	case "GemmaForCausalLM":
		conv = &gemmaModel{}
	case "Gemma2ForCausalLM":
		conv = &gemma2Model{}
+	case "Gemma3ForCausalLM", "Gemma3ForConditionalGeneration":
+		conv = &gemma3Model{Architecture: p.Architectures[0]}
+	case "Gemma3nForConditionalGeneration":
+		conv = &gemma3nModel{}
	case "Phi3ForCausalLM":
		conv = &phi3Model{}
+	case "Qwen2ForCausalLM":
+		conv = &qwen2Model{}
+	case "Qwen2_5_VLForConditionalGeneration":
+		conv = &qwen25VLModel{}
	case "BertModel":
		conv = &bertModel{}
+	case "CohereForCausalLM":
+		conv = &commandrModel{}
+	case "GptOssForCausalLM":
+		conv = &gptossModel{}
	default:
-		return errors.New("unsupported architecture")
+		return fmt.Errorf("unsupported architecture %q", p.Architectures[0])
	}

	if err := json.Unmarshal(bts, conv); err != nil {
@@ -208,14 +223,23 @@ func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {
		return err
	}

-	if vocabSize := int(p.VocabSize); vocabSize > len(t.Vocabulary.Tokens) {
-		slog.Warn("vocabulary is smaller than expected, padding with dummy tokens", "expect", p.VocabSize, "actual", len(t.Vocabulary.Tokens))
+	vocabSize := int(cmp.Or(p.VocabSize, p.TextModel.VocabSize))
+
+	switch {
+	case vocabSize == 0:
+		slog.Debug("vocabulary size was not explicitly set by the model", "default size", len(t.Vocabulary.Tokens))
+	case vocabSize > len(t.Vocabulary.Tokens):
+		slog.Debug("vocabulary is smaller than expected, padding with dummy tokens", "expect", vocabSize, "actual", len(t.Vocabulary.Tokens))
		for i := range vocabSize - len(t.Vocabulary.Tokens) {
			t.Vocabulary.Tokens = append(t.Vocabulary.Tokens, fmt.Sprintf("[PAD%d]", i))
			t.Vocabulary.Scores = append(t.Vocabulary.Scores, -1)
			t.Vocabulary.Types = append(t.Vocabulary.Types, tokenTypeUserDefined)
		}
-	} else {
+	case vocabSize < len(t.Vocabulary.Tokens):
+		slog.Debug("vocabulary is larger than expected", "want", vocabSize, "got", len(t.Vocabulary.Tokens))
+		p.VocabSize = uint32(len(t.Vocabulary.Tokens))
+		p.TextModel.VocabSize = uint32(len(t.Vocabulary.Tokens))
+	default:
		slog.Debug("vocabulary", "size", len(t.Vocabulary.Tokens))
	}

@@ -224,5 +248,13 @@ func ConvertModel(fsys fs.FS, ws io.WriteSeeker) error {
		return err
	}

-	return conv.writeFile(ws, conv.KV(t), conv.Tensors(ts))
+	return writeFile(f, conv.KV(t), conv.Tensors(ts))
+}
+
+func writeFile(f *os.File, kv ggml.KV, ts []*ggml.Tensor) error {
+	for i := range ts {
+		ts[i].Shape = slices.Clone(ts[i].Shape)
+		slices.Reverse(ts[i].Shape)
+	}
+	return ggml.WriteGGUF(f, kv, ts)
}
@@ -8,7 +8,7 @@ import (
	"slices"
	"strings"

-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
)

type bertModel struct {
@@ -28,6 +28,7 @@ type bertModel struct {
	LayerNormEPS float32 `json:"layer_norm_eps"`
	LayerNormEpsilon float32 `json:"layer_norm_epsilon"`
	NormEpsilon float32 `json:"norm_epsilon"`
+	normalizeEmbeddings bool

	PoolingType uint32
}
@@ -54,9 +55,11 @@ func (p *bertModel) parseMore(fsys fs.FS) error {

	var pooling string
	for _, m := range modules {
-		if m.Type == "sentence_transformers.models.Pooling" {
+		switch m.Type {
+		case "sentence_transformers.models.Pooling":
			pooling = m.Path
-			break
+		case "sentence_transformers.models.Normalize":
+			p.normalizeEmbeddings = true
		}
	}

@@ -85,11 +88,12 @@ func (p *bertModel) parseMore(fsys fs.FS) error {
	return nil
}

-func (p *bertModel) KV(t *Tokenizer) llm.KV {
+func (p *bertModel) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "bert"
	kv["bert.attention.causal"] = false
	kv["bert.pooling_type"] = p.PoolingType
+	kv["bert.normalize_embeddings"] = p.normalizeEmbeddings

	kv["bert.block_count"] = cmp.Or(p.NLayers, p.NumHiddenLayers, p.NLayer)

@@ -132,8 +136,8 @@ func (p *bertModel) KV(t *Tokenizer) llm.KV {
	return kv
}

-func (p *bertModel) Tensors(ts []Tensor) []llm.Tensor {
-	var out []llm.Tensor
+func (p *bertModel) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
	for _, t := range ts {
		if slices.Contains([]string{
			"embeddings.position_ids",
@@ -143,7 +147,7 @@ func (p *bertModel) Tensors(ts []Tensor) []llm.Tensor {
			continue
		}

-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
			Name: t.Name(),
			Kind: t.Kind(),
			Shape: t.Shape(),
convert/convert_commandr.go (new file, 76 lines)
@@ -0,0 +1,76 @@
+package convert
+
+import (
+	"cmp"
+
+	"github.com/ollama/ollama/fs/ggml"
+)
+
+type commandrModel struct {
+	ModelParameters
+	MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
+	HiddenSize uint32 `json:"hidden_size"`
+	HiddenLayers uint32 `json:"num_hidden_layers"`
+	IntermediateSize uint32 `json:"intermediate_size"`
+	NumAttentionHeads uint32 `json:"num_attention_heads"`
+	NumKeyValueHeads uint32 `json:"num_key_value_heads"`
+	LayerNormEPS float32 `json:"layer_norm_eps"`
+	RopeTheta float32 `json:"rope_theta"`
+	UseQKNorm bool `json:"use_qk_norm"`
+	MaxLength uint32 `json:"model_max_length"`
+	LogitScale float32 `json:"logit_scale"`
+	NCtx uint32 `json:"n_ctx"`
+}
+
+var _ ModelConverter = (*commandrModel)(nil)
+
+func (p *commandrModel) KV(t *Tokenizer) ggml.KV {
+	kv := p.ModelParameters.KV(t)
+	kv["general.architecture"] = "command-r"
+	kv["general.name"] = "command-r"
+	kv["command-r.context_length"] = cmp.Or(p.MaxLength, p.MaxPositionEmbeddings, p.NCtx)
+	kv["command-r.embedding_length"] = p.HiddenSize
+	kv["command-r.block_count"] = p.HiddenLayers
+	kv["command-r.feed_forward_length"] = p.IntermediateSize
+	kv["command-r.attention.head_count"] = p.NumAttentionHeads
+	kv["command-r.attention.head_count_kv"] = p.NumKeyValueHeads
+	kv["command-r.attention.layer_norm_epsilon"] = p.LayerNormEPS
+	kv["command-r.rope.freq_base"] = p.RopeTheta
+	kv["command-r.max_position_embeddings"] = cmp.Or(p.MaxLength, p.MaxPositionEmbeddings)
+	kv["command-r.logit_scale"] = p.LogitScale
+	kv["command-r.rope.scaling.type"] = "none"
+
+	return kv
+}
+
+func (p *commandrModel) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
+	for _, t := range ts {
+		out = append(out, &ggml.Tensor{
+			Name: t.Name(),
+			Kind: t.Kind(),
+			Shape: t.Shape(),
+			WriterTo: t,
+		})
+	}
+
+	return out
+}
+
+func (p *commandrModel) Replacements() []string {
+	return []string{
+		"self_attn.q_norm", "attn_q_norm",
+		"self_attn.k_norm", "attn_k_norm",
+		"model.layers", "blk",
+		"input_layernorm", "attn_norm",
+		"mlp.down_proj", "ffn_down",
+		"mlp.gate_proj", "ffn_gate",
+		"mlp.up_proj", "ffn_up",
+		"self_attn.k_proj", "attn_k",
+		"self_attn.o_proj", "attn_output",
+		"self_attn.q_proj", "attn_q",
+		"self_attn.v_proj", "attn_v",
+		"model.norm", "output_norm",
+		"model.embed_tokens", "token_embd",
+	}
+}
@@ -6,7 +6,7 @@ import (
	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"

-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
)

type gemmaModel struct {
@@ -23,7 +23,7 @@ type gemmaModel struct {

var _ ModelConverter = (*gemmaModel)(nil)

-func (p *gemmaModel) KV(t *Tokenizer) llm.KV {
+func (p *gemmaModel) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "gemma"
	kv["gemma.context_length"] = p.MaxPositionEmbeddings
@@ -42,14 +42,14 @@ func (p *gemmaModel) KV(t *Tokenizer) llm.KV {
	return kv
}

-func (p *gemmaModel) Tensors(ts []Tensor) []llm.Tensor {
-	var out []llm.Tensor
+func (p *gemmaModel) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
	for _, t := range ts {
-		if strings.HasSuffix(t.Name(), "_norm.weight") {
+		if !strings.HasPrefix(t.Name(), "v.") && strings.HasSuffix(t.Name(), "_norm.weight") {
			t.SetRepacker(p.addOne)
		}

-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
			Name: t.Name(),
			Kind: t.Kind(),
			Shape: t.Shape(),
@@ -1,8 +1,6 @@
package convert

-import (
-	"github.com/ollama/ollama/llm"
-)
+import "github.com/ollama/ollama/fs/ggml"

type gemma2Model struct {
	gemmaModel
@@ -11,7 +9,7 @@ type gemma2Model struct {
	FinalLogitSoftcap float32 `json:"final_logit_softcapping"`
}

-func (p *gemma2Model) KV(t *Tokenizer) llm.KV {
+func (p *gemma2Model) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "gemma2"
	kv["gemma2.context_length"] = p.MaxPositionEmbeddings
@@ -6,7 +6,7 @@ import (
	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"

-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
)

type gemma2Adapter struct {
@@ -15,14 +15,14 @@ type gemma2Adapter struct {

var _ AdapterConverter = (*gemma2Adapter)(nil)

-func (p *gemma2Adapter) KV(baseKV llm.KV) llm.KV {
+func (p *gemma2Adapter) KV(baseKV ggml.KV) ggml.KV {
	kv := p.AdapterParameters.KV()
	kv["general.architecture"] = "gemma2"
	return kv
}

-func (p *gemma2Adapter) Tensors(ts []Tensor) []llm.Tensor {
-	var out []llm.Tensor
+func (p *gemma2Adapter) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
	for _, t := range ts {
		shape := t.Shape()
		if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
@@ -31,7 +31,7 @@ func (p *gemma2Adapter) Tensors(ts []Tensor) []llm.Tensor {
			t.SetRepacker(p.repack)
		}

-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
			Name: t.Name(),
			Kind: t.Kind(),
			Shape: t.Shape(),
convert/convert_gemma3.go (new file, 142 lines)
@@ -0,0 +1,142 @@
package convert

import (
	"cmp"

	"github.com/ollama/ollama/fs/ggml"
)

type gemma3Model struct {
	gemmaModel
	Architecture string
	TextModel struct {
		HeadDim uint32 `json:"head_dim"`
		HiddenSize uint32 `json:"hidden_size"`
		HiddenLayers uint32 `json:"num_hidden_layers"`
		IntermediateSize uint32 `json:"intermediate_size"`
		SlidingWindow uint32 `json:"sliding_window"`
	} `json:"text_config"`
	VisionModel struct {
		NumAttentionHeads uint32 `json:"num_attention_heads"` // attention.head_count 16
		LayerNormEpsilon float32 `json:"layer_norm_eps"` // attention.layer_norm_epsilon 1e-05
		NumHiddenLayers uint32 `json:"num_hidden_layers"` // block_count 32
		HiddenSize uint32 `json:"hidden_size"` // embedding_length 1280
		IntermediateSize uint32 `json:"intermediate_size"` // feed_forward_length 5120
		ImageSize uint32 `json:"image_size"` // image_size 560
		NumChannels uint32 `json:"num_channels"` // num_channels 3
		PatchSize uint32 `json:"patch_size"` // patch_size 14
	} `json:"vision_config"`
	MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
	NumAttentionHeads uint32 `json:"num_attention_heads"`
	NumKeyValueHeads uint32 `json:"num_key_value_heads"`
	RMSNormEPS float32 `json:"rms_norm_eps"`
	HeadDim uint32 `json:"head_dim"`
	FinalLogitSoftcap float32 `json:"final_logit_softcapping"`
	RopeLocalTheta float32 `json:"rope_local_base_freq"`
	RopeGlobalTheta float32 `json:"rope_global_base_freq"`
	SlidingWindow uint32 `json:"sliding_window"`
	MultiModalTokensPerImage uint32 `json:"mm_tokens_per_image"`
}

const (
	gemma4BLayerCount  = 34
	gemma12BLayerCount = 48
	gemma27BLayerCount = 62
)

func (p *gemma3Model) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "gemma3"

	numBlocks := cmp.Or(p.HiddenLayers, p.TextModel.HiddenLayers)
	kv["gemma3.block_count"] = numBlocks

	var (
		numHeads   uint32
		numKVHeads uint32
	)

	switch numBlocks {
	case gemma4BLayerCount:
		numHeads = 8
		numKVHeads = 4
	case gemma12BLayerCount:
		numHeads = 16
		numKVHeads = 8
	case gemma27BLayerCount:
		numHeads = 32
		numKVHeads = 16
	default:
		numHeads = p.NumAttentionHeads
		numKVHeads = p.NumKeyValueHeads
	}

	kv["gemma3.attention.head_count"] = numHeads
	kv["gemma3.attention.head_count_kv"] = numKVHeads

	switch p.Architecture {
	case "Gemma3ForCausalLM":
		kv["gemma3.context_length"] = p.MaxPositionEmbeddings
		kv["gemma3.attention.layer_norm_rms_epsilon"] = p.RMSNormEPS
		kv["gemma3.attention.key_length"] = p.HeadDim
		kv["gemma3.attention.value_length"] = p.HeadDim
		kv["gemma3.attention.sliding_window"] = p.SlidingWindow
		kv["gemma3.final_logit_softcapping"] = cmp.Or(p.FinalLogitSoftcap, 30)
		kv["gemma3.rope.local.freq_base"] = cmp.Or(p.RopeLocalTheta, 10000.0)
		kv["gemma3.rope.global.freq_base"] = cmp.Or(p.RopeGlobalTheta, 1000000.0)
		kv["gemma3.embedding_length"] = p.HiddenSize
		kv["gemma3.feed_forward_length"] = p.IntermediateSize
	default:
		kv["gemma3.context_length"] = cmp.Or(p.MaxPositionEmbeddings, 131072)
		kv["gemma3.embedding_length"] = p.TextModel.HiddenSize
		kv["gemma3.feed_forward_length"] = p.TextModel.IntermediateSize
		kv["gemma3.attention.sliding_window"] = p.TextModel.SlidingWindow
		kv["gemma3.vision.block_count"] = p.VisionModel.NumHiddenLayers
		kv["gemma3.vision.embedding_length"] = p.VisionModel.HiddenSize
		kv["gemma3.vision.feed_forward_length"] = p.VisionModel.IntermediateSize
		kv["gemma3.vision.image_size"] = p.VisionModel.ImageSize
		kv["gemma3.vision.patch_size"] = p.VisionModel.PatchSize
		kv["gemma3.vision.num_channels"] = cmp.Or(p.VisionModel.NumChannels, 3)
		kv["gemma3.vision.attention.head_count"] = p.VisionModel.NumAttentionHeads
		kv["gemma3.vision.attention.layer_norm_epsilon"] = cmp.Or(p.VisionModel.LayerNormEpsilon, 1e-6)
		kv["gemma3.attention.key_length"] = cmp.Or(p.TextModel.HeadDim, 256)
		kv["gemma3.attention.value_length"] = cmp.Or(p.TextModel.HeadDim, 256)
	}

	if p.MultiModalTokensPerImage > 0 {
		kv["gemma3.mm.tokens_per_image"] = p.MultiModalTokensPerImage
	}

	return kv
}

func (p *gemma3Model) Replacements() []string {
	return []string{
		"lm_head", "output",
		"model.embed_tokens", "token_embd",
		"model.norm", "output_norm",
		"vision_tower.vision_model.embeddings", "v",
		"vision_tower.vision_model", "v",
		"vision_model.vision_model.embeddings", "v",
		"vision_model.vision_model", "v",
		"language_model.", "",
		"model.layers", "blk",
		"encoder.layers", "blk",
		"input_layernorm", "attn_norm",
		"self_attn.q_proj", "attn_q",
		"self_attn.q_norm", "attn_q_norm",
		"self_attn.k_proj", "attn_k",
		"self_attn.k_norm", "attn_k_norm",
		"self_attn.v_proj", "attn_v",
		"self_attn.o_proj", "attn_output",
		"self_attn.out_proj", "attn_output",
		"mlp.gate_proj", "ffn_gate",
		"mlp.down_proj", "ffn_down",
		"mlp.up_proj", "ffn_up",
		"post_attention_layernorm", "post_attention_norm",
		"pre_feedforward_layernorm", "ffn_norm",
		"post_feedforward_layernorm", "post_ffw_norm",
		"input_projection_weight", "input_projection.weight",
		"multi_modal_projector", "mm",
	}
}
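Note: the gemma3 KV mapping above leans on cmp.Or to pick the first non-zero value, so any field missing from the config (and therefore zero) falls back to a default such as 131072 for the context length or 256 for the head dimensions. A minimal standalone sketch of that behaviour, with made-up values for illustration:

package main

import (
	"cmp"
	"fmt"
)

func main() {
	// cmp.Or returns its first non-zero argument.
	var fromConfig uint32                     // zero: pretend max_position_embeddings was absent
	fmt.Println(cmp.Or(fromConfig, 131072))   // 131072 (the default wins)
	fmt.Println(cmp.Or(uint32(8192), 131072)) // 8192 (the config value wins)
}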
convert/convert_gemma3n.go (new file, 165 lines)
@@ -0,0 +1,165 @@
package convert

import (
	"slices"
	"strings"

	"github.com/ollama/ollama/fs/ggml"
	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"
	"gonum.org/v1/gonum/stat/distuv"
)

type gemma3nModel struct {
	ModelParameters

	TextModel struct {
		ActivationSparsityPattern []float32 `json:"activation_sparsity_pattern"`
		AltupActiveIdx uint32 `json:"altup_active_idx"`
		AltupCoefClip float32 `json:"altup_coef_clip"`
		AltupCorrectScale bool `json:"altup_correct_scale"`
		AltupLRMultiplier float32 `json:"altup_lr_multiplier"`
		AltupNumInputs uint32 `json:"altup_num_inputs"`
		HeadDim uint32 `json:"head_dim"`
		HiddenSize uint32 `json:"hidden_size"`
		HiddenSizePerLayerInput uint32 `json:"hidden_size_per_layer_input"`
		IntermediateSize uint32 `json:"intermediate_size"`
		MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
		NumAttentionHeads uint32 `json:"num_attention_heads"`
		NumHiddenLayers uint32 `json:"num_hidden_layers"`
		NumKeyValueHeads uint32 `json:"num_key_value_heads"`
		NumKVSharedLayers uint32 `json:"num_kv_shared_layers"`
		RMSNormEPS float32 `json:"rms_norm_eps"`
		RopeLocalBaseFreq float32 `json:"rope_local_base_freq"`
		RopeTheta float32 `json:"rope_theta"`
		SlidingWindow uint32 `json:"sliding_window"`
		LayerTypes []string `json:"layer_types"`
	} `json:"text_config"`
	VisionModel struct{} `json:"vision_config"`
}

func (m *gemma3nModel) KV(t *Tokenizer) ggml.KV {
	kv := m.ModelParameters.KV(t)
	kv["general.architecture"] = "gemma3n"
	kv["gemma3n.activation_sparsity_scale"] = slices.Collect(func(yield func(float32) bool) {
		norm := distuv.Normal{Mu: 0, Sigma: 1}
		for _, v := range m.TextModel.ActivationSparsityPattern {
			if !yield(float32(norm.Quantile(float64(v)))) {
				break
			}
		}
	})
	kv["gemma3n.altup.active_idx"] = m.TextModel.AltupActiveIdx
	kv["gemma3n.altup.correct_scale"] = m.TextModel.AltupCorrectScale
	kv["gemma3n.altup.lr_multiplier"] = m.TextModel.AltupLRMultiplier
	kv["gemma3n.altup.num_inputs"] = m.TextModel.AltupNumInputs
	kv["gemma3n.attention.head_count_kv"] = m.TextModel.NumKeyValueHeads
	kv["gemma3n.attention.head_count"] = m.TextModel.NumAttentionHeads
	kv["gemma3n.attention.layer_norm_rms_epsilon"] = m.TextModel.RMSNormEPS
	kv["gemma3n.attention.sliding_window"] = m.TextModel.SlidingWindow
	kv["gemma3n.attention.sliding_window_pattern"] = slices.Collect(func(yield func(bool) bool) {
		for _, t := range m.TextModel.LayerTypes {
			if !yield(t == "sliding_attention") {
				break
			}
		}
	})
	kv["gemma3n.attention.shared_kv_layers"] = m.TextModel.NumKVSharedLayers
	kv["gemma3n.block_count"] = m.TextModel.NumHiddenLayers
	kv["gemma3n.context_length"] = m.TextModel.MaxPositionEmbeddings
	kv["gemma3n.embedding_length_per_layer_input"] = m.TextModel.HiddenSizePerLayerInput
	kv["gemma3n.embedding_length"] = m.TextModel.HiddenSize
	kv["gemma3n.feed_forward_length"] = m.TextModel.IntermediateSize
	kv["gemma3n.head_dim"] = m.TextModel.HeadDim
	kv["gemma3n.rope.freq_base_local"] = m.TextModel.RopeLocalBaseFreq
	kv["gemma3n.rope.freq_base"] = m.TextModel.RopeTheta
	return kv
}

func (m *gemma3nModel) Tensors(ts []Tensor) []*ggml.Tensor {
	out, ts := mergeTensors(ts,
		merge{"altup_proj.*.weight", "altup_proj.weight"},
		merge{"altup_unembd_proj.*.weight", "altup_unembd_proj.weight"},
	)

	for _, t := range ts {
		switch {
		case strings.Contains(t.Name(), "audio_tower"),
			strings.Contains(t.Name(), "embed_audio"),
			strings.Contains(t.Name(), "vision_tower"),
			strings.Contains(t.Name(), "embed_vision"):
			// TODO: handle audio and vision towers
			continue
		case strings.Contains(t.Name(), "altup_predict_coef"),
			strings.Contains(t.Name(), "altup_correct_coef"):
			if m.TextModel.AltupCoefClip > 0 {
				t.SetRepacker(func(name string, data []float32, shape []uint64) (_ []float32, err error) {
					dims := make([]int, len(shape))
					for i := range shape {
						dims[i] = int(shape[i])
					}

					var t tensor.Tensor = tensor.New(tensor.WithShape(dims...), tensor.WithBacking(data))

					t, err = tensor.Clamp(t, -m.TextModel.AltupCoefClip, m.TextModel.AltupCoefClip)
					if err != nil {
						return nil, err
					}

					if err := t.Reshape(t.Shape().TotalSize()); err != nil {
						return nil, err
					}

					return native.VectorF32(t.(*tensor.Dense))
				})
			}
		}

		out = append(out, &ggml.Tensor{
			Name: t.Name(),
			Kind: t.Kind(),
			Shape: t.Shape(),
			WriterTo: t,
		})
	}

	return out
}

func (m *gemma3nModel) Replacements() []string {
	return []string{
		"model.language_model.embed_tokens_per_layer", "per_layer_token_embd",
		"model.language_model.embed_tokens", "token_embd",
		"model.language_model.per_layer_model_projection", "per_layer_model_proj",
		"model.language_model.per_layer_projection_norm", "per_layer_proj_norm",
		"model.language_model.altup_projections", "altup_proj",
		"model.language_model.altup_unembed_projections", "altup_unembd_proj",
		"model.language_model.norm", "output_norm",
		"model.language_model.layers", "blk",

		"input_layernorm", "attn_norm",
		"self_attn.q_proj", "attn_q",
		"self_attn.q_norm", "attn_q_norm",
		"self_attn.k_proj", "attn_k",
		"self_attn.k_norm", "attn_k_norm",
		"self_attn.v_proj", "attn_v",
		"self_attn.o_proj", "attn_output",
		"post_attention_layernorm", "post_attention_norm",
		"pre_feedforward_layernorm", "ffn_norm",
		"mlp.gate_proj", "ffn_gate",
		"mlp.up_proj", "ffn_up",
		"mlp.down_proj", "ffn_down",
		"post_feedforward_layernorm", "post_ffw_norm",
		"per_layer_input_gate", "inp_gate",
		"per_layer_projection", "proj",
		"post_per_layer_input_norm", "post_norm",
		"altup.", "altup_",
		"modality_router", "router",
		"prediction_coefs", "predict_coef",
		"correction_coefs", "correct_coef",
		"correct_output_scale", "correct_scale.weight",
		"laurel.", "laurel_",
		"linear_left", "l",
		"linear_right", "r",
		"post_laurel_norm", "post_norm",
	}
}
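Note: gemma3n.activation_sparsity_scale is derived above by pushing each entry of activation_sparsity_pattern through the standard-normal quantile function, i.e. the stored value is the cutoff z such that P(Z < z) equals the configured fraction. A small sketch of the same mapping using the gonum API the converter imports (the sample fractions are made up for illustration):

package main

import (
	"fmt"

	"gonum.org/v1/gonum/stat/distuv"
)

func main() {
	norm := distuv.Normal{Mu: 0, Sigma: 1}
	for _, p := range []float64{0.25, 0.5, 0.95} {
		// Quantile inverts the standard-normal CDF: P(Z < z) == p.
		fmt.Printf("sparsity %.2f -> cutoff z = %.3f\n", p, norm.Quantile(p))
	}
	// prints roughly -0.674, 0.000, 1.645
}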
convert/convert_gptoss.go (new file, 266 lines)
@@ -0,0 +1,266 @@
package convert

import (
	"bytes"
	"cmp"
	"encoding/binary"
	"io"
	"slices"
	"strings"

	"github.com/ollama/ollama/fs/ggml"
	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"
)

type gptossModel struct {
	ModelParameters
	HiddenLayers uint32 `json:"num_hidden_layers"`
	MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
	HiddenSize uint32 `json:"hidden_size"`
	IntermediateSize uint32 `json:"intermediate_size"`
	AttentionHeads uint32 `json:"num_attention_heads"`
	KeyValueHeads uint32 `json:"num_key_value_heads"`
	HeadDim uint32 `json:"head_dim"`
	Experts uint32 `json:"num_experts"`
	LocalExperts uint32 `json:"num_local_experts"`
	ExpertsPerToken uint32 `json:"experts_per_token"`
	RMSNormEpsilon float32 `json:"rms_norm_eps"`
	InitialContextLength uint32 `json:"initial_context_length"`
	RopeTheta float32 `json:"rope_theta"`
	RopeScalingFactor float32 `json:"rope_scaling_factor"`
	RopeScaling struct {
		Factor float32 `json:"factor"`
	} `json:"rope_scaling"`
	SlidingWindow uint32 `json:"sliding_window"`
}

var _ ModelConverter = (*gptossModel)(nil)

func (m *gptossModel) KV(t *Tokenizer) ggml.KV {
	kv := m.ModelParameters.KV(t)
	kv["general.architecture"] = "gptoss"
	kv["general.file_type"] = uint32(4)
	kv["gptoss.context_length"] = cmp.Or(m.MaxPositionEmbeddings, uint32(m.RopeScalingFactor*float32(m.InitialContextLength)))
	kv["gptoss.block_count"] = m.HiddenLayers
	kv["gptoss.embedding_length"] = m.HiddenSize
	kv["gptoss.feed_forward_length"] = m.IntermediateSize
	kv["gptoss.expert_count"] = cmp.Or(m.Experts, m.LocalExperts)
	kv["gptoss.expert_used_count"] = m.ExpertsPerToken
	kv["gptoss.attention.head_count"] = m.AttentionHeads
	kv["gptoss.attention.head_count_kv"] = m.KeyValueHeads
	kv["gptoss.attention.key_length"] = m.HeadDim
	kv["gptoss.attention.value_length"] = m.HeadDim
	kv["gptoss.attention.layer_norm_rms_epsilon"] = cmp.Or(m.RMSNormEpsilon, 1e-5)
	kv["gptoss.attention.sliding_window"] = m.SlidingWindow
	kv["gptoss.rope.freq_base"] = m.RopeTheta
	kv["gptoss.rope.scaling.factor"] = cmp.Or(m.RopeScalingFactor, m.RopeScaling.Factor)
	kv["gptoss.rope.scaling.original_context_length"] = m.InitialContextLength
	kv["tokenizer.ggml.bos_token_id"] = uint32(199998) // <|startoftext|>
	kv["tokenizer.ggml.add_bos_token"] = false
	kv["tokenizer.ggml.eos_token_id"] = uint32(199999) // <|endoftext|>
	kv["tokenizer.ggml.eos_token_ids"] = []int32{
		199999, /* <|endoftext|> */
		200002, /* <|return|> */
		200012, /* <|call|> */
	}
	kv["tokenizer.ggml.add_eos_token"] = false
	return kv
}

func (m *gptossModel) Tensors(ts []Tensor) []*ggml.Tensor {
	var out []*ggml.Tensor
	mxfp4s := make(map[string]*mxfp4)
	for _, t := range ts {
		if strings.HasSuffix(t.Name(), ".blocks") || strings.HasSuffix(t.Name(), ".scales") {
			dot := strings.LastIndex(t.Name(), ".")
			name, suffix := t.Name()[:dot], t.Name()[dot+1:]
			if _, ok := mxfp4s[name]; !ok {
				mxfp4s[name] = &mxfp4{}
			}

			switch suffix {
			case "blocks":
				mxfp4s[name].blocks = t
			case "scales":
				mxfp4s[name].scales = t
			}
		} else if strings.HasSuffix(t.Name(), "gate_up_exps.bias") {
			// gate_up_exps is interleaved, need to split into gate_exps and up_exps
			// e.g. gate_exps, up_exps = gate_up_exps[:, 0::2, ...], gate_up_exps[:, 1::2, ...]
			out = append(out, slices.Collect(splitDim(t, 1,
				split{
					Replacer: strings.NewReplacer("gate_up_exps", "gate_exps"),
					slices: []tensor.Slice{nil, tensor.S(0, int(t.Shape()[1]), 2)},
				},
				split{
					Replacer: strings.NewReplacer("gate_up_exps", "up_exps"),
					slices: []tensor.Slice{nil, tensor.S(1, int(t.Shape()[1]), 2)},
				},
			))...)
		} else {
			out = append(out, &ggml.Tensor{
				Name: t.Name(),
				Kind: t.Kind(),
				Shape: t.Shape(),
				WriterTo: t,
			})
		}
	}

	for name, mxfp4 := range mxfp4s {
		dims := mxfp4.blocks.Shape()
		if strings.Contains(name, "ffn_down_exps") {
			out = append(out, &ggml.Tensor{
				Name: name + ".weight",
				Kind: uint32(ggml.TensorTypeMXFP4),
				Shape: []uint64{dims[0], dims[1], dims[2] * dims[3] * 2},
				WriterTo: mxfp4,
			})
		} else if strings.Contains(name, "ffn_gate_up_exps") {
			// gate_up_exps is interleaved, need to split into gate_exps and up_exps
			// e.g. gate_exps, up_exps = gate_up_exps[:, 0::2, ...], gate_up_exps[:, 1::2, ...]
			out = append(out, &ggml.Tensor{
				Name: strings.Replace(name, "gate_up", "gate", 1) + ".weight",
				Kind: uint32(ggml.TensorTypeMXFP4),
				Shape: []uint64{dims[0], dims[1] / 2, dims[2] * dims[3] * 2},
				WriterTo: mxfp4.slice(1, 0, int(dims[1]), 2),
			}, &ggml.Tensor{
				Name: strings.Replace(name, "gate_up", "up", 1) + ".weight",
				Kind: uint32(ggml.TensorTypeMXFP4),
				Shape: []uint64{dims[0], dims[1] / 2, dims[2] * dims[3] * 2},
				WriterTo: mxfp4.slice(1, 1, int(dims[1]), 2),
			})
		}
	}

	return out
}

func (m *gptossModel) Replacements() []string {
	var replacements []string
	if m.MaxPositionEmbeddings > 0 {
		// hf flavored model
		replacements = []string{
			"lm_head", "output",
			"model.embed_tokens", "token_embd",
			"model.layers", "blk",
			"input_layernorm", "attn_norm",
			"self_attn.q_proj", "attn_q",
			"self_attn.k_proj", "attn_k",
			"self_attn.v_proj", "attn_v",
			"self_attn.o_proj", "attn_out",
			"self_attn.sinks", "attn_sinks",
			"post_attention_layernorm", "ffn_norm",
			"mlp.router", "ffn_gate_inp",
			"mlp.experts.gate_up_proj_", "ffn_gate_up_exps.",
			"mlp.experts.down_proj_", "ffn_down_exps.",
			"model.norm", "output_norm",
		}
	} else {
		replacements = []string{
			// noop replacements so other replacements will not be applied
			".blocks", ".blocks",
			".scales", ".scales",
			// real replacements
			"block", "blk",
			"attn.norm", "attn_norm",
			"attn.qkv", "attn_qkv",
			"attn.sinks", "attn_sinks",
			"attn.out", "attn_out",
			"mlp.norm", "ffn_norm",
			"mlp.gate", "ffn_gate_inp",
			"mlp.mlp1_", "ffn_gate_up_exps.",
			"mlp.mlp2_", "ffn_down_exps.",
			"embedding", "token_embd",
			"norm", "output_norm",
			"unembedding", "output",
			"scale", "weight",
		}
	}
	return replacements
}

type mxfp4 struct {
	slices []tensor.Slice

	blocks, scales Tensor
}

func (m *mxfp4) slice(dim, start, end, step int) *mxfp4 {
	slice := slices.Repeat([]tensor.Slice{nil}, len(m.blocks.Shape()))
	slice[dim] = tensor.S(start, end, step)
	return &mxfp4{
		slices: slice,
		blocks: m.blocks,
		scales: m.scales,
	}
}

func (m *mxfp4) WriteTo(w io.Writer) (int64, error) {
	var b bytes.Buffer
	if _, err := m.blocks.WriteTo(&b); err != nil {
		return 0, err
	}

	blocksDims := make([]int, len(m.blocks.Shape()))
	for i, d := range m.blocks.Shape() {
		blocksDims[i] = int(d)
	}

	bts := b.Bytes()
	var tmp [16]byte
	for i := 0; i < b.Len(); i += 16 {
		for j := range 8 {
			// transform a1b2c3 ... x7y8z9 -> 71xa82yb93zc
			a, b := bts[i+j], bts[i+j+8]
			tmp[2*j+0] = (a & 0x0F) | (b << 4)
			tmp[2*j+1] = (a >> 4) | (b & 0xF0)
		}

		copy(bts[i:i+16], tmp[:])
	}

	var blocks tensor.Tensor = tensor.New(tensor.WithShape(blocksDims...), tensor.WithBacking(bts))

	var s bytes.Buffer
	if _, err := m.scales.WriteTo(&s); err != nil {
		return 0, err
	}

	scalesDims := slices.Repeat([]int{1}, len(m.blocks.Shape()))
	for i, d := range m.scales.Shape() {
		scalesDims[i] = int(d)
	}

	var scales tensor.Tensor = tensor.New(tensor.WithShape(scalesDims...), tensor.WithBacking(s.Bytes()))

	out, err := tensor.Concat(3, scales, blocks)
	if err != nil {
		return 0, err
	}

	if len(m.slices) > 0 {
		out, err = out.Slice(m.slices...)
		if err != nil {
			return 0, err
		}
	}

	out = tensor.Materialize(out)

	if err := out.Reshape(out.Shape().TotalSize()); err != nil {
		return 0, err
	}

	u8s, err := native.VectorU8(out.(*tensor.Dense))
	if err != nil {
		return 0, err
	}

	if err := binary.Write(w, binary.LittleEndian, u8s); err != nil {
		return 0, err
	}

	return int64(len(u8s)), nil
}
||||||
@@ -9,7 +9,7 @@ import (
 	"github.com/pdevine/tensor"
 	"github.com/pdevine/tensor/native"
 
-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
 )
 
 type llamaModel struct {
@@ -33,7 +33,7 @@ type llamaModel struct {
 		Factor float32 `json:"factor"`
 		LowFrequencyFactor float32 `json:"low_freq_factor"`
 		HighFrequencyFactor float32 `json:"high_freq_factor"`
-		OriginalMaxPositionalEmbeddings uint32 `json:"original_max_positional_embeddings"`
+		OriginalMaxPositionEmbeddings uint32 `json:"original_max_position_embeddings"`
 
 		factors ropeFactor
 	} `json:"rope_scaling"`
@@ -42,11 +42,13 @@ type llamaModel struct {
 	LayerNormEpsilon float32 `json:"layer_norm_epsilon"`
 	NormEpsilon float32 `json:"norm_epsilon"`
 	HeadDim uint32 `json:"head_dim"`
+
+	skipRepack bool
 }
 
 var _ ModelConverter = (*llamaModel)(nil)
 
-func (p *llamaModel) KV(t *Tokenizer) llm.KV {
+func (p *llamaModel) KV(t *Tokenizer) ggml.KV {
 	kv := p.ModelParameters.KV(t)
 	kv["general.architecture"] = "llama"
 	kv["llama.vocab_size"] = p.VocabSize
@@ -70,6 +72,10 @@ func (p *llamaModel) KV(t *Tokenizer) llm.KV {
 		kv["llama.rope.dimension_count"] = p.HiddenSize / headCount
 	}
 
+	if p.HeadDim > 0 {
+		kv["llama.attention.head_dim"] = p.HeadDim
+	}
+
 	if p.RopeTheta > 0 {
 		kv["llama.rope.freq_base"] = p.RopeTheta
 	}
@@ -84,7 +90,7 @@ func (p *llamaModel) KV(t *Tokenizer) llm.KV {
 		factorLow := cmp.Or(p.RopeScaling.LowFrequencyFactor, 1.0)
 		factorHigh := cmp.Or(p.RopeScaling.HighFrequencyFactor, 4.0)
 
-		original := cmp.Or(p.RopeScaling.OriginalMaxPositionalEmbeddings, 8192)
+		original := cmp.Or(p.RopeScaling.OriginalMaxPositionEmbeddings, 8192)
 		lambdaLow := float32(original) / factorLow
 		lambdaHigh := float32(original) / factorHigh
 
@@ -120,11 +126,11 @@ func (p *llamaModel) KV(t *Tokenizer) llm.KV {
 	return kv
 }
 
-func (p *llamaModel) Tensors(ts []Tensor) []llm.Tensor {
-	var out []llm.Tensor
+func (p *llamaModel) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
 
 	if p.RopeScaling.factors != nil {
-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
 			Name: "rope_freqs.weight",
 			Kind: 0,
 			Shape: []uint64{uint64(len(p.RopeScaling.factors))},
@@ -133,12 +139,14 @@ func (p *llamaModel) Tensors(ts []Tensor) []llm.Tensor {
 	}
 
 	for _, t := range ts {
-		if strings.HasSuffix(t.Name(), "attn_q.weight") ||
-			strings.HasSuffix(t.Name(), "attn_k.weight") {
-			t.SetRepacker(p.repack)
+		if strings.HasSuffix(t.Name(), "attn_q.weight") || strings.HasSuffix(t.Name(), "attn_k.weight") ||
+			strings.HasSuffix(t.Name(), "attn_q_proj.weight") || strings.HasSuffix(t.Name(), "attn_k_proj.weight") {
+			if !p.skipRepack {
+				t.SetRepacker(p.repack)
+			}
 		}
 
-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
 			Name: t.Name(),
 			Kind: t.Kind(),
 			Shape: t.Shape(),
@@ -174,9 +182,9 @@ func (p *llamaModel) repack(name string, data []float32, shape []uint64) ([]floa
 	}
 
 	var heads uint32
-	if strings.HasSuffix(name, "attn_q.weight") {
+	if strings.HasSuffix(name, "attn_q.weight") || strings.HasSuffix(name, "attn_q_proj.weight") {
 		heads = p.NumAttentionHeads
-	} else if strings.HasSuffix(name, "attn_k.weight") {
+	} else if strings.HasSuffix(name, "attn_k.weight") || strings.HasSuffix(name, "attn_k_proj.weight") {
 		heads = cmp.Or(p.NumKeyValueHeads, p.NumAttentionHeads)
 	} else {
 		return nil, fmt.Errorf("unknown tensor for repack: %s", name)
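Note: in the rope-scaling hunk above, the renamed struct field simply reads the corrected JSON key original_max_position_embeddings; the arithmetic is unchanged. For reference, with the defaults visible in the hunk (original = 8192, factorLow = 1.0, factorHigh = 4.0) the thresholds work out to lambdaLow = 8192 / 1.0 = 8192 and lambdaHigh = 8192 / 4.0 = 2048; how those thresholds are applied to the per-dimension rope factors lies outside the lines shown here.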
convert/convert_llama4.go (new file, 169 lines)
@@ -0,0 +1,169 @@
package convert

import (
	"slices"
	"strings"

	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"

	"github.com/ollama/ollama/fs/ggml"
)

type llama4Model struct {
	ModelParameters
	TextModel struct {
		llamaModel
		NumExpertsPerToken uint32 `json:"num_experts_per_tok"`
		NumLocalExperts uint32 `json:"num_local_experts"`
		InterleaveMOELayerStep uint32 `json:"interleave_moe_layer_step"`
		UseQKNorm bool `json:"use_qk_norm"`
		IntermediateSizeMLP uint32 `json:"intermediate_size_mlp"`
		AttentionChunkSize uint32 `json:"attention_chunk_size"`
	} `json:"text_config"`
	VisionModel struct {
		NumHiddenLayers uint32 `json:"num_hidden_layers"`
		HiddenSize uint32 `json:"hidden_size"`
		IntermediateSize uint32 `json:"intermediate_size"`
		NumAttentionHeads uint32 `json:"num_attention_heads"`
		ImageSize uint32 `json:"image_size"`
		PatchSize uint32 `json:"patch_size"`
		RopeTheta float32 `json:"rope_theta"`
		NormEpsilon float32 `json:"norm_eps"`
		PixelShuffleRatio float32 `json:"pixel_shuffle_ratio"`
	} `json:"vision_config"`
}

// KV implements ModelConverter.
func (p *llama4Model) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "llama4"

	for k, v := range p.TextModel.KV(t) {
		if strings.HasPrefix(k, "llama.") {
			kv[strings.ReplaceAll(k, "llama.", "llama4.")] = v
		}
	}

	kv["llama4.feed_forward_length"] = p.TextModel.IntermediateSizeMLP
	kv["llama4.expert_feed_forward_length"] = p.TextModel.IntermediateSize

	kv["llama4.expert_count"] = p.TextModel.NumLocalExperts
	kv["llama4.expert_used_count"] = p.TextModel.NumExpertsPerToken
	kv["llama4.interleave_moe_layer_step"] = p.TextModel.InterleaveMOELayerStep
	kv["llama4.use_qk_norm"] = p.TextModel.UseQKNorm
	kv["llama4.attention.chunk_size"] = p.TextModel.AttentionChunkSize

	kv["llama4.vision.block_count"] = p.VisionModel.NumHiddenLayers
	kv["llama4.vision.embedding_length"] = p.VisionModel.HiddenSize
	kv["llama4.vision.feed_forward_length"] = p.VisionModel.IntermediateSize
	kv["llama4.vision.attention.head_count"] = p.VisionModel.NumAttentionHeads
	kv["llama4.vision.image_size"] = p.VisionModel.ImageSize
	kv["llama4.vision.patch_size"] = p.VisionModel.PatchSize
	kv["llama4.vision.rope.freq_base"] = p.VisionModel.RopeTheta
	kv["llama4.vision.layer_norm_epsilon"] = p.VisionModel.NormEpsilon
	kv["llama4.vision.pixel_shuffle_ratio"] = p.VisionModel.PixelShuffleRatio
	return kv
}

// Replacements implements ModelConverter.
func (p *llama4Model) Replacements() []string {
	return append(
		p.TextModel.Replacements(),
		"language_model.", "",
		"vision_model", "v",
		"multi_modal_projector", "mm",
		"feed_forward.down_proj", "ffn_down",
		"feed_forward.up_proj", "ffn_up",
		"feed_forward.gate_proj", "ffn_gate",
		"feed_forward.", "ffn_",
		"shared_expert.down_proj", "down_shexp",
		"shared_expert.gate_proj", "gate_shexp",
		"shared_expert.up_proj", "up_shexp",
		"experts.down_proj", "down_exps.weight",
		"experts.gate_up_proj", "gate_up_exps.weight",
		"router", "gate_inp",
		"patch_embedding.linear", "patch_embedding",
	)
}

// Tensors implements ModelConverter.
func (p *llama4Model) Tensors(ts []Tensor) []*ggml.Tensor {
	var out []*ggml.Tensor

	var textTensors []Tensor
	for _, t := range ts {
		if strings.HasPrefix(t.Name(), "v.") || strings.HasPrefix(t.Name(), "mm.") {
			out = append(out, &ggml.Tensor{
				Name: t.Name(),
				Kind: t.Kind(),
				Shape: t.Shape(),
				WriterTo: t,
			})
		} else if strings.Contains(t.Name(), "ffn_gate_up_exps") {
			// gate and up projectors are fused
			// dims[1], dims[2] must be swapped
			// [experts, hidden_size, intermediate_size * 2] --> [experts, intermediate_size, hidden_size]
			halfDim := int(t.Shape()[2]) / 2

			newShape := slices.Clone(t.Shape())
			newShape[1], newShape[2] = newShape[2]/2, newShape[1]
			for i, name := range []string{"ffn_gate_exps", "ffn_up_exps"} {
				// clone tensor since we need separate repackers
				tt := t.Clone()
				tt.SetRepacker(p.repack(nil, nil, tensor.S(i*halfDim, (i+1)*halfDim)))
				out = append(out, &ggml.Tensor{
					Name: strings.ReplaceAll(tt.Name(), "ffn_gate_up_exps", name),
					Kind: tt.Kind(),
					Shape: newShape,
					WriterTo: tt,
				})
			}
		} else if strings.Contains(t.Name(), "ffn_down_exps") {
			// dims[1], dims[2] must be swapped
			// [experts, intermediate_size, hidden_size] --> [experts, hidden_size, intermediate_size]
			t.SetRepacker(p.repack())
			newShape := slices.Clone(t.Shape())
			newShape[1], newShape[2] = newShape[2], newShape[1]
			out = append(out, &ggml.Tensor{
				Name: t.Name(),
				Kind: t.Kind(),
				Shape: newShape,
				WriterTo: t,
			})
		} else {
			textTensors = append(textTensors, t)
		}
	}

	p.TextModel.skipRepack = true
	out = append(out, p.TextModel.Tensors(textTensors)...)
	return out
}

func (p *llama4Model) repack(slice ...tensor.Slice) Repacker {
	return func(name string, data []float32, shape []uint64) ([]float32, error) {
		dims := make([]int, len(shape))
		for i, dim := range shape {
			dims[i] = int(dim)
		}

		var t tensor.Tensor = tensor.New(tensor.WithShape(dims...), tensor.WithBacking(data))
		t, err := t.Slice(slice...)
		if err != nil {
			return nil, err
		}

		if err := t.T(0, 2, 1); err != nil {
			return nil, err
		}

		t = tensor.Materialize(t)
		// flatten tensor so it can be return as a vector
		if err := t.Reshape(t.Shape().TotalSize()); err != nil {
			return nil, err
		}

		return native.VectorF32(t.(*tensor.Dense))
	}
}
@@ -7,7 +7,7 @@ import (
 	"github.com/pdevine/tensor"
 	"github.com/pdevine/tensor/native"
 
-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
 )
 
 type llamaAdapter struct {
@@ -18,7 +18,7 @@ type llamaAdapter struct {
 
 var _ AdapterConverter = (*llamaAdapter)(nil)
 
-func (p *llamaAdapter) KV(baseKV llm.KV) llm.KV {
+func (p *llamaAdapter) KV(baseKV ggml.KV) ggml.KV {
 	kv := p.AdapterParameters.KV()
 	kv["general.architecture"] = "llama"
 	kv["llama.attention.head_count"] = baseKV["llama.attention.head_count"]
@@ -29,8 +29,8 @@ func (p *llamaAdapter) KV(baseKV llm.KV) llm.KV {
 	return kv
 }
 
-func (p *llamaAdapter) Tensors(ts []Tensor) []llm.Tensor {
-	var out []llm.Tensor
+func (p *llamaAdapter) Tensors(ts []Tensor) []*ggml.Tensor {
+	var out []*ggml.Tensor
 	for _, t := range ts {
 		shape := t.Shape()
 		if (strings.HasSuffix(t.Name(), "weight.lora_a") && shape[0] > shape[1]) ||
@@ -41,7 +41,7 @@ func (p *llamaAdapter) Tensors(ts []Tensor) []llm.Tensor {
 			t.SetRepacker(p.repack)
 		}
 
-		out = append(out, llm.Tensor{
+		out = append(out, &ggml.Tensor{
 			Name: t.Name(),
 			Kind: t.Kind(),
 			Shape: shape,
convert/convert_mistral.go (new file, 190 lines)
@@ -0,0 +1,190 @@
package convert

import (
	"cmp"
	"fmt"
	"strings"

	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"

	"github.com/ollama/ollama/fs/ggml"
)

type mistral3Model struct {
	ModelParameters
	ImageTokenIndex uint32 `json:"image_token_index"`
	SpatialMergeSize uint32 `json:"spatial_merge_size"`
	VisionFeatureLayer int32 `json:"vision_feature_layer"`
	TextModel struct {
		NumHiddenLayers uint32 `json:"num_hidden_layers"`
		MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
		HiddenSize uint32 `json:"hidden_size"`
		IntermediateSize uint32 `json:"intermediate_size"`
		NumAttentionHeads uint32 `json:"num_attention_heads"`
		NumKeyValueHeads uint32 `json:"num_key_value_heads"`
		RopeTheta float32 `json:"rope_theta"`
		RMSNormEPS float32 `json:"rms_norm_eps"`
		HeadDim uint32 `json:"head_dim"`
		SlidingWindow *uint32 `json:"sliding_window"`
		HiddenAct string `json:"hidden_act"`
		VocabSize uint32 `json:"vocab_size"`
	} `json:"text_config"`
	VisionModel struct {
		NumAttentionHeads uint32 `json:"num_attention_heads"`
		NumHiddenLayers uint32 `json:"num_hidden_layers"`
		HiddenSize uint32 `json:"hidden_size"`
		IntermediateSize uint32 `json:"intermediate_size"`
		ImageSize uint32 `json:"image_size"`
		NumChannels uint32 `json:"num_channels"`
		PatchSize uint32 `json:"patch_size"`
		HeadDim uint32 `json:"head_dim"`
		HiddenAct string `json:"hidden_act"`
		RopeTheta float32 `json:"rope_theta"`
	} `json:"vision_config"`
	MultiModalProjectorBias bool `json:"multimodal_projector_bias"`
	ProjectorHiddenAct string `json:"projector_hidden_act"`
}

func (p *mistral3Model) KV(t *Tokenizer) ggml.KV {
	kv := p.ModelParameters.KV(t)
	kv["general.architecture"] = "mistral3"
	kv["mistral3.vocab_size"] = p.TextModel.VocabSize

	// Text configuration
	kv["mistral3.block_count"] = p.TextModel.NumHiddenLayers
	kv["mistral3.context_length"] = p.TextModel.MaxPositionEmbeddings
	kv["mistral3.embedding_length"] = p.TextModel.HiddenSize
	kv["mistral3.feed_forward_length"] = p.TextModel.IntermediateSize
	kv["mistral3.attention.head_count"] = p.TextModel.NumAttentionHeads
	kv["mistral3.attention.head_count_kv"] = p.TextModel.NumKeyValueHeads
	kv["mistral3.attention.layer_norm_rms_epsilon"] = p.TextModel.RMSNormEPS
	kv["mistral3.attention.key_length"] = p.TextModel.HeadDim
	kv["mistral3.attention.value_length"] = p.TextModel.HeadDim
	kv["mistral3.rope.dimension_count"] = p.TextModel.HiddenSize / p.TextModel.NumHiddenLayers
	kv["mistral3.rope.freq_base"] = p.TextModel.RopeTheta

	// Vision configuration
	kv["mistral3.vision.block_count"] = p.VisionModel.NumHiddenLayers
	kv["mistral3.vision.embedding_length"] = p.VisionModel.HiddenSize
	kv["mistral3.vision.feed_forward_length"] = p.VisionModel.IntermediateSize
	kv["mistral3.vision.attention.head_count"] = p.VisionModel.NumAttentionHeads
	kv["mistral3.vision.attention.key_length"] = p.VisionModel.HeadDim
	kv["mistral3.vision.image_size"] = p.VisionModel.ImageSize
	kv["mistral3.vision.patch_size"] = p.VisionModel.PatchSize
	kv["mistral3.vision.num_channels"] = p.VisionModel.NumChannels
	// kv["mistral3.vision.attention.layer_norm_epsilon"] = 1e-05 // Default value
	kv["mistral3.vision.rope.freq_base"] = p.VisionModel.RopeTheta

	// Multimodal configuration
	kv["mistral3.image_token_index"] = p.ImageTokenIndex
	kv["mistral3.spatial_merge_size"] = p.SpatialMergeSize

	kv["mistral3.mm.projector_bias"] = p.MultiModalProjectorBias

	if p.ProjectorHiddenAct != "" {
		kv["mistral3.mm.projector_hidden_act"] = p.ProjectorHiddenAct
	}

	return kv
}

func (p *mistral3Model) Tensors(ts []Tensor) []*ggml.Tensor {
	var out []*ggml.Tensor

	for _, t := range ts {
		if !strings.HasPrefix(t.Name(), "v.") {
			if strings.HasSuffix(t.Name(), ".attn_q.weight") ||
				strings.HasSuffix(t.Name(), ".attn_k.weight") {
				t.SetRepacker(p.repack)
			}
		}

		out = append(out, &ggml.Tensor{
			Name: t.Name(),
			Kind: t.Kind(),
			Shape: t.Shape(),
			WriterTo: t,
		})
	}

	return out
}

func (p *mistral3Model) Replacements() []string {
	return []string{
		"language_model.model.norm", "output_norm",
		"language_model.model.", "",
		"language_model.", "",
		"layers", "blk",
		"transformer.layers", "blk",
		"vision_tower", "v",
		"ln_pre", "encoder_norm",
		"input_layernorm", "attn_norm",
		"post_attention_layernorm", "ffn_norm",
		"embed_tokens", "token_embd",
		"self_attn.q_proj", "attn_q",
		"self_attn.k_proj", "attn_k",
		"self_attn.v_proj", "attn_v",
		"self_attn.o_proj", "attn_output",
		"mlp.down_proj", "ffn_down",
		"mlp.gate_proj", "ffn_gate",
		"mlp.up_proj", "ffn_up",
		"attention.q_proj", "attn_q",
		"attention.k_proj", "attn_k",
		"attention.v_proj", "attn_v",
		"attention.o_proj", "attn_output",
		"attention_norm", "attn_norm",
		"feed_forward.gate_proj", "ffn_gate",
		"feed_forward.down_proj", "ffn_down",
		"feed_forward.up_proj", "ffn_up",
		"multi_modal_projector", "mm",
		"ffn_norm", "ffn_norm",
		"lm_head", "output",
	}
}

func (p *mistral3Model) repack(name string, data []float32, shape []uint64) ([]float32, error) {
	var dims []int
	for _, dim := range shape {
		dims = append(dims, int(dim))
	}

	var heads uint32
	if strings.HasSuffix(name, ".attn_q.weight") {
		heads = p.TextModel.NumAttentionHeads
	} else if strings.HasSuffix(name, ".attn_k.weight") {
		heads = cmp.Or(p.TextModel.NumKeyValueHeads, p.TextModel.NumAttentionHeads)
	} else {
		return nil, fmt.Errorf("unknown tensor for repack: %s", name)
	}

	n := tensor.New(tensor.WithShape(dims...), tensor.WithBacking(data))
	if err := n.Reshape(append([]int{int(heads), 2, dims[0] / int(heads) / 2}, dims[1:]...)...); err != nil {
		return nil, err
	}

	if err := n.T(0, 2, 1, 3); err != nil {
		return nil, err
	}

	if err := n.Reshape(dims...); err != nil {
		return nil, err
	}

	if err := n.Transpose(); err != nil {
		return nil, err
	}

	ts, err := native.SelectF32(n, 1)
	if err != nil {
		return nil, err
	}

	var f32s []float32
	for _, t := range ts {
		f32s = append(f32s, t...)
	}

	return f32s, nil
}
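Note: the repack above is the usual HF-to-GGUF rotary permutation for Q and K projection weights, expressed through the tensor library's reshape and transpose calls. Setting aside the final flattening steps, its net effect on the weight rows can be sketched without the tensor dependency; permuteQK and the row-slice representation below are purely illustrative and not part of the converter:

package main

import "fmt"

// permuteQK reorders rows head by head: output row k within a head is taken
// from input row (k%2)*(d/2) + k/2, where d is rows per head. This is the
// reshape(heads, 2, d/2, cols) plus axis-swap permutation the repack performs.
func permuteQK(rows [][]float32, heads int) [][]float32 {
	d := len(rows) / heads
	out := make([][]float32, len(rows))
	for h := 0; h < heads; h++ {
		for k := 0; k < d; k++ {
			out[h*d+k] = rows[h*d+(k%2)*(d/2)+k/2]
		}
	}
	return out
}

func main() {
	rows := [][]float32{{0}, {1}, {2}, {3}} // 4 rows, 1 head
	fmt.Println(permuteQK(rows, 1))         // [[0] [2] [1] [3]]
}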
@@ -2,11 +2,8 @@ package convert
 
 import (
 	"fmt"
-	"io"
-	"slices"
-	"strings"
 
-	"github.com/ollama/ollama/llm"
+	"github.com/ollama/ollama/fs/ggml"
 )
 
 type mixtralModel struct {
@@ -15,7 +12,7 @@ type mixtralModel struct {
 	NumExpertsPerToken uint32 `json:"num_experts_per_tok"`
 }
 
-func (p *mixtralModel) KV(t *Tokenizer) llm.KV {
+func (p *mixtralModel) KV(t *Tokenizer) ggml.KV {
 	kv := p.llamaModel.KV(t)
 
 	if p.NumLocalExperts > 0 {
@@ -29,66 +26,39 @@ func (p *mixtralModel) KV(t *Tokenizer) llm.KV {
 	return kv
 }
 
-func (p *mixtralModel) Tensors(ts []Tensor) []llm.Tensor {
-	oldnew := []string{
-		"model.layers", "blk",
-		"w1", "ffn_gate_exps",
-		"w2", "ffn_down_exps",
-		"w3", "ffn_up_exps",
-	}
-
-	for i := range p.NumLocalExperts {
-		oldnew = append(oldnew, fmt.Sprintf(".block_sparse_moe.experts.%d.", i), ".")
-	}
-
-	// group experts of the same layer (model.layers.%d) and type (w[123]) into a single tensor
-	namer := strings.NewReplacer(oldnew...)
-	experts := make(map[string]experts)
-
-	// merge experts into a single tensor while removing them from ts
-	ts = slices.DeleteFunc(ts, func(t Tensor) bool {
-		if !strings.Contains(t.Name(), ".block_sparse_moe.experts.") {
-			return false
-		}
-
-		name := namer.Replace(t.Name())
-		experts[name] = append(experts[name], t)
-		return true
-	})
-
-	var out []llm.Tensor
-	for n, e := range experts {
-		// TODO(mxyng): sanity check experts
-		out = append(out, llm.Tensor{
-			Name: n,
-			Kind: e[0].Kind(),
-			Shape: append([]uint64{uint64(len(e))}, e[0].Shape()...),
-			WriterTo: e,
-		})
-	}
-
+func (p *mixtralModel) Tensors(ts []Tensor) []*ggml.Tensor {
+	merges := make([]merge, 0, p.NumHiddenLayers*6)
+	for i := range p.NumHiddenLayers {
+		merges = append(merges, merge{
+			fmt.Sprintf("blk.%d.*.w1.weight", i),
+			fmt.Sprintf("blk.%d.ffn_gate_exps.weight", i),
+		}, merge{
+			fmt.Sprintf("blk.%d.*.w1.bias", i),
+			fmt.Sprintf("blk.%d.ffn_gate_exps.bias", i),
+		}, merge{
+			fmt.Sprintf("blk.%d.*.w2.weight", i),
+			fmt.Sprintf("blk.%d.ffn_up_exps.weight", i),
+		}, merge{
+			fmt.Sprintf("blk.%d.*.w2.bias", i),
+			fmt.Sprintf("blk.%d.ffn_up_exps.bias", i),
+		}, merge{
+			fmt.Sprintf("blk.%d.*.w3.weight", i),
+			fmt.Sprintf("blk.%d.ffn_down_exps.weight", i),
+		}, merge{
+			fmt.Sprintf("blk.%d.*.w3.bias", i),
+			fmt.Sprintf("blk.%d.ffn_down_exps.bias", i),
+		})
+	}
+
+	out, ts := mergeTensors(ts, merges...)
 	return append(out, p.llamaModel.Tensors(ts)...)
 }
 
 func (p *mixtralModel) Replacements() []string {
 	return append(
 		p.llamaModel.Replacements(),
+		"model.layers", "blk",
 		"block_sparse_moe.gate", "ffn_gate_inp",
+		"block_sparse_moe.experts.", ".",
 	)
 }
-
-type experts []Tensor
-
-func (e experts) WriteTo(w io.Writer) (int64, error) {
-	// TODO(mxyng): experts _should_ be numerically sorted by expert but this should check
-	for _, t := range e {
-		// the canonical merged experts tensor stacks all experts along a new, 0 axis,
-		// e.g. `tensor.Stack(0, e[0], e[1:]...)`, which requires allocating temporary buffers
-		// this accomplishes the same thing by writing each expert tensor in sequence
-		if _, err := t.WriteTo(w); err != nil {
-			return 0, err
-		}
-	}
-
-	return 0, nil
-}
convert/convert_mllama.go (new file, 179 lines)
@@ -0,0 +1,179 @@
package convert

import (
	"strings"

	"github.com/ollama/ollama/fs/ggml"
	"github.com/pdevine/tensor"
	"github.com/pdevine/tensor/native"
)

type mllamaModel struct {
	ModelParameters
	TextModel struct {
		llamaModel

		CrossAttentionLayers []int32 `json:"cross_attention_layers"`
	} `json:"text_config"`
	VisionModel struct {
		NumHiddenLayers uint32 `json:"num_hidden_layers"`
		NumGlobalLayers uint32 `json:"num_global_layers"`
		IntermediateLayersIndices []int32 `json:"intermediate_layers_indices"`

		HiddenSize uint32 `json:"hidden_size"`
		IntermediateSize uint32 `json:"intermediate_size"`

		AttentionHeads uint32 `json:"attention_heads"`

		ImageSize uint32 `json:"image_size"`
		PatchSize uint32 `json:"patch_size"`
		NumChannels uint32 `json:"num_channels"`
		MaxNumTiles uint32 `json:"max_num_tiles"`
		NormEpsilon float32 `json:"norm_eps"`
		RopeTheta float32 `json:"rope.freq_base"`
	} `json:"vision_config"`
}

func (m *mllamaModel) KV(t *Tokenizer) ggml.KV {
	kv := m.ModelParameters.KV(t)
	kv["general.architecture"] = "mllama"

	for k, v := range m.TextModel.KV(t) {
		if strings.HasPrefix(k, "llama.") {
			kv[strings.ReplaceAll(k, "llama.", "mllama.")] = v
		}
	}

	kv["mllama.attention.cross_attention_layers"] = m.TextModel.CrossAttentionLayers

	kv["mllama.vision.block_count"] = m.VisionModel.NumHiddenLayers
	kv["mllama.vision.global.block_count"] = m.VisionModel.NumGlobalLayers
	kv["mllama.vision.intermediate_layers_indices"] = m.VisionModel.IntermediateLayersIndices

	kv["mllama.vision.embedding_length"] = m.VisionModel.HiddenSize
	kv["mllama.vision.feed_forward_length"] = m.VisionModel.IntermediateSize

	kv["mllama.vision.attention.head_count"] = m.VisionModel.AttentionHeads
	kv["mllama.vision.attention.layer_norm_epsilon"] = m.VisionModel.NormEpsilon

	kv["mllama.vision.image_size"] = m.VisionModel.ImageSize
	kv["mllama.vision.patch_size"] = m.VisionModel.PatchSize
	kv["mllama.vision.max_num_tiles"] = m.VisionModel.MaxNumTiles
	kv["mllama.vision.num_channels"] = m.VisionModel.NumChannels

	return kv
}

func (m *mllamaModel) Replacements() []string {
	return append(
		m.TextModel.Replacements(),
		"language_model.", "",
		"gate_attn", "attn_gate",
		"gate_ffn", "ffn_gate",
		"cross_attn.", "cross_attn_",
		"vision_model", "v",
		"class_embedding", "class_embd",
		"patch_embedding", "patch_embd",
		"gated_positional_embedding.tile_embedding", "tile_position_embd",
		"gated_positional_embedding.embedding", "position_embd.weight",
		"gated_positional_embedding", "position_embd",
		"embedding.weight", "weight",
		"pre_tile_positional_embedding", "pre_tile_position_embd",
		"post_tile_positional_embedding", "post_tile_position_embd",
		"layernorm_pre", "pre_ln",
		"layernorm_post", "post_ln",
		"global_transformer.layers", "global.blk",
		"transformer.layers", "blk",
		"mlp.fc1", "ffn_up",
		"mlp.fc2", "ffn_down",
		"multi_modal_projector", "mm.0",
	)
}

func (m *mllamaModel) Tensors(ts []Tensor) []*ggml.Tensor {
	var out []*ggml.Tensor
	var text []Tensor
	for _, t := range ts {
		if !strings.HasPrefix(t.Name(), "v.") && !strings.HasPrefix(t.Name(), "mm.") {
			text = append(text, t)
		} else if t.Name() == "v.position_embd.gate" {
			for _, name := range []string{"v.position_embd.gate", "v.tile_position_embd.gate"} {
				tt := t.Clone()
				tt.SetRepacker(m.repack(name))
				out = append(out, &ggml.Tensor{
					Name: name,
					Kind: t.Kind(),
					Shape: t.Shape(),
					WriterTo: tt,
				})
			}
		} else {
			if t.Name() == "v.pre_tile_position_embd.gate" || t.Name() == "v.post_tile_position_embd.gate" {
				t.SetRepacker(m.repack(t.Name()))
			} else if strings.HasSuffix(t.Name(), "attn_q.weight") || strings.HasSuffix(t.Name(), "attn_k.weight") {
				t.SetRepacker(m.repack(t.Name()))
			} else if strings.HasSuffix(t.Name(), "attn_gate") || strings.HasSuffix(t.Name(), "ffn_gate") {
				t.SetRepacker(m.repack(t.Name()))
			}

			out = append(out, &ggml.Tensor{
				Name: t.Name(),
				Kind: t.Kind(),
				Shape: t.Shape(),
				WriterTo: t,
			})
		}
	}

	return append(out, m.TextModel.Tensors(text)...)
}

func (m *mllamaModel) repack(name string) Repacker {
	return func(_ string, data []float32, shape []uint64) (_ []float32, err error) {
		dims := make([]int, len(shape))
		for i, dim := range shape {
			dims[i] = int(dim)
		}

		var t tensor.Tensor = tensor.New(tensor.WithShape(dims...), tensor.WithBacking(data))

		if strings.HasSuffix(name, "attn_q.weight") || strings.HasSuffix(name, "attn_k.weight") {
			heads := m.VisionModel.AttentionHeads
			if err := t.Reshape(append([]int{int(heads), 2, dims[0] / int(heads) / 2}, dims[1:]...)...); err != nil {
				return nil, err
			}

			if err := t.T(0, 2, 1, 3); err != nil {
				return nil, err
			}

			if err := t.Reshape(dims...); err != nil {
				return nil, err
			}

			if err := t.Transpose(); err != nil {
				return nil, err
			}
		} else {
			t, err = tensor.Tanh(t)
			if err != nil {
				return nil, err
			}

			if name == "v.position_embd.gate" {
				t, err = tensor.Sub(float32(1), t)
				if err != nil {
					return nil, err
				}
			}
		}

		t = tensor.Materialize(t)
		// flatten tensor so it can be return as a vector
		if err := t.Reshape(t.Shape().TotalSize()); err != nil {
			return nil, err
		}

		return native.VectorF32(t.(*tensor.Dense))
	}
}
||||||
@@ -8,7 +8,7 @@ import (
|
|||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
)
|
)
|
||||||
|
|
||||||
type phi3Model struct {
|
type phi3Model struct {
|
||||||
@@ -37,7 +37,7 @@ type phi3Model struct {
|
|||||||
|
|
||||||
var _ ModelConverter = (*phi3Model)(nil)
|
var _ ModelConverter = (*phi3Model)(nil)
|
||||||
|
|
||||||
func (p *phi3Model) KV(t *Tokenizer) llm.KV {
|
func (p *phi3Model) KV(t *Tokenizer) ggml.KV {
|
||||||
kv := p.ModelParameters.KV(t)
|
kv := p.ModelParameters.KV(t)
|
||||||
kv["general.architecture"] = "phi3"
|
kv["general.architecture"] = "phi3"
|
||||||
kv["phi3.context_length"] = p.MaxPositionEmbeddings
|
kv["phi3.context_length"] = p.MaxPositionEmbeddings
|
||||||
@@ -68,19 +68,19 @@ func (p *phi3Model) KV(t *Tokenizer) llm.KV {
|
|||||||
return kv
|
return kv
|
||||||
}
|
}
|
||||||
|
|
||||||
func (p *phi3Model) Tensors(ts []Tensor) []llm.Tensor {
|
func (p *phi3Model) Tensors(ts []Tensor) []*ggml.Tensor {
|
||||||
var addRopeFactors sync.Once
|
var addRopeFactors sync.Once
|
||||||
|
|
||||||
out := make([]llm.Tensor, 0, len(ts)+2)
|
out := make([]*ggml.Tensor, 0, len(ts)+2)
|
||||||
for _, t := range ts {
|
for _, t := range ts {
|
||||||
if strings.HasPrefix(t.Name(), "blk.0.") {
|
if strings.HasPrefix(t.Name(), "blk.0.") {
|
||||||
addRopeFactors.Do(func() {
|
addRopeFactors.Do(func() {
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, &ggml.Tensor{
|
||||||
Name: "rope_factors_long.weight",
|
Name: "rope_factors_long.weight",
|
||||||
Kind: 0,
|
Kind: 0,
|
||||||
Shape: []uint64{uint64(len(p.RopeScaling.LongFactor))},
|
Shape: []uint64{uint64(len(p.RopeScaling.LongFactor))},
|
||||||
WriterTo: p.RopeScaling.LongFactor,
|
WriterTo: p.RopeScaling.LongFactor,
|
||||||
}, llm.Tensor{
|
}, &ggml.Tensor{
|
||||||
Name: "rope_factors_short.weight",
|
Name: "rope_factors_short.weight",
|
||||||
Kind: 0,
|
Kind: 0,
|
||||||
Shape: []uint64{uint64(len(p.RopeScaling.ShortFactor))},
|
Shape: []uint64{uint64(len(p.RopeScaling.ShortFactor))},
|
||||||
@@ -89,7 +89,7 @@ func (p *phi3Model) Tensors(ts []Tensor) []llm.Tensor {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
out = append(out, llm.Tensor{
|
out = append(out, &ggml.Tensor{
|
||||||
Name: t.Name(),
|
Name: t.Name(),
|
||||||
Kind: t.Kind(),
|
Kind: t.Kind(),
|
||||||
Shape: t.Shape(),
|
Shape: t.Shape(),
|
||||||
@@ -118,6 +118,5 @@ func (p *phi3Model) Replacements() []string {
|
|||||||
type ropeFactor []float32
|
type ropeFactor []float32
|
||||||
|
|
||||||
func (r ropeFactor) WriteTo(w io.Writer) (int64, error) {
|
func (r ropeFactor) WriteTo(w io.Writer) (int64, error) {
|
||||||
err := binary.Write(w, binary.LittleEndian, r)
|
return 0, binary.Write(w, binary.LittleEndian, r)
|
||||||
return 0, err
|
|
||||||
}
|
}
|
||||||
|
|||||||
81
convert/convert_qwen2.go
Normal file
81
convert/convert_qwen2.go
Normal file
@@ -0,0 +1,81 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import "github.com/ollama/ollama/fs/ggml"
|
||||||
|
|
||||||
|
type qwen2Model struct {
|
||||||
|
ModelParameters
|
||||||
|
MaxPositionEmbeddings uint32 `json:"max_position_embeddings"`
|
||||||
|
HiddenSize uint32 `json:"hidden_size"`
|
||||||
|
HiddenLayers uint32 `json:"num_hidden_layers"`
|
||||||
|
IntermediateSize uint32 `json:"intermediate_size"`
|
||||||
|
NumAttentionHeads uint32 `json:"num_attention_heads"`
|
||||||
|
NumKeyValueHeads uint32 `json:"num_key_value_heads"`
|
||||||
|
RopeTheta float32 `json:"rope_theta"`
|
||||||
|
RopeScaling struct {
|
||||||
|
Type string `json:"type"`
|
||||||
|
Factor ropeFactor `json:"factor"`
|
||||||
|
OriginalMaxPositionEmbeddings uint32 `json:"original_max_position_embeddings"`
|
||||||
|
MropeSection []int32 `json:"mrope_section"`
|
||||||
|
} `json:"rope_scaling"`
|
||||||
|
RMSNormEPS float32 `json:"rms_norm_eps"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var _ ModelConverter = (*qwen2Model)(nil)
|
||||||
|
|
||||||
|
func (q *qwen2Model) KV(t *Tokenizer) ggml.KV {
|
||||||
|
kv := q.ModelParameters.KV(t)
|
||||||
|
kv["general.architecture"] = "qwen2"
|
||||||
|
kv["qwen2.block_count"] = q.HiddenLayers
|
||||||
|
kv["qwen2.context_length"] = q.MaxPositionEmbeddings
|
||||||
|
kv["qwen2.embedding_length"] = q.HiddenSize
|
||||||
|
kv["qwen2.feed_forward_length"] = q.IntermediateSize
|
||||||
|
kv["qwen2.attention.head_count"] = q.NumAttentionHeads
|
||||||
|
kv["qwen2.attention.head_count_kv"] = q.NumKeyValueHeads
|
||||||
|
kv["qwen2.rope.freq_base"] = q.RopeTheta
|
||||||
|
kv["qwen2.attention.layer_norm_rms_epsilon"] = q.RMSNormEPS
|
||||||
|
|
||||||
|
switch q.RopeScaling.Type {
|
||||||
|
case "":
|
||||||
|
// no scaling
|
||||||
|
case "yarn":
|
||||||
|
kv["qwen2.rope.scaling.type"] = q.RopeScaling.Type
|
||||||
|
kv["qwen2.rope.scaling.factor"] = q.RopeScaling.Factor
|
||||||
|
case "mrope", "default":
|
||||||
|
kv["qwen2.rope.mrope_section"] = q.RopeScaling.MropeSection
|
||||||
|
default:
|
||||||
|
panic("unknown rope scaling type")
|
||||||
|
}
|
||||||
|
return kv
|
||||||
|
}
|
||||||
|
|
||||||
|
func (q *qwen2Model) Tensors(ts []Tensor) []*ggml.Tensor {
|
||||||
|
var out []*ggml.Tensor
|
||||||
|
for _, t := range ts {
|
||||||
|
out = append(out, &ggml.Tensor{
|
||||||
|
Name: t.Name(),
|
||||||
|
Kind: t.Kind(),
|
||||||
|
Shape: t.Shape(),
|
||||||
|
WriterTo: t,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *qwen2Model) Replacements() []string {
|
||||||
|
return []string{
|
||||||
|
"lm_head", "output",
|
||||||
|
"model.embed_tokens", "token_embd",
|
||||||
|
"model.layers", "blk",
|
||||||
|
"input_layernorm", "attn_norm",
|
||||||
|
"self_attn.k_proj", "attn_k",
|
||||||
|
"self_attn.v_proj", "attn_v",
|
||||||
|
"self_attn.q_proj", "attn_q",
|
||||||
|
"self_attn.o_proj", "attn_output",
|
||||||
|
"mlp.down_proj", "ffn_down",
|
||||||
|
"mlp.gate_proj", "ffn_gate",
|
||||||
|
"mlp.up_proj", "ffn_up",
|
||||||
|
"post_attention_layernorm", "ffn_norm",
|
||||||
|
"model.norm", "output_norm",
|
||||||
|
}
|
||||||
|
}
|
||||||
102
convert/convert_qwen25vl.go
Normal file
102
convert/convert_qwen25vl.go
Normal file
@@ -0,0 +1,102 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"cmp"
|
||||||
|
"slices"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
)
|
||||||
|
|
||||||
|
type qwen25VLModel struct {
|
||||||
|
qwen2Model
|
||||||
|
|
||||||
|
VisionModel struct {
|
||||||
|
Depth uint32 `json:"depth"`
|
||||||
|
HiddenSize uint32 `json:"hidden_size"`
|
||||||
|
NumHeads uint32 `json:"num_heads"`
|
||||||
|
InChannels uint32 `json:"in_chans"`
|
||||||
|
PatchSize uint32 `json:"patch_size"`
|
||||||
|
SpatialMergeSize uint32 `json:"spatial_merge_size"`
|
||||||
|
SpatialPatchSize uint32 `json:"spatial_patch_size"`
|
||||||
|
WindowSize uint32 `json:"window_size"`
|
||||||
|
RMSNormEps float32 `json:"layer_norm_epsilon"`
|
||||||
|
RopeTheta float32 `json:"rope_theta"`
|
||||||
|
FullAttentionBlocks []int32 `json:"fullatt_block_indexes"`
|
||||||
|
TemporalPatchSize uint32 `json:"temporal_patch_size"`
|
||||||
|
} `json:"vision_config"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var _ ModelConverter = (*qwen25VLModel)(nil)
|
||||||
|
|
||||||
|
func (q *qwen25VLModel) KV(t *Tokenizer) ggml.KV {
|
||||||
|
kv := q.ModelParameters.KV(t)
|
||||||
|
kv["general.architecture"] = "qwen25vl"
|
||||||
|
|
||||||
|
for k, v := range q.qwen2Model.KV(t) {
|
||||||
|
if strings.HasPrefix(k, "qwen2.") {
|
||||||
|
kv[strings.Replace(k, "qwen2.", "qwen25vl.", 1)] = v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if q.VisionModel.FullAttentionBlocks == nil {
|
||||||
|
kv["qwen25vl.vision.fullatt_block_indexes"] = []int32{7, 15, 23, 31}
|
||||||
|
}
|
||||||
|
|
||||||
|
kv["qwen25vl.vision.block_count"] = cmp.Or(q.VisionModel.Depth, 32)
|
||||||
|
kv["qwen25vl.vision.embedding_length"] = q.VisionModel.HiddenSize
|
||||||
|
kv["qwen25vl.vision.attention.head_count"] = cmp.Or(q.VisionModel.NumHeads, 16)
|
||||||
|
kv["qwen25vl.vision.num_channels"] = q.VisionModel.InChannels
|
||||||
|
kv["qwen25vl.vision.patch_size"] = cmp.Or(q.VisionModel.PatchSize, 14)
|
||||||
|
kv["qwen25vl.vision.spatial_merge_size"] = cmp.Or(q.VisionModel.SpatialMergeSize, 2)
|
||||||
|
kv["qwen25vl.vision.spatial_patch_size"] = q.VisionModel.SpatialPatchSize
|
||||||
|
kv["qwen25vl.vision.window_size"] = cmp.Or(q.VisionModel.WindowSize, 112)
|
||||||
|
kv["qwen25vl.vision.attention.layer_norm_epsilon"] = cmp.Or(q.VisionModel.RMSNormEps, 1e-6)
|
||||||
|
kv["qwen25vl.vision.rope.freq_base"] = cmp.Or(q.VisionModel.RopeTheta, 1e4)
|
||||||
|
kv["qwen25vl.vision.fullatt_block_indexes"] = q.VisionModel.FullAttentionBlocks
|
||||||
|
kv["qwen25vl.vision.temporal_patch_size"] = cmp.Or(q.VisionModel.TemporalPatchSize, 2)
|
||||||
|
|
||||||
|
return kv
|
||||||
|
}
|
||||||
|
|
||||||
|
func (q *qwen25VLModel) Tensors(ts []Tensor) []*ggml.Tensor {
|
||||||
|
var out []*ggml.Tensor
|
||||||
|
|
||||||
|
for _, t := range ts {
|
||||||
|
if strings.Contains(t.Name(), "patch_embed.proj") {
|
||||||
|
for t := range splitDim(t, 2,
|
||||||
|
split{Replacer: strings.NewReplacer("patch_embed.proj", "patch_embd_0")},
|
||||||
|
split{Replacer: strings.NewReplacer("patch_embed.proj", "patch_embd_1")},
|
||||||
|
) {
|
||||||
|
t.Shape = slices.DeleteFunc(t.Shape, func(i uint64) bool { return i == 1 })
|
||||||
|
out = append(out, t)
|
||||||
|
}
|
||||||
|
} else if strings.Contains(t.Name(), "attn.qkv") {
|
||||||
|
out = append(out, slices.Collect(splitDim(t, 0,
|
||||||
|
split{Replacer: strings.NewReplacer("attn.qkv", "attn_q")},
|
||||||
|
split{Replacer: strings.NewReplacer("attn.qkv", "attn_k")},
|
||||||
|
split{Replacer: strings.NewReplacer("attn.qkv", "attn_v")},
|
||||||
|
))...)
|
||||||
|
} else {
|
||||||
|
out = append(out, &ggml.Tensor{
|
||||||
|
Name: t.Name(),
|
||||||
|
Kind: t.Kind(),
|
||||||
|
Shape: t.Shape(),
|
||||||
|
WriterTo: t,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func (p *qwen25VLModel) Replacements() []string {
|
||||||
|
return append(
|
||||||
|
p.qwen2Model.Replacements(),
|
||||||
|
"visual", "v",
|
||||||
|
"blocks", "blk",
|
||||||
|
"attn.proj", "attn_out",
|
||||||
|
"norm1", "ln1",
|
||||||
|
"norm2", "ln2",
|
||||||
|
)
|
||||||
|
}
|
||||||
@@ -11,16 +11,14 @@ import (
|
|||||||
"io"
|
"io"
|
||||||
"io/fs"
|
"io/fs"
|
||||||
"log/slog"
|
"log/slog"
|
||||||
"math"
|
"maps"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
"slices"
|
"slices"
|
||||||
"strings"
|
"strings"
|
||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
"golang.org/x/exp/maps"
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
|
||||||
"github.com/ollama/ollama/llm"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
type tensorData struct {
|
type tensorData struct {
|
||||||
@@ -29,7 +27,7 @@ type tensorData struct {
|
|||||||
Shape []int `json:"shape"`
|
Shape []int `json:"shape"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
func convertFull(t *testing.T, fsys fs.FS) (*os.File, ggml.KV, ggml.Tensors) {
|
||||||
t.Helper()
|
t.Helper()
|
||||||
|
|
||||||
f, err := os.CreateTemp(t.TempDir(), "f16")
|
f, err := os.CreateTemp(t.TempDir(), "f16")
|
||||||
@@ -48,7 +46,7 @@ func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
|||||||
}
|
}
|
||||||
t.Cleanup(func() { r.Close() })
|
t.Cleanup(func() { r.Close() })
|
||||||
|
|
||||||
m, _, err := llm.DecodeGGML(r, math.MaxInt)
|
m, err := ggml.Decode(r, -1)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
@@ -60,7 +58,7 @@ func convertFull(t *testing.T, fsys fs.FS) (*os.File, llm.KV, llm.Tensors) {
|
|||||||
return r, m.KV(), m.Tensors()
|
return r, m.KV(), m.Tensors()
|
||||||
}
|
}
|
||||||
|
|
||||||
func generateResultsJSON(t *testing.T, f *os.File, kv llm.KV, tensors llm.Tensors) map[string]string {
|
func generateResultsJSON(t *testing.T, f *os.File, kv ggml.KV, tensors ggml.Tensors) map[string]string {
|
||||||
actual := make(map[string]string)
|
actual := make(map[string]string)
|
||||||
for k, v := range kv {
|
for k, v := range kv {
|
||||||
if s, ok := v.(json.Marshaler); !ok {
|
if s, ok := v.(json.Marshaler); !ok {
|
||||||
@@ -75,7 +73,7 @@ func generateResultsJSON(t *testing.T, f *os.File, kv llm.KV, tensors llm.Tensor
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, tensor := range tensors.Items {
|
for _, tensor := range tensors.Items() {
|
||||||
sha256sum := sha256.New()
|
sha256sum := sha256.New()
|
||||||
sr := io.NewSectionReader(f, int64(tensors.Offset+tensor.Offset), int64(tensor.Size()))
|
sr := io.NewSectionReader(f, int64(tensors.Offset+tensor.Offset), int64(tensor.Size()))
|
||||||
if _, err := io.Copy(sha256sum, sr); err != nil {
|
if _, err := io.Copy(sha256sum, sr); err != nil {
|
||||||
@@ -108,6 +106,8 @@ func TestConvertModel(t *testing.T) {
|
|||||||
"Phi-3-mini-128k-instruct",
|
"Phi-3-mini-128k-instruct",
|
||||||
"all-MiniLM-L6-v2",
|
"all-MiniLM-L6-v2",
|
||||||
"gemma-2-9b-it",
|
"gemma-2-9b-it",
|
||||||
|
"Qwen2.5-0.5B-Instruct",
|
||||||
|
"c4ai-command-r-v01",
|
||||||
}
|
}
|
||||||
|
|
||||||
for i := range cases {
|
for i := range cases {
|
||||||
@@ -129,15 +129,14 @@ func TestConvertModel(t *testing.T) {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
defer expectFile.Close()
|
||||||
|
|
||||||
var expect map[string]string
|
var expect map[string]string
|
||||||
if err := json.NewDecoder(expectFile).Decode(&expect); err != nil {
|
if err := json.NewDecoder(expectFile).Decode(&expect); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
keys := maps.Keys(expect)
|
for _, k := range slices.Sorted(maps.Keys(expect)) {
|
||||||
slices.Sort(keys)
|
|
||||||
for _, k := range keys {
|
|
||||||
if v, ok := actual[k]; !ok {
|
if v, ok := actual[k]; !ok {
|
||||||
t.Errorf("missing %s", k)
|
t.Errorf("missing %s", k)
|
||||||
} else if v != expect[k] {
|
} else if v != expect[k] {
|
||||||
@@ -330,7 +329,7 @@ func TestConvertAdapter(t *testing.T) {
|
|||||||
}
|
}
|
||||||
defer r.Close()
|
defer r.Close()
|
||||||
|
|
||||||
m, _, err := llm.DecodeGGML(r, math.MaxInt)
|
m, err := ggml.Decode(r, -1)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
@@ -341,9 +340,7 @@ func TestConvertAdapter(t *testing.T) {
|
|||||||
|
|
||||||
actual := generateResultsJSON(t, r, m.KV(), m.Tensors())
|
actual := generateResultsJSON(t, r, m.KV(), m.Tensors())
|
||||||
|
|
||||||
keys := maps.Keys(c.Expected)
|
for _, k := range slices.Sorted(maps.Keys(c.Expected)) {
|
||||||
slices.Sort(keys)
|
|
||||||
for _, k := range keys {
|
|
||||||
if v, ok := actual[k]; !ok {
|
if v, ok := actual[k]; !ok {
|
||||||
t.Errorf("missing %s", k)
|
t.Errorf("missing %s", k)
|
||||||
} else if v != c.Expected[k] {
|
} else if v != c.Expected[k] {
|
||||||
|
|||||||
@@ -1,58 +0,0 @@
|
|||||||
package convert
|
|
||||||
|
|
||||||
import (
|
|
||||||
"archive/zip"
|
|
||||||
"errors"
|
|
||||||
"io"
|
|
||||||
"io/fs"
|
|
||||||
"os"
|
|
||||||
"path/filepath"
|
|
||||||
)
|
|
||||||
|
|
||||||
type ZipReader struct {
|
|
||||||
r *zip.Reader
|
|
||||||
p string
|
|
||||||
|
|
||||||
// limit is the maximum size of a file that can be read directly
|
|
||||||
// from the zip archive. Files larger than this size will be extracted
|
|
||||||
limit int64
|
|
||||||
}
|
|
||||||
|
|
||||||
func NewZipReader(r *zip.Reader, p string, limit int64) fs.FS {
|
|
||||||
return &ZipReader{r, p, limit}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (z *ZipReader) Open(name string) (fs.File, error) {
|
|
||||||
r, err := z.r.Open(name)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
defer r.Close()
|
|
||||||
|
|
||||||
if fi, err := r.Stat(); err != nil {
|
|
||||||
return nil, err
|
|
||||||
} else if fi.Size() < z.limit {
|
|
||||||
return r, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
if !filepath.IsLocal(name) {
|
|
||||||
return nil, zip.ErrInsecurePath
|
|
||||||
}
|
|
||||||
|
|
||||||
n := filepath.Join(z.p, name)
|
|
||||||
if _, err := os.Stat(n); errors.Is(err, os.ErrNotExist) {
|
|
||||||
w, err := os.Create(n)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
defer w.Close()
|
|
||||||
|
|
||||||
if _, err := io.Copy(w, r); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
} else if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
return os.Open(n)
|
|
||||||
}
|
|
||||||
@@ -11,14 +11,15 @@ type Tensor interface {
|
|||||||
Name() string
|
Name() string
|
||||||
Shape() []uint64
|
Shape() []uint64
|
||||||
Kind() uint32
|
Kind() uint32
|
||||||
SetRepacker(repacker)
|
SetRepacker(Repacker)
|
||||||
WriteTo(io.Writer) (int64, error)
|
WriteTo(io.Writer) (int64, error)
|
||||||
|
Clone() Tensor
|
||||||
}
|
}
|
||||||
|
|
||||||
type tensorBase struct {
|
type tensorBase struct {
|
||||||
name string
|
name string
|
||||||
shape []uint64
|
shape []uint64
|
||||||
repacker
|
repacker Repacker
|
||||||
}
|
}
|
||||||
|
|
||||||
func (t tensorBase) Name() string {
|
func (t tensorBase) Name() string {
|
||||||
@@ -30,42 +31,46 @@ func (t tensorBase) Shape() []uint64 {
|
|||||||
}
|
}
|
||||||
|
|
||||||
const (
|
const (
|
||||||
tensorKindF32 uint32 = iota
|
tensorKindFP32 uint32 = iota
|
||||||
tensorKindF16
|
tensorKindFP16
|
||||||
|
tensorKindBF16 = 30
|
||||||
|
tensorKindMXFP4 = 39
|
||||||
)
|
)
|
||||||
|
|
||||||
func (t tensorBase) Kind() uint32 {
|
func (t tensorBase) Kind() uint32 {
|
||||||
if strings.HasSuffix(t.name, ".ffn_gate_inp.weight") ||
|
if strings.HasSuffix(t.name, ".ffn_gate_inp.weight") ||
|
||||||
t.name == "token_types.weight" {
|
strings.HasSuffix(t.name, ".bias") ||
|
||||||
|
t.name == "token_types.weight" ||
|
||||||
|
t.name == "v.positional_embedding_vlm" ||
|
||||||
|
t.name == "v.tile_position_embd.weight" ||
|
||||||
|
t.name == "v.pre_tile_position_embd.weight" ||
|
||||||
|
t.name == "v.post_tile_position_embd.weight" {
|
||||||
// these tensors are always F32
|
// these tensors are always F32
|
||||||
return 0
|
return tensorKindFP32
|
||||||
}
|
}
|
||||||
|
|
||||||
switch len(t.shape) {
|
switch len(t.shape) {
|
||||||
case 0:
|
case 0:
|
||||||
panic("invalid tensor shape")
|
panic("invalid tensor shape")
|
||||||
case 1:
|
case 1:
|
||||||
return tensorKindF32
|
return tensorKindFP32
|
||||||
default:
|
default:
|
||||||
return tensorKindF16
|
return tensorKindFP16
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (t *tensorBase) SetRepacker(fn repacker) {
|
func (t *tensorBase) SetRepacker(fn Repacker) {
|
||||||
t.repacker = fn
|
t.repacker = fn
|
||||||
}
|
}
|
||||||
|
|
||||||
type repacker func(string, []float32, []uint64) ([]float32, error)
|
type Repacker func(string, []float32, []uint64) ([]float32, error)
|
||||||
|
|
||||||
func parseTensors(fsys fs.FS, replacer *strings.Replacer) ([]Tensor, error) {
|
func parseTensors(fsys fs.FS, replacer *strings.Replacer) ([]Tensor, error) {
|
||||||
patterns := []struct {
|
patterns := []struct {
|
||||||
Pattern string
|
Pattern string
|
||||||
Func func(fs.FS, *strings.Replacer, ...string) ([]Tensor, error)
|
Func func(fs.FS, *strings.Replacer, ...string) ([]Tensor, error)
|
||||||
}{
|
}{
|
||||||
{"model-*-of-*.safetensors", parseSafetensors},
|
{"*.safetensors", parseSafetensors},
|
||||||
{"model.safetensors", parseSafetensors},
|
|
||||||
{"adapters.safetensors", parseSafetensors},
|
|
||||||
{"adapter_model.safetensors", parseSafetensors},
|
|
||||||
{"pytorch_model-*-of-*.bin", parseTorch},
|
{"pytorch_model-*-of-*.bin", parseTorch},
|
||||||
{"pytorch_model.bin", parseTorch},
|
{"pytorch_model.bin", parseTorch},
|
||||||
{"consolidated.*.pth", parseTorch},
|
{"consolidated.*.pth", parseTorch},
|
||||||
|
|||||||
@@ -1,6 +1,7 @@
|
|||||||
package convert
|
package convert
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"bufio"
|
||||||
"bytes"
|
"bytes"
|
||||||
"encoding/binary"
|
"encoding/binary"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
@@ -8,12 +9,12 @@ import (
|
|||||||
"fmt"
|
"fmt"
|
||||||
"io"
|
"io"
|
||||||
"io/fs"
|
"io/fs"
|
||||||
|
"maps"
|
||||||
"slices"
|
"slices"
|
||||||
"strings"
|
"strings"
|
||||||
|
|
||||||
"github.com/d4l3k/go-bfloat16"
|
"github.com/d4l3k/go-bfloat16"
|
||||||
"github.com/x448/float16"
|
"github.com/x448/float16"
|
||||||
"golang.org/x/exp/maps"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
type safetensorMetadata struct {
|
type safetensorMetadata struct {
|
||||||
@@ -46,8 +47,7 @@ func parseSafetensors(fsys fs.FS, replacer *strings.Replacer, ps ...string) ([]T
|
|||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
keys := maps.Keys(headers)
|
keys := slices.Sorted(maps.Keys(headers))
|
||||||
slices.Sort(keys)
|
|
||||||
|
|
||||||
names := make(map[string]struct{}, len(keys))
|
names := make(map[string]struct{}, len(keys))
|
||||||
|
|
||||||
@@ -94,6 +94,30 @@ type safetensor struct {
|
|||||||
*tensorBase
|
*tensorBase
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (st safetensor) Kind() uint32 {
|
||||||
|
kind := st.tensorBase.Kind()
|
||||||
|
if !strings.HasPrefix(st.name, "v.") && st.dtype == "BF16" && kind != tensorKindFP32 {
|
||||||
|
kind = tensorKindBF16
|
||||||
|
}
|
||||||
|
|
||||||
|
return kind
|
||||||
|
}
|
||||||
|
|
||||||
|
func (st safetensor) Clone() Tensor {
|
||||||
|
return &safetensor{
|
||||||
|
fs: st.fs,
|
||||||
|
path: st.path,
|
||||||
|
dtype: st.dtype,
|
||||||
|
offset: st.offset,
|
||||||
|
size: st.size,
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: st.name,
|
||||||
|
repacker: st.repacker,
|
||||||
|
shape: slices.Clone(st.shape),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func (st safetensor) WriteTo(w io.Writer) (int64, error) {
|
func (st safetensor) WriteTo(w io.Writer) (int64, error) {
|
||||||
f, err := st.fs.Open(st.path)
|
f, err := st.fs.Open(st.path)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -101,26 +125,41 @@ func (st safetensor) WriteTo(w io.Writer) (int64, error) {
|
|||||||
}
|
}
|
||||||
defer f.Close()
|
defer f.Close()
|
||||||
|
|
||||||
if seeker, ok := f.(io.Seeker); ok {
|
r, err := func() (io.Reader, error) {
|
||||||
if _, err := seeker.Seek(st.offset, io.SeekStart); err != nil {
|
if readerAt, ok := f.(io.ReaderAt); ok {
|
||||||
return 0, err
|
return io.NewSectionReader(readerAt, st.offset, st.size), nil
|
||||||
}
|
} else if seeker, ok := f.(io.Seeker); ok {
|
||||||
|
_, err := seeker.Seek(st.offset, io.SeekStart)
|
||||||
|
return f, err
|
||||||
} else {
|
} else {
|
||||||
if _, err := io.CopyN(io.Discard, f, st.offset); err != nil {
|
_, err := io.CopyN(io.Discard, f, st.offset)
|
||||||
|
return f, err
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
if err != nil {
|
||||||
return 0, err
|
return 0, err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
br := bufio.NewReaderSize(r, min(32<<10, int(st.size)))
|
||||||
|
// special case when input and output are same type and the
|
||||||
|
// tensor doesn't need repacking
|
||||||
|
if (st.repacker == nil) &&
|
||||||
|
((st.dtype == "F32" && st.Kind() == tensorKindFP32) ||
|
||||||
|
(st.dtype == "F16" && st.Kind() == tensorKindFP16) ||
|
||||||
|
(st.dtype == "U8")) {
|
||||||
|
return io.CopyN(w, br, st.size)
|
||||||
}
|
}
|
||||||
|
|
||||||
var f32s []float32
|
var f32s []float32
|
||||||
switch st.dtype {
|
switch st.dtype {
|
||||||
case "F32":
|
case "F32":
|
||||||
f32s = make([]float32, st.size/4)
|
f32s = make([]float32, st.size/4)
|
||||||
if err = binary.Read(f, binary.LittleEndian, f32s); err != nil {
|
if err = binary.Read(br, binary.LittleEndian, f32s); err != nil {
|
||||||
return 0, err
|
return 0, err
|
||||||
}
|
}
|
||||||
case "F16":
|
case "F16":
|
||||||
u16s := make([]uint16, st.size/2)
|
u16s := make([]uint16, st.size/2)
|
||||||
if err = binary.Read(f, binary.LittleEndian, u16s); err != nil {
|
if err = binary.Read(br, binary.LittleEndian, u16s); err != nil {
|
||||||
return 0, err
|
return 0, err
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -131,7 +170,7 @@ func (st safetensor) WriteTo(w io.Writer) (int64, error) {
|
|||||||
|
|
||||||
case "BF16":
|
case "BF16":
|
||||||
u8s := make([]uint8, st.size)
|
u8s := make([]uint8, st.size)
|
||||||
if err = binary.Read(f, binary.LittleEndian, u8s); err != nil {
|
if err = binary.Read(br, binary.LittleEndian, u8s); err != nil {
|
||||||
return 0, err
|
return 0, err
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -148,15 +187,18 @@ func (st safetensor) WriteTo(w io.Writer) (int64, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
switch st.Kind() {
|
switch st.Kind() {
|
||||||
case tensorKindF32:
|
case tensorKindFP32:
|
||||||
return 0, binary.Write(w, binary.LittleEndian, f32s)
|
return int64(len(f32s) * 4), binary.Write(w, binary.LittleEndian, f32s)
|
||||||
case tensorKindF16:
|
case tensorKindFP16:
|
||||||
f16s := make([]uint16, len(f32s))
|
f16s := make([]uint16, len(f32s))
|
||||||
for i := range f32s {
|
for i := range f32s {
|
||||||
f16s[i] = float16.Fromfloat32(f32s[i]).Bits()
|
f16s[i] = float16.Fromfloat32(f32s[i]).Bits()
|
||||||
}
|
}
|
||||||
|
|
||||||
return 0, binary.Write(w, binary.LittleEndian, f16s)
|
return int64(len(f16s) * 2), binary.Write(w, binary.LittleEndian, f16s)
|
||||||
|
case tensorKindBF16:
|
||||||
|
u8s := bfloat16.EncodeFloat32(f32s)
|
||||||
|
return int64(len(u8s)), binary.Write(w, binary.LittleEndian, u8s)
|
||||||
default:
|
default:
|
||||||
return 0, fmt.Errorf("unknown storage type: %d", st.Kind())
|
return 0, fmt.Errorf("unknown storage type: %d", st.Kind())
|
||||||
}
|
}
|
||||||
|
|||||||
294
convert/reader_test.go
Normal file
294
convert/reader_test.go
Normal file
@@ -0,0 +1,294 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"encoding/binary"
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/d4l3k/go-bfloat16"
|
||||||
|
"github.com/google/go-cmp/cmp"
|
||||||
|
"github.com/x448/float16"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestSafetensors(t *testing.T) {
|
||||||
|
t.Parallel()
|
||||||
|
|
||||||
|
root, err := os.OpenRoot(t.TempDir())
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
defer root.Close()
|
||||||
|
|
||||||
|
cases := []struct {
|
||||||
|
name,
|
||||||
|
dtype string
|
||||||
|
offset,
|
||||||
|
size int64
|
||||||
|
shape []uint64
|
||||||
|
setup func(*testing.T, *os.File)
|
||||||
|
want []byte
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "fp32-fp32",
|
||||||
|
dtype: "F32",
|
||||||
|
size: 32 * 4, // 32 floats, each 4 bytes
|
||||||
|
shape: []uint64{32},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
f32s := make([]float32, 32)
|
||||||
|
for i := range f32s {
|
||||||
|
f32s[i] = float32(i)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x3f, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x40, 0x40,
|
||||||
|
0x00, 0x00, 0x80, 0x40, 0x00, 0x00, 0xa0, 0x40, 0x00, 0x00, 0xc0, 0x40, 0x00, 0x00, 0xe0, 0x40,
|
||||||
|
0x00, 0x00, 0x00, 0x41, 0x00, 0x00, 0x10, 0x41, 0x00, 0x00, 0x20, 0x41, 0x00, 0x00, 0x30, 0x41,
|
||||||
|
0x00, 0x00, 0x40, 0x41, 0x00, 0x00, 0x50, 0x41, 0x00, 0x00, 0x60, 0x41, 0x00, 0x00, 0x70, 0x41,
|
||||||
|
0x00, 0x00, 0x80, 0x41, 0x00, 0x00, 0x88, 0x41, 0x00, 0x00, 0x90, 0x41, 0x00, 0x00, 0x98, 0x41,
|
||||||
|
0x00, 0x00, 0xa0, 0x41, 0x00, 0x00, 0xa8, 0x41, 0x00, 0x00, 0xb0, 0x41, 0x00, 0x00, 0xb8, 0x41,
|
||||||
|
0x00, 0x00, 0xc0, 0x41, 0x00, 0x00, 0xc8, 0x41, 0x00, 0x00, 0xd0, 0x41, 0x00, 0x00, 0xd8, 0x41,
|
||||||
|
0x00, 0x00, 0xe0, 0x41, 0x00, 0x00, 0xe8, 0x41, 0x00, 0x00, 0xf0, 0x41, 0x00, 0x00, 0xf8, 0x41,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "fp32-fp16",
|
||||||
|
dtype: "F32",
|
||||||
|
size: 32 * 4, // 32 floats, each 4 bytes
|
||||||
|
shape: []uint64{16, 2},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
f32s := make([]float32, 32)
|
||||||
|
for i := range f32s {
|
||||||
|
f32s[i] = float32(i)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x00, 0x3c, 0x00, 0x40, 0x00, 0x42, 0x00, 0x44, 0x00, 0x45, 0x00, 0x46, 0x00, 0x47,
|
||||||
|
0x00, 0x48, 0x80, 0x48, 0x00, 0x49, 0x80, 0x49, 0x00, 0x4a, 0x80, 0x4a, 0x00, 0x4b, 0x80, 0x4b,
|
||||||
|
0x00, 0x4c, 0x40, 0x4c, 0x80, 0x4c, 0xc0, 0x4c, 0x00, 0x4d, 0x40, 0x4d, 0x80, 0x4d, 0xc0, 0x4d,
|
||||||
|
0x00, 0x4e, 0x40, 0x4e, 0x80, 0x4e, 0xc0, 0x4e, 0x00, 0x4f, 0x40, 0x4f, 0x80, 0x4f, 0xc0, 0x4f,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "fp16-fp16",
|
||||||
|
dtype: "F16",
|
||||||
|
size: 32 * 2, // 32 floats, each 2 bytes
|
||||||
|
shape: []uint64{16, 2},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
u16s := make([]uint16, 32)
|
||||||
|
for i := range u16s {
|
||||||
|
u16s[i] = float16.Fromfloat32(float32(i)).Bits()
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, u16s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x00, 0x3c, 0x00, 0x40, 0x00, 0x42, 0x00, 0x44, 0x00, 0x45, 0x00, 0x46, 0x00, 0x47,
|
||||||
|
0x00, 0x48, 0x80, 0x48, 0x00, 0x49, 0x80, 0x49, 0x00, 0x4a, 0x80, 0x4a, 0x00, 0x4b, 0x80, 0x4b,
|
||||||
|
0x00, 0x4c, 0x40, 0x4c, 0x80, 0x4c, 0xc0, 0x4c, 0x00, 0x4d, 0x40, 0x4d, 0x80, 0x4d, 0xc0, 0x4d,
|
||||||
|
0x00, 0x4e, 0x40, 0x4e, 0x80, 0x4e, 0xc0, 0x4e, 0x00, 0x4f, 0x40, 0x4f, 0x80, 0x4f, 0xc0, 0x4f,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "fp16-fp32",
|
||||||
|
dtype: "F16",
|
||||||
|
size: 32 * 2, // 32 floats, each 2 bytes
|
||||||
|
shape: []uint64{32},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
u16s := make([]uint16, 32)
|
||||||
|
for i := range u16s {
|
||||||
|
u16s[i] = float16.Fromfloat32(float32(i)).Bits()
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, u16s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x3f, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x40, 0x40,
|
||||||
|
0x00, 0x00, 0x80, 0x40, 0x00, 0x00, 0xa0, 0x40, 0x00, 0x00, 0xc0, 0x40, 0x00, 0x00, 0xe0, 0x40,
|
||||||
|
0x00, 0x00, 0x00, 0x41, 0x00, 0x00, 0x10, 0x41, 0x00, 0x00, 0x20, 0x41, 0x00, 0x00, 0x30, 0x41,
|
||||||
|
0x00, 0x00, 0x40, 0x41, 0x00, 0x00, 0x50, 0x41, 0x00, 0x00, 0x60, 0x41, 0x00, 0x00, 0x70, 0x41,
|
||||||
|
0x00, 0x00, 0x80, 0x41, 0x00, 0x00, 0x88, 0x41, 0x00, 0x00, 0x90, 0x41, 0x00, 0x00, 0x98, 0x41,
|
||||||
|
0x00, 0x00, 0xa0, 0x41, 0x00, 0x00, 0xa8, 0x41, 0x00, 0x00, 0xb0, 0x41, 0x00, 0x00, 0xb8, 0x41,
|
||||||
|
0x00, 0x00, 0xc0, 0x41, 0x00, 0x00, 0xc8, 0x41, 0x00, 0x00, 0xd0, 0x41, 0x00, 0x00, 0xd8, 0x41,
|
||||||
|
0x00, 0x00, 0xe0, 0x41, 0x00, 0x00, 0xe8, 0x41, 0x00, 0x00, 0xf0, 0x41, 0x00, 0x00, 0xf8, 0x41,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "bf16-bf16",
|
||||||
|
dtype: "BF16",
|
||||||
|
size: 32 * 2, // 32 brain floats, each 2 bytes
|
||||||
|
shape: []uint64{16, 2},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
f32s := make([]float32, 32)
|
||||||
|
for i := range f32s {
|
||||||
|
f32s[i] = float32(i)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, bfloat16.EncodeFloat32(f32s)); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x80, 0x3f, 0x00, 0x40, 0x40, 0x40, 0x80, 0x40, 0xa0, 0x40, 0xc0, 0x40, 0xe0, 0x40,
|
||||||
|
0x00, 0x41, 0x10, 0x41, 0x20, 0x41, 0x30, 0x41, 0x40, 0x41, 0x50, 0x41, 0x60, 0x41, 0x70, 0x41,
|
||||||
|
0x80, 0x41, 0x88, 0x41, 0x90, 0x41, 0x98, 0x41, 0xa0, 0x41, 0xa8, 0x41, 0xb0, 0x41, 0xb8, 0x41,
|
||||||
|
0xc0, 0x41, 0xc8, 0x41, 0xd0, 0x41, 0xd8, 0x41, 0xe0, 0x41, 0xe8, 0x41, 0xf0, 0x41, 0xf8, 0x41,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "bf16-fp32",
|
||||||
|
dtype: "BF16",
|
||||||
|
size: 32 * 2, // 32 brain floats, each 2 bytes
|
||||||
|
shape: []uint64{32},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
f32s := make([]float32, 32)
|
||||||
|
for i := range f32s {
|
||||||
|
f32s[i] = float32(i)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, bfloat16.EncodeFloat32(f32s)); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x3f, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x40, 0x40,
|
||||||
|
0x00, 0x00, 0x80, 0x40, 0x00, 0x00, 0xa0, 0x40, 0x00, 0x00, 0xc0, 0x40, 0x00, 0x00, 0xe0, 0x40,
|
||||||
|
0x00, 0x00, 0x00, 0x41, 0x00, 0x00, 0x10, 0x41, 0x00, 0x00, 0x20, 0x41, 0x00, 0x00, 0x30, 0x41,
|
||||||
|
0x00, 0x00, 0x40, 0x41, 0x00, 0x00, 0x50, 0x41, 0x00, 0x00, 0x60, 0x41, 0x00, 0x00, 0x70, 0x41,
|
||||||
|
0x00, 0x00, 0x80, 0x41, 0x00, 0x00, 0x88, 0x41, 0x00, 0x00, 0x90, 0x41, 0x00, 0x00, 0x98, 0x41,
|
||||||
|
0x00, 0x00, 0xa0, 0x41, 0x00, 0x00, 0xa8, 0x41, 0x00, 0x00, 0xb0, 0x41, 0x00, 0x00, 0xb8, 0x41,
|
||||||
|
0x00, 0x00, 0xc0, 0x41, 0x00, 0x00, 0xc8, 0x41, 0x00, 0x00, 0xd0, 0x41, 0x00, 0x00, 0xd8, 0x41,
|
||||||
|
0x00, 0x00, 0xe0, 0x41, 0x00, 0x00, 0xe8, 0x41, 0x00, 0x00, 0xf0, 0x41, 0x00, 0x00, 0xf8, 0x41,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "u8-u8",
|
||||||
|
dtype: "U8",
|
||||||
|
size: 32, // 32 brain floats, each 1 bytes
|
||||||
|
shape: []uint64{32},
|
||||||
|
setup: func(t *testing.T, f *os.File) {
|
||||||
|
u8s := make([]uint8, 32)
|
||||||
|
for i := range u8s {
|
||||||
|
u8s[i] = uint8(i)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(f, binary.LittleEndian, u8s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
},
|
||||||
|
want: []byte{
|
||||||
|
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
|
||||||
|
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range cases {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
path := filepath.Base(t.Name())
|
||||||
|
st := safetensor{
|
||||||
|
fs: root.FS(),
|
||||||
|
path: path,
|
||||||
|
dtype: tt.dtype,
|
||||||
|
offset: tt.offset,
|
||||||
|
size: tt.size,
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: tt.name,
|
||||||
|
shape: tt.shape,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
f, err := root.Create(path)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
defer f.Close()
|
||||||
|
|
||||||
|
tt.setup(t, f)
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := st.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.want, b.Bytes()); diff != "" {
|
||||||
|
t.Errorf("safetensor.WriteTo() mismatch (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestSafetensorKind(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
st safetensor
|
||||||
|
expected uint32
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "BF16 dtype with non-v. prefix and non-FP32 base kind should return BF16",
|
||||||
|
st: safetensor{
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: "weight.matrix",
|
||||||
|
shape: []uint64{10, 10}, // will default to FP16
|
||||||
|
},
|
||||||
|
dtype: "BF16",
|
||||||
|
},
|
||||||
|
expected: tensorKindBF16,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "BF16 dtype with v. prefix should return base kind",
|
||||||
|
st: safetensor{
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: "v.weight.matrix",
|
||||||
|
shape: []uint64{10, 10}, // will default to FP16
|
||||||
|
},
|
||||||
|
dtype: "BF16",
|
||||||
|
},
|
||||||
|
expected: tensorKindFP16,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "BF16 dtype with FP32 base kind should return FP32",
|
||||||
|
st: safetensor{
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: "weight.matrix",
|
||||||
|
shape: []uint64{10}, // will default to FP32
|
||||||
|
},
|
||||||
|
dtype: "BF16",
|
||||||
|
},
|
||||||
|
expected: tensorKindFP32,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "Non-BF16 dtype should return base kind",
|
||||||
|
st: safetensor{
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: "weight.matrix",
|
||||||
|
shape: []uint64{10, 10}, // will default to FP16
|
||||||
|
},
|
||||||
|
dtype: "FP16",
|
||||||
|
},
|
||||||
|
expected: tensorKindFP16,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
result := tt.st.Kind()
|
||||||
|
if result != tt.expected {
|
||||||
|
t.Errorf("Kind() = %d, expected %d", result, tt.expected)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -43,6 +43,17 @@ type torch struct {
|
|||||||
*tensorBase
|
*tensorBase
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (t torch) Clone() Tensor {
|
||||||
|
return torch{
|
||||||
|
storage: t.storage,
|
||||||
|
tensorBase: &tensorBase{
|
||||||
|
name: t.name,
|
||||||
|
shape: t.shape,
|
||||||
|
repacker: t.repacker,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func (pt torch) WriteTo(w io.Writer) (int64, error) {
|
func (pt torch) WriteTo(w io.Writer) (int64, error) {
|
||||||
return 0, nil
|
return 0, nil
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -331,7 +331,7 @@ type TrainerSpec struct {
|
|||||||
// Reserved special meta tokens.
|
// Reserved special meta tokens.
|
||||||
// * -1 is not used.
|
// * -1 is not used.
|
||||||
// * unk_id must not be -1.
|
// * unk_id must not be -1.
|
||||||
// Id must starts with 0 and be contigous.
|
// Id must start with 0 and be contiguous.
|
||||||
UnkId *int32 `protobuf:"varint,40,opt,name=unk_id,json=unkId,def=0" json:"unk_id,omitempty"` // <unk>
|
UnkId *int32 `protobuf:"varint,40,opt,name=unk_id,json=unkId,def=0" json:"unk_id,omitempty"` // <unk>
|
||||||
BosId *int32 `protobuf:"varint,41,opt,name=bos_id,json=bosId,def=1" json:"bos_id,omitempty"` // <s>
|
BosId *int32 `protobuf:"varint,41,opt,name=bos_id,json=bosId,def=1" json:"bos_id,omitempty"` // <s>
|
||||||
EosId *int32 `protobuf:"varint,42,opt,name=eos_id,json=eosId,def=2" json:"eos_id,omitempty"` // </s>
|
EosId *int32 `protobuf:"varint,42,opt,name=eos_id,json=eosId,def=2" json:"eos_id,omitempty"` // </s>
|
||||||
@@ -1360,7 +1360,7 @@ func file_sentencepiece_model_proto_rawDescGZIP() []byte {
|
|||||||
|
|
||||||
var file_sentencepiece_model_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
|
var file_sentencepiece_model_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
|
||||||
var file_sentencepiece_model_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
|
var file_sentencepiece_model_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
|
||||||
var file_sentencepiece_model_proto_goTypes = []interface{}{
|
var file_sentencepiece_model_proto_goTypes = []any{
|
||||||
(TrainerSpec_ModelType)(0), // 0: sentencepiece.TrainerSpec.ModelType
|
(TrainerSpec_ModelType)(0), // 0: sentencepiece.TrainerSpec.ModelType
|
||||||
(ModelProto_SentencePiece_Type)(0), // 1: sentencepiece.ModelProto.SentencePiece.Type
|
(ModelProto_SentencePiece_Type)(0), // 1: sentencepiece.ModelProto.SentencePiece.Type
|
||||||
(*TrainerSpec)(nil), // 2: sentencepiece.TrainerSpec
|
(*TrainerSpec)(nil), // 2: sentencepiece.TrainerSpec
|
||||||
@@ -1392,7 +1392,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
if !protoimpl.UnsafeEnabled {
|
if !protoimpl.UnsafeEnabled {
|
||||||
file_sentencepiece_model_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[0].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*TrainerSpec); i {
|
switch v := v.(*TrainerSpec); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
@@ -1406,7 +1406,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
file_sentencepiece_model_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[1].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*NormalizerSpec); i {
|
switch v := v.(*NormalizerSpec); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
@@ -1420,7 +1420,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
file_sentencepiece_model_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[2].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*SelfTestData); i {
|
switch v := v.(*SelfTestData); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
@@ -1434,7 +1434,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
file_sentencepiece_model_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[3].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*ModelProto); i {
|
switch v := v.(*ModelProto); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
@@ -1448,7 +1448,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
file_sentencepiece_model_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[4].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*SelfTestData_Sample); i {
|
switch v := v.(*SelfTestData_Sample); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
@@ -1460,7 +1460,7 @@ func file_sentencepiece_model_proto_init() {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
file_sentencepiece_model_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
|
file_sentencepiece_model_proto_msgTypes[5].Exporter = func(v any, i int) any {
|
||||||
switch v := v.(*ModelProto_SentencePiece); i {
|
switch v := v.(*ModelProto_SentencePiece); i {
|
||||||
case 0:
|
case 0:
|
||||||
return &v.state
|
return &v.state
|
||||||
|
|||||||
@@ -213,7 +213,7 @@ message TrainerSpec {
|
|||||||
// Reserved special meta tokens.
|
// Reserved special meta tokens.
|
||||||
// * -1 is not used.
|
// * -1 is not used.
|
||||||
// * unk_id must not be -1.
|
// * unk_id must not be -1.
|
||||||
// Id must starts with 0 and be contigous.
|
// Id must start with 0 and be contiguous.
|
||||||
optional int32 unk_id = 40 [default = 0]; // <unk>
|
optional int32 unk_id = 40 [default = 0]; // <unk>
|
||||||
optional int32 bos_id = 41 [default = 1]; // <s>
|
optional int32 bos_id = 41 [default = 1]; // <s>
|
||||||
optional int32 eos_id = 42 [default = 2]; // </s>
|
optional int32 eos_id = 42 [default = 2]; // </s>
|
||||||
|
|||||||
133
convert/tensor.go
Normal file
133
convert/tensor.go
Normal file
@@ -0,0 +1,133 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"cmp"
|
||||||
|
"io"
|
||||||
|
"iter"
|
||||||
|
"path"
|
||||||
|
"slices"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/pdevine/tensor"
|
||||||
|
"github.com/pdevine/tensor/native"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
)
|
||||||
|
|
||||||
|
type split struct {
|
||||||
|
*strings.Replacer
|
||||||
|
dim int
|
||||||
|
slices []tensor.Slice
|
||||||
|
|
||||||
|
// fn is an optional function to apply to the tensor after slicing
|
||||||
|
fn func(tensor.Tensor) (tensor.Tensor, error)
|
||||||
|
}
|
||||||
|
|
||||||
|
// splitDim splits a tensor along a specified dimension into multiple tensors. The dimension
|
||||||
|
// is split evenly based on the number of replacers provided unless a specific count is given.
|
||||||
|
func splitDim(t Tensor, dim int, splits ...split) iter.Seq[*ggml.Tensor] {
|
||||||
|
return func(yield func(*ggml.Tensor) bool) {
|
||||||
|
var offset int
|
||||||
|
for _, split := range splits {
|
||||||
|
t := t.Clone()
|
||||||
|
shape := slices.Clone(t.Shape())
|
||||||
|
shape[dim] = cmp.Or(uint64(split.dim), shape[dim]/uint64(len(splits)))
|
||||||
|
|
||||||
|
slice := split.slices
|
||||||
|
if len(slice) == 0 {
|
||||||
|
slice = slices.Repeat([]tensor.Slice{nil}, len(shape))
|
||||||
|
slice[dim] = tensor.S(offset, offset+int(shape[dim]))
|
||||||
|
offset += int(shape[dim])
|
||||||
|
}
|
||||||
|
|
||||||
|
t.SetRepacker(func(_ string, data []float32, shape []uint64) ([]float32, error) {
|
||||||
|
dims := make([]int, len(shape))
|
||||||
|
for i := range shape {
|
||||||
|
dims[i] = int(shape[i])
|
||||||
|
}
|
||||||
|
|
||||||
|
var tt tensor.Tensor = tensor.New(tensor.WithShape(dims...), tensor.WithBacking(data))
|
||||||
|
tt, err := tt.Slice(slice...)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
tt = tensor.Materialize(tt)
|
||||||
|
|
||||||
|
if split.fn != nil {
|
||||||
|
tt, err = split.fn(tt)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// flatten tensor so it can be written as a vector
|
||||||
|
if err := tt.Reshape(tt.Shape().TotalSize()); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return native.VectorF32(tt.(*tensor.Dense))
|
||||||
|
})
|
||||||
|
|
||||||
|
if !yield(&ggml.Tensor{
|
||||||
|
Name: split.Replace(t.Name()),
|
||||||
|
Kind: t.Kind(),
|
||||||
|
Shape: shape,
|
||||||
|
WriterTo: t,
|
||||||
|
}) {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type merge struct {
|
||||||
|
pattern, name string
|
||||||
|
}
|
||||||
|
|
||||||
|
// mergeTensors merges tensors that match a given pattern into a single tensor.
|
||||||
|
func mergeTensors(unmatched []Tensor, merges ...merge) (out []*ggml.Tensor, _ []Tensor) {
|
||||||
|
var matched []Tensor
|
||||||
|
for i := range merges {
|
||||||
|
matched, unmatched = slicesSplitFunc(unmatched, func(t Tensor) bool {
|
||||||
|
matched, _ := path.Match(merges[i].pattern, t.Name())
|
||||||
|
return matched
|
||||||
|
})
|
||||||
|
|
||||||
|
if len(matched) > 0 {
|
||||||
|
out = append(out, &ggml.Tensor{
|
||||||
|
Name: merges[i].name,
|
||||||
|
Kind: matched[0].Kind(),
|
||||||
|
Shape: append([]uint64{uint64(len(matched))}, matched[0].Shape()...),
|
||||||
|
WriterTo: mergeGroup(matched),
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return out, unmatched
|
||||||
|
}
|
||||||
|
|
||||||
|
// slicesSplitFunc splits a slice into two slices based on a predicate function.
|
||||||
|
func slicesSplitFunc[S ~[]E, E comparable](s S, fn func(e E) bool) (matched, unmatched S) {
|
||||||
|
for _, e := range s {
|
||||||
|
if fn(e) {
|
||||||
|
matched = append(matched, e)
|
||||||
|
} else {
|
||||||
|
unmatched = append(unmatched, e)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return matched, unmatched
|
||||||
|
}
|
||||||
|
|
||||||
|
type mergeGroup []Tensor
|
||||||
|
|
||||||
|
func (g mergeGroup) WriteTo(w io.Writer) (int64, error) {
|
||||||
|
for _, t := range g {
|
||||||
|
if _, err := t.WriteTo(w); err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0, nil
|
||||||
|
}
|
||||||
953
convert/tensor_test.go
Normal file
953
convert/tensor_test.go
Normal file
@@ -0,0 +1,953 @@
|
|||||||
|
package convert
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"encoding/binary"
|
||||||
|
"io"
|
||||||
|
"iter"
|
||||||
|
"slices"
|
||||||
|
"strings"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/google/go-cmp/cmp"
|
||||||
|
"github.com/ollama/ollama/fs/ggml"
|
||||||
|
"github.com/pdevine/tensor"
|
||||||
|
)
|
||||||
|
|
||||||
|
type fakeTensor struct {
|
||||||
|
name string
|
||||||
|
shape []uint64
|
||||||
|
data []float32
|
||||||
|
|
||||||
|
repacker Repacker
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f fakeTensor) Name() string {
|
||||||
|
return f.name
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f fakeTensor) Shape() []uint64 {
|
||||||
|
return f.shape
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f fakeTensor) Kind() uint32 {
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f *fakeTensor) SetRepacker(fn Repacker) {
|
||||||
|
f.repacker = fn
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f fakeTensor) Clone() Tensor {
|
||||||
|
return &fakeTensor{
|
||||||
|
name: f.name,
|
||||||
|
shape: slices.Clone(f.shape),
|
||||||
|
data: slices.Clone(f.data),
|
||||||
|
repacker: f.repacker,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (f fakeTensor) WriteTo(w io.Writer) (n int64, err error) {
|
||||||
|
data := f.data
|
||||||
|
if f.repacker != nil {
|
||||||
|
data, err = f.repacker(f.name, data, f.shape)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := binary.Write(w, binary.LittleEndian, data); err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return int64(len(data) * 4), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func mul(shape []uint64) int {
|
||||||
|
n := 1
|
||||||
|
for _, dim := range shape {
|
||||||
|
n *= int(dim)
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestSplitDim(t *testing.T) {
|
||||||
|
t.Run("2d", func(t *testing.T) {
|
||||||
|
r := fakeTensor{
|
||||||
|
name: "a.b",
|
||||||
|
shape: []uint64{3, 4},
|
||||||
|
data: []float32{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11},
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Run("no split", func(t *testing.T) {
|
||||||
|
for tt := range splitDim(&r, 0, split{Replacer: strings.NewReplacer("a", "x")}) {
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatalf("expected name 'x', got '%s'", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("even split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 1,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x")},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y")},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 4, 5, 8, 9}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'a.y', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{2, 3, 6, 7, 10, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("uneven split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 0,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 2},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{2, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 4, 5, 6, 7}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'a.y', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{8, 9, 10, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("three way split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 0,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "z"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{4, 5, 6, 7}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.z" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{8, 9, 10, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("uneven three way split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 1,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 2},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "z"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 4, 5, 8, 9}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 1}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{2, 6, 10}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.z" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 1}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{3, 7, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("split with transpose", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 1,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x")},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), fn: func(tt tensor.Tensor) (tensor.Tensor, error) {
|
||||||
|
return tensor.Transpose(tt, 1, 0)
|
||||||
|
}},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 4, 5, 8, 9}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'a.y', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{2, 6, 10, 3, 7, 11}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
})
|
||||||
|
t.Run("3d", func(t *testing.T) {
|
||||||
|
r := fakeTensor{
|
||||||
|
name: "a.b",
|
||||||
|
shape: []uint64{3, 4, 2},
|
||||||
|
data: []float32{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23},
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Run("no split", func(t *testing.T) {
|
||||||
|
for tt := range splitDim(&r, 0, split{Replacer: strings.NewReplacer("a", "x")}) {
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatalf("expected name 'x', got '%s'", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("even split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 1,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x")},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y")},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'a.y', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("uneven split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 0,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 2},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{2, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'a.y', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{16, 17, 18, 19, 20, 21, 22, 23}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("three way split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 0,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "z"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 4, 5, 6, 7}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{8, 9, 10, 11, 12, 13, 14, 15}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.z" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{1, 4, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{16, 17, 18, 19, 20, 21, 22, 23}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("uneven three way split", func(t *testing.T) {
|
||||||
|
next, stop := iter.Pull(splitDim(&r, 1,
|
||||||
|
split{Replacer: strings.NewReplacer("a", "x"), dim: 2},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "y"), dim: 1},
|
||||||
|
split{Replacer: strings.NewReplacer("b", "z"), dim: 1},
|
||||||
|
))
|
||||||
|
defer stop()
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "x.b" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 2, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.y" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 1, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{4, 5, 12, 13, 20, 21}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
{
|
||||||
|
tt, ok := next()
|
||||||
|
if !ok {
|
||||||
|
t.Fatal("expected at least one split")
|
||||||
|
}
|
||||||
|
|
||||||
|
if tt.Name != "a.z" {
|
||||||
|
t.Fatal("expected name 'x.b', got", tt.Name)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(tt.Shape, []uint64{3, 1, 2}); diff != "" {
|
||||||
|
t.Errorf("unexpected shape (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
|
||||||
|
var b bytes.Buffer
|
||||||
|
if _, err := tt.WriteTo(&b); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
f32s := make([]float32, mul(tt.Shape))
|
||||||
|
if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if diff := cmp.Diff(f32s, []float32{6, 7, 14, 15, 22, 23}); diff != "" {
|
||||||
|
t.Errorf("unexpected data (-want +got):\n%s", diff)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
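Aside (not part of the original diff): the expected slices in the subtests above follow from the fake tensor's flat, row-major data layout, so splitting on the last dimension just selects a column range from every row. A small, self-contained Go sketch of that arithmetic for the 3x4 "2d" case; the slice bounds here are illustrative, not taken from the convert package:

package main

import "fmt"

func main() {
	rows, cols := 3, 4
	data := make([]float32, rows*cols)
	for i := range data {
		data[i] = float32(i)
	}

	// take columns [lo, hi) of every row from the row-major buffer
	slice := func(lo, hi int) []float32 {
		var out []float32
		for r := 0; r < rows; r++ {
			out = append(out, data[r*cols+lo:r*cols+hi]...)
		}
		return out
	}

	fmt.Println(slice(0, 2)) // [0 1 4 5 8 9]   - first half, as expected for "x.b"
	fmt.Println(slice(2, 4)) // [2 3 6 7 10 11] - second half, as expected for "a.y"
}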
func TestMerge(t *testing.T) {
	unmatched := []Tensor{
		&fakeTensor{
			name: "a.0.b",
			shape: []uint64{5, 2},
			data: []float32{10, 11, 12, 13, 14, 15, 16, 17, 18, 19},
		},
		&fakeTensor{
			name: "a.1.b",
			shape: []uint64{5, 2},
			data: []float32{20, 21, 22, 23, 24, 25, 26, 27, 28, 29},
		},
		&fakeTensor{
			name: "c.0.d",
			shape: []uint64{5, 2},
			data: []float32{30, 31, 32, 33, 34, 35, 36, 37, 38, 39},
		},
		&fakeTensor{
			name: "c.1.d",
			shape: []uint64{5, 2},
			data: []float32{40, 41, 42, 43, 44, 45, 46, 47, 48, 49},
		},
		&fakeTensor{
			name: "e.0.f",
			shape: []uint64{5, 2},
			data: []float32{50, 51, 52, 53, 54, 55, 56, 57, 58, 59},
		},
	}

	checkMatched := func(t *testing.T, n int, matched []*ggml.Tensor) {
		for i := range n {
			got := matched[i]
			if diff := cmp.Diff([]uint64{2, 5, 2}, got.Shape); diff != "" {
				t.Errorf("unexpected (-want +got):\n%s", diff)
			}

			var b bytes.Buffer
			if _, err := got.WriteTo(&b); err != nil {
				t.Fatal(err)
			}

			f32s := make([]float32, 20)
			if err := binary.Read(&b, binary.LittleEndian, &f32s); err != nil {
				t.Fatal(err)
			}

			// the i-th merged tensor stacks its two 5x2 sources back to back,
			// so its 20 values run 10..29 for the first merge, 30..49 for the second
			offset := 10 + (i * 20)
			want := make([]float32, 20)
			for j := range 20 {
				want[j] = float32(offset + j)
			}

			if diff := cmp.Diff(want, f32s); diff != "" {
				t.Errorf("unexpected data (-want +got):\n%s", diff)
			}
		}
	}

	t.Run("single merge", func(t *testing.T) {
		matched, unmatched := mergeTensors(unmatched, merge{"a.*.b", "a.b"})
		if len(unmatched) != 3 {
			t.Error("expected 3 remaining tensors, got", len(unmatched))
		}

		if len(matched) != 1 {
			t.Error("expected 1 merged tensor, got", len(matched))
		}

		checkMatched(t, 1, matched)
	})

	t.Run("multiple merges", func(t *testing.T) {
		matched, unmatched := mergeTensors(unmatched, merge{"a.*.b", "a.b"}, merge{"c.*.d", "c.d"})
		if len(unmatched) != 1 {
			t.Error("expected 1 remaining tensor, got", len(unmatched))
		}

		if len(matched) != 2 {
			t.Error("expected 2 merged tensors, got", len(matched))
		}

		checkMatched(t, 2, matched)
	})

	t.Run("no match", func(t *testing.T) {
		matched, unmatched := mergeTensors(unmatched, merge{"x.*.y", "x.y"})
		if len(unmatched) != 5 {
			t.Error("expected 5 remaining tensors, got", len(unmatched))
		}

		if len(matched) != 0 {
			t.Error("expected no merged tensors, got", len(matched))
		}
	})
}
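Aside (not part of the original diff): every subtest above consumes splitDim through iter.Pull and then defers stop(). A minimal, self-contained sketch of that standard-library pattern, independent of the convert package types; the integer sequence here is only illustrative:

package main

import (
	"fmt"
	"iter"
)

func main() {
	// a push-style iterator (iter.Seq), the shape splitDim is consumed as above
	var seq iter.Seq[int] = func(yield func(int) bool) {
		for _, v := range []int{1, 2, 3} {
			if !yield(v) {
				return
			}
		}
	}

	// Pull turns it into an explicit next/stop pair; stop must always be
	// called to release the backing coroutine, hence the deferred stop()
	// in each subtest.
	next, stop := iter.Pull(seq)
	defer stop()

	for {
		v, ok := next()
		if !ok {
			break
		}
		fmt.Println(v)
	}
}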
314 convert/testdata/Qwen2.5-0.5B-Instruct.json vendored Normal file
@@ -0,0 +1,314 @@
{
|
||||||
|
"general.architecture": "qwen2",
|
||||||
|
"general.file_type": "1",
|
||||||
|
"general.parameter_count": "494032768",
|
||||||
|
"general.quantization_version": "2",
|
||||||
|
"output_norm.weight": "93a01a6db3419e85320a244bbf8ae81c43033b1d10c342bea3797ff2ce348390",
|
||||||
|
"qwen2.attention.head_count": "14",
|
||||||
|
"qwen2.attention.head_count_kv": "2",
|
||||||
|
"qwen2.attention.layer_norm_rms_epsilon": "1e-06",
|
||||||
|
"qwen2.block_count": "24",
|
||||||
|
"qwen2.context_length": "32768",
|
||||||
|
"qwen2.embedding_length": "896",
|
||||||
|
"qwen2.feed_forward_length": "4864",
|
||||||
|
"qwen2.rope.freq_base": "1e+06",
|
||||||
|
"token_embd.weight": "d74257dc547b48be5ae7b93f1c9af072c0c42dbbb85503078e25c59cd09e68d0",
|
||||||
|
"tokenizer.ggml.add_eos_token": "false",
|
||||||
|
"tokenizer.ggml.add_padding_token": "false",
|
||||||
|
"tokenizer.ggml.eos_token_id": "151645",
|
||||||
|
"tokenizer.ggml.merges": "6b1b1c58f1223d74f9095929d3e6416cdd74784440221a5507b87b8197f2bfd2",
|
||||||
|
"tokenizer.ggml.model": "gpt2",
|
||||||
|
"tokenizer.ggml.padding_token_id": "151643",
|
||||||
|
"tokenizer.ggml.pre": "qwen2",
|
||||||
|
"tokenizer.ggml.scores": "94e247e531e8b0fa3d248f3de09c9beae0c87da8106208a8edfaac0b8ec4b53d",
|
||||||
|
"tokenizer.ggml.token_type": "b178dbc9d1b2e08f84d02918e00fc2de2619a250e6c188c91a6605f701860055",
|
||||||
|
"tokenizer.ggml.tokens": "1d93f6679b23a1152b725f7f473792d54d53c1040c5250d3e46b42f81e0a1a34",
|
||||||
|
"blk.0.attn_k.bias": "5ce6617845f66c34515978d23d52e729c298d8bffa28c356a0428bef17142cf1",
|
||||||
|
"blk.0.attn_k.weight": "a960832a9e0e83e4d95402e5d1a01cc74300fcca0c381237162126330e1a7af8",
|
||||||
|
"blk.0.attn_norm.weight": "32c7d51cd0958f1f1771174192db341f9770516d7595a2f0fd18a4d78bd5aba3",
|
||||||
|
"blk.0.attn_output.weight": "c67e6e7e868354a11bf9121c70ee56c140b20eec611a8955e7dfe54a21d40a98",
|
||||||
|
"blk.0.attn_q.bias": "3e9e994eb1f03bccfc82f8bb3c324c920d42d547e07de5be83be12c428645063",
|
||||||
|
"blk.0.attn_q.weight": "dc12132f789b97cfa1e3f5775ceb835247fa67aa47400fd09c8f9f3769208583",
|
||||||
|
"blk.0.attn_v.bias": "a3fd0757b31fdc78af5ec320332d239c1a79d34e8804df06c5454e86955e8cc9",
|
||||||
|
"blk.0.attn_v.weight": "f43094a2134c7ee2dcc52aac3c8b7d9d64fb0295a8adb94cabfd49213f017b84",
|
||||||
|
"blk.0.ffn_down.weight": "18c2aec92db14f21976838a8c35d5575f80d0e4b1e05ccc0d8388d5877e80147",
|
||||||
|
"blk.0.ffn_gate.weight": "a3a1c4ef38f8f750eabadfe3d83bbb0f77941eec1cc1a388e51852e99c8691f6",
|
||||||
|
"blk.0.ffn_norm.weight": "b59b779c42d44b5c4cec41e39b4eb61e0491a07c1b3e946ccb5b8d5c657eda3f",
|
||||||
|
"blk.0.ffn_up.weight": "db64f09987ea59449e90abae5a2ffcc20efd9203f0eebec77a6aacb5809d6cff",
|
||||||
|
"blk.1.attn_k.bias": "a5c8c5671703ec0aa0143ff70a20ffdd67b5d5790ca1dfa5bba4e87e4071ed9f",
|
||||||
|
"blk.1.attn_k.weight": "835c7c7cc95b3cb2e55bd9cac585aa0760a033896621d3e06421f3378c540f7d",
|
||||||
|
"blk.1.attn_norm.weight": "f4c36fb6c14fce721fab0de78cc118d6f66e3a3d3ea0017bb14aade24c3c5434",
|
||||||
|
"blk.1.attn_output.weight": "cc1e80310c97cef068e48e40b7096f32fa2138519d6209c6a1a9994985999016",
|
||||||
|
"blk.1.attn_q.bias": "bc332780e66b0aac80ec5e63ac32344919a840db2fcc8f87bcef16a43a54138e",
|
||||||
|
"blk.1.attn_q.weight": "d766f06c925cce38d4b31b2165b3448e1fb49a7d561985f95d9cd2fcba52367a",
|
||||||
|
"blk.1.attn_v.bias": "9f486626fb6ed9ac84970a71e9b9818dd2758501fd3f61bb1c08540dcc7a8631",
|
||||||
|
"blk.1.attn_v.weight": "e873d1e5bd4f4d6abfd47c0f55119c2c111105838753ee273a03c5ccea25ce5c",
|
||||||
|
"blk.1.ffn_down.weight": "b3ce82b093f187344de04284b1783a452de1b72640914609b8f830dc81580521",
|
||||||
|
"blk.1.ffn_gate.weight": "5cd44ad237edaca525a28a3ac13975d1b565f576d6a8003237a341ae0d156f2e",
|
||||||
|
"blk.1.ffn_norm.weight": "4ac774ee8afaee119610c46aa1ff89fc6c9084a29d226075bc4aa4d2f15f746c",
|
||||||
|
"blk.1.ffn_up.weight": "042d81ab5f1983d85c81213232f3bfc05a9302d9dfaa98d931ebba326b6058b8",
|
||||||
|
"blk.10.attn_k.bias": "767ecfeacd60a2c2221ac4d76c357190849dd9cdf64ced418d9d0c7949101401",
|
||||||
|
"blk.10.attn_k.weight": "a9f3df343227537636be8202303453086375091944e498bad11e0b91e45e8c71",
|
||||||
|
"blk.10.attn_norm.weight": "01acd0e7b3e363f873dbfde6f0995ffcce83f5aaa10ff91c31dbf775035f6d5a",
|
||||||
|
"blk.10.attn_output.weight": "a531fe660769604ab869f01b203eb115e025cad4c0baeacdd1bcca99cf6d0264",
|
||||||
|
"blk.10.attn_q.bias": "356a02c9163dd660c1340fbe1e049b335ac6178891e00996131bba9ab4cb3e59",
|
||||||
|
"blk.10.attn_q.weight": "81be0cfb227339d83f954cd8dcf35828441211c6e1d184060e3eb76085041e2f",
|
||||||
|
"blk.10.attn_v.bias": "ed0450653284b62f8bf2c2db19c0ff7a6cf3cda1324d0a044c5e3db7bb692bd3",
|
||||||
|
"blk.10.attn_v.weight": "c1247ff7092babd2ed979883095b9aa022b2996cab1c77fb9e6176ddc1498d16",
|
||||||
|
"blk.10.ffn_down.weight": "fda7544965dc9af874f1062c22151c6cefc8ba08cbe15dc67aa89979e77b2de4",
|
||||||
|
"blk.10.ffn_gate.weight": "9f2632b1dee7304d10c70bd38d85bb1f148a628a8468f894f57975b8a2f1d945",
|
||||||
|
"blk.10.ffn_norm.weight": "94f8cbd6b17a4d5aabd93fa32930a687db3b11f086142f1cd71c535c11adcad4",
|
||||||
|
"blk.10.ffn_up.weight": "8dc2f8db0474939a277a3d89db34c3bcc3381cfea57bd05a8426a164634d9112",
|
||||||
|
"blk.11.attn_k.bias": "3b8e5a662b19411e3f6530714b766aad2ee41eebc8161bec9db0bc82d383a6e0",
|
||||||
|
"blk.11.attn_k.weight": "2c29f1ed1ce53ce9604e9ea3663c2c373157e909a0d6064a8920005f6d15dad9",
|
||||||
|
"blk.11.attn_norm.weight": "48f68a99c3da4ab4c9e492677b606d1b8e0e3de1fdbf6a977523f97b8c21ec31",
|
||||||
|
"blk.11.attn_output.weight": "5859f3838a94898b020c23040941ed88f4fcb132db400d0849f30a01f62c0f1c",
|
||||||
|
"blk.11.attn_q.bias": "c5ad89a5628f2bd81252ef44ef6bbcbff15c33ad16fba66435509b959c2af6d3",
|
||||||
|
"blk.11.attn_q.weight": "d102104e5d61c1e3219564f1d0149fd593db6c6daa9f3872460c84403323cfef",
|
||||||
|
"blk.11.attn_v.bias": "8653f7d48c5f75a5b55630819f99ecf01c932f12d33fd1a3ee634613e70edde8",
|
||||||
|
"blk.11.attn_v.weight": "e0a7c7d89b9f2d0d781ce85330022229126e130a8600a09d4a5f920f0bbd50b2",
|
||||||
|
"blk.11.ffn_down.weight": "4a22b3361eba8bbe1d9a6fda1812618e894c49f13bcacb505defa9badb6b96a6",
|
||||||
|
"blk.11.ffn_gate.weight": "484698b206760d3fd8df68b252a3c5bae65c8bf6392fb53a5261b021b6f39144",
|
||||||
|
"blk.11.ffn_norm.weight": "da69e96338cbe30882cf5a9544004387f5bbc0bcb6038e61ba2baabbd2623bac",
|
||||||
|
"blk.11.ffn_up.weight": "26ec74f1f504d1281715680dfbcc321db4e9900c53932fa40955daceb891b9aa",
|
||||||
|
"blk.12.attn_k.bias": "f94b49ec3e498f14f6bc3ebefe1f82018935bbe594df03253bfffae36bc20751",
|
||||||
|
"blk.12.attn_k.weight": "ae6323d0bbcfcea01f598d308993d1a7530317e78c1f64923e36d4b1649e9e73",
|
||||||
|
"blk.12.attn_norm.weight": "3784536a7611a839a42a29a5cc538c74ee4f9793092e5efe1b227b48f8c4d37f",
|
||||||
|
"blk.12.attn_output.weight": "46826c00b066829355db78293ab216e890f5eaaed3a70499ee68785189a6b0d9",
|
||||||
|
"blk.12.attn_q.bias": "b14db2d327ce0deec97beda7d3965a56c43e1e63dc9181840fb176b114cf643a",
|
||||||
|
"blk.12.attn_q.weight": "30f67df52ced06f76b6c85531657584276a454d6ec9bb7d0c7d2ca8f067f5551",
|
||||||
|
"blk.12.attn_v.bias": "57ab4b7e43f4fc5853bca7bfbb2702f8c2c391a49252a760abbb7b26330dc4aa",
|
||||||
|
"blk.12.attn_v.weight": "3ccd9da0cfe241cd33a63310f3ca6d81c5bc5a50d200bfea6612ac376166aca2",
|
||||||
|
"blk.12.ffn_down.weight": "a095774413198a83c549ce132d7c9684c0baef33145eaa889be370ef9c881c81",
|
||||||
|
"blk.12.ffn_gate.weight": "bb3b2bbdfb065d2a0a795909c53beec327781a4a7e974bf9f99c436cea459991",
|
||||||
|
"blk.12.ffn_norm.weight": "3b486c6cd97eb4b17967d9d6c0cc3821a1a6ad73d96b4d8fbf980101b32b8dab",
|
||||||
|
"blk.12.ffn_up.weight": "d020b82dd39a5d5a9d3881397bf53a567790a07f395284e6eb0f5fe0fef53de3",
|
||||||
|
"blk.13.attn_k.bias": "69381f8254586eba3623eceb18697fe79f9b4d8f2c30136acb10d5926e3ba1d0",
|
||||||
|
"blk.13.attn_k.weight": "c4d7a31495d71269f81b586203a50abea3a9e2985667faf258c9306ec6030f1d",
|
||||||
|
"blk.13.attn_norm.weight": "907da11075d16eda668dabe548af3cfd794df26b8ab53939af1344d91bec6fba",
|
||||||
|
"blk.13.attn_output.weight": "ca01cf6d2b8ece2fb3b0f56f1eb76194471ac27b54fe264f99c909f5eb7fef4a",
|
||||||
|
"blk.13.attn_q.bias": "2f5ecebafe03b1d485b93c41cff756ca57fb65b02e9d8336f14a3d26ab5d159a",
|
||||||
|
"blk.13.attn_q.weight": "f557f8acad7f0fa62da06b5da134182fe04a5bed8bdb269e316f970c9cc440fb",
|
||||||
|
"blk.13.attn_v.bias": "a492a88ae131e95714b092545a8752eaea7c7d2f9cb77852628ca8296c415525",
|
||||||
|
"blk.13.attn_v.weight": "d1220b1fe9f1cc0a5a88ee239d65fec900f5eaf6c448b6c2cbe74c81e15ed333",
|
||||||
|
"blk.13.ffn_down.weight": "53184e33440b49848a896304eb16a983efbc6b8bee0b93de8c8de716e1585fcb",
|
||||||
|
"blk.13.ffn_gate.weight": "684bf8896f148c851506c62717e45c426921b93c10d536ecdeb0fb28259a106d",
|
||||||
|
"blk.13.ffn_norm.weight": "6cb4e547ad8665eb7c174855c08afe1e5490fece66122522c1e9e8132d9064eb",
|
||||||
|
"blk.13.ffn_up.weight": "c64107897e38c06727075aba4ea7940b2cdd0e278b5c555dffb2790ef553bb57",
|
||||||
|
"blk.14.attn_k.bias": "2814ca9b160b16ae39557c9b629482fbe3a7592d372c1e1bf1ac59a2d578fde1",
|
||||||
|
"blk.14.attn_k.weight": "3377177396463afba667742972920ebb45dfdc37e9950e1f0e1d60a2f936b27d",
|
||||||
|
"blk.14.attn_norm.weight": "5cae870477d51dd35a6d22aaeacfce4dff218ffba693820ede6a4e11f02afd6d",
|
||||||
|
"blk.14.attn_output.weight": "3cfe9ccf3d48ae9e95b93a132a1c6240189a277d764f58590fb36fdbb714cad0",
|
||||||
|
"blk.14.attn_q.bias": "6a75acc2f090b2e67bfc26f7fca080ae8bd7c7aa090ec252e694be66b8b8f038",
|
||||||
|
"blk.14.attn_q.weight": "5ef45c86d7dda1df585aa1b827b89823adf679a6bb9c164bd0f97b2aa6eb96f1",
|
||||||
|
"blk.14.attn_v.bias": "5534480443e10ed72c31a917f3d104b0f49df5e6dbfa58d0eb5e7318120e3aee",
|
||||||
|
"blk.14.attn_v.weight": "58f45cf3240c4623626ec415c7d5441eaa8d2fb184f101aba973f222989422d1",
|
||||||
|
"blk.14.ffn_down.weight": "2dc82a0f20c05b77512458738130d8d05ce150cc078680ae7ee6dd7ed68d955d",
|
||||||
|
"blk.14.ffn_gate.weight": "d4a6c6f0fcccddfd1fcaa074846622f4a74cb22b9a654ab497abdc1d0dde9450",
|
||||||
|
"blk.14.ffn_norm.weight": "777e444932a0212ff3feac98442444e17bd8a98cb758ea3356697d0846d12c56",
|
||||||
|
"blk.14.ffn_up.weight": "6b75f6bd00195198447b69a417ed9d98f8ca28b3cb8be82f4bad908be0777d57",
|
||||||
|
"blk.15.attn_k.bias": "2d07211a58e6c2f23aa3a6dc03c80a7d135dfb28726b60b0e0fdd0f35ea5c37b",
|
||||||
|
"blk.15.attn_k.weight": "e77f3c0075a1810e70df956cc51fd08612f576cc09b6de8708dcae5daedb0739",
|
||||||
|
"blk.15.attn_norm.weight": "379a10d90609a5d5ba67d633803eda1424fc61ba5cca8d3bffe70c8b18b58ebf",
|
||||||
|
"blk.15.attn_output.weight": "402751c12ee9dbc9db5e3bf66a7b23ebe7d36c0500e0be67be4c8b1c4357fa62",
|
||||||
|
"blk.15.attn_q.bias": "acb37fc409ee725ceedf7a3a41b40106086abc47b76780728f781942c5120208",
|
||||||
|
"blk.15.attn_q.weight": "89cd3047a09b46ed2bb57c69dd687f67a1f0235149b30376fa31b525898e4a55",
|
||||||
|
"blk.15.attn_v.bias": "f081a37289cbe811978feb4da3ef543bdeb7355414d476f44e09b498da10cb2c",
|
||||||
|
"blk.15.attn_v.weight": "8404f242a11e6d512c9ead9b2f083cda031e9b269f8a0a83f57ee4c56934764e",
|
||||||
|
"blk.15.ffn_down.weight": "93438f43ee8cc4f1a7fd3840a6afdd5f02123e76db4f0d9474430c0100d148fc",
|
||||||
|
"blk.15.ffn_gate.weight": "ff935a2698843e87fad9dbf7125f53e460190ec71ee128b650b3fc027fe37bfc",
|
||||||
|
"blk.15.ffn_norm.weight": "4be80f199841cba831982e988451e1833c3c938a4d6ca1169319087bf0bd723e",
|
||||||
|
"blk.15.ffn_up.weight": "ee9ba63c66d71053e33551ddd519878bb30b88eeb03cfe047119c5c4000fb0a6",
|
||||||
|
"blk.16.attn_k.bias": "3f5fbabed4510c620b99d9d542739295fa6a262a7157f3a00a4889253f8341b8",
|
||||||
|
"blk.16.attn_k.weight": "8ca6eb139b281c257324cddea97a8e9aa7c048b53075cf00153123b967c27ee5",
|
||||||
|
"blk.16.attn_norm.weight": "290157f005e5aa7dddf4bd60100e7ee7b0baa7f11ec5c2cea5e0ead2aad3a4c6",
|
||||||
|
"blk.16.attn_output.weight": "b1f4d80a7447f08f1c331712527f750d00147f35c042442ade96fd029dadc5a1",
|
||||||
|
"blk.16.attn_q.bias": "e3e4e442ad4416791b468cad8de0d0d2d68c7e7df8d06002f4d49b4da9cb25e4",
|
||||||
|
"blk.16.attn_q.weight": "cc7392fa5bb1107d3816e7e7363de252d37efd4165d065e258806291ce0a147b",
|
||||||
|
"blk.16.attn_v.bias": "a7629830f2f6293e018916849614636d40b1bcd11245f75dbc34d38abae8f324",
|
||||||
|
"blk.16.attn_v.weight": "b6c7856c7d594437630929c8cf3b31d476e817875daf1095334ec08e40c5e355",
|
||||||
|
"blk.16.ffn_down.weight": "f9c0a777a00170990a4982d5a06717511bf9b0dd08aeaab64d9040d59bcbebba",
|
||||||
|
"blk.16.ffn_gate.weight": "ed88f11bc3176c9f22004e3559ccb9830a278b75edd05e11971d51c014bd5cd2",
|
||||||
|
"blk.16.ffn_norm.weight": "ab24abdcc4957895e434c6bb3a5237a71ff5044efb9f76c1a9e76e280c128410",
|
||||||
|
"blk.16.ffn_up.weight": "99f594dc8db37f554efa606e71d215fbc3907aa464a54038d6e40e9229a547ff",
|
||||||
|
"blk.17.attn_k.bias": "f236625676f9b2faa6781c7184d12d84c089c130d2a9350a6cf70210990f6bf1",
|
||||||
|
"blk.17.attn_k.weight": "c2a4f20cd3e98538308a13afe9cc5880bdd90d543449c6072dedd694b511ee1a",
|
||||||
|
"blk.17.attn_norm.weight": "5a9da4ee168311f487a79fc9d065a035432c6cafa8adb963a84954cf32f57a2a",
|
||||||
|
"blk.17.attn_output.weight": "d5df7031e354186ce65dc09d6f8a92eb721c0319816f8596b0c8a5d148ed0a2a",
|
||||||
|
"blk.17.attn_q.bias": "3212d5eeaa7ed7fac93cc99e16544de93c01bb681ae9391256ed4a8671fc6b00",
|
||||||
|
"blk.17.attn_q.weight": "d18cd9aa7ee10c551cb705549fa1ae974aea233f86471c9a19022dc29b63d0d5",
|
||||||
|
"blk.17.attn_v.bias": "a74ad11a1f8357742f80e2a0c0b3a2578fc8bbaf14c8223000767e07a5d79703",
|
||||||
|
"blk.17.attn_v.weight": "da18ac0e90884436a1cb0ad6a067f97a37f321b03c70b8b03bf481339fef5c80",
|
||||||
|
"blk.17.ffn_down.weight": "81a8a5d7a194fb53d976558e0347efbe9fdb1effffde9634c70162e1a20eff51",
|
||||||
|
"blk.17.ffn_gate.weight": "72870d83ab62f2dcd45f593924e291a45e4ae1b87f804b5b88aa34cfd76dd15e",
|
||||||
|
"blk.17.ffn_norm.weight": "cae39ac69b9bdaeefab7533796fdf11dbb7a4bdbdeed601e20f209503aafe008",
|
||||||
|
"blk.17.ffn_up.weight": "e7cb40b0842468507cec0e502bbed8a86428b51d439e3466bc12f44b2754e28f",
|
||||||
|
"blk.18.attn_k.bias": "8bfc02b94f9587aa125e2d8bbc2b15f0a5eb8f378d8b3e64a8150ae0a8ca3df2",
|
||||||
|
"blk.18.attn_k.weight": "434bc3b3332ea48afee890aa689eb458a75c50bc783492b0cbf64d42db40e8ad",
|
||||||
|
"blk.18.attn_norm.weight": "d6ffc09396c42a70d1f0e97d81113eee704d3bfc9eeae2bed022075a5dd08075",
|
||||||
|
"blk.18.attn_output.weight": "133f001f81f3b082468a7de67cb2e7a76508fce34bcc4dee7f0858e06eee082c",
|
||||||
|
"blk.18.attn_q.bias": "758d0e28bf5e660b3090aafb70e2a3191b4f3bb218d65e9139a086ceacaf599f",
|
||||||
|
"blk.18.attn_q.weight": "12d7b86fc1b09b9fa7f8b7ed43d8a410892cec8672d0c752f8346f6193343696",
|
||||||
|
"blk.18.attn_v.bias": "9efd15bab0519462431d6c6e8a5b7dd4e151dc449468097ee0ddca369c0ecc2e",
|
||||||
|
"blk.18.attn_v.weight": "f631231a79d4a2e9730fb2e386d8c18621eb3fb7900fbfdff5e6d52cc42db122",
|
||||||
|
"blk.18.ffn_down.weight": "874a2dddf456f3ab56b958b0860d71c8c680a6f89322c9bf6b2f32a113592300",
|
||||||
|
"blk.18.ffn_gate.weight": "4549ef8976c345a511df4a7133bdaf6fe387335f52dfd8a4605a8ae3f728c403",
|
||||||
|
"blk.18.ffn_norm.weight": "80c258a2536a860e19bfcbd9f29afa13214fbb4c34bde0d4da51287d354e9a59",
|
||||||
|
"blk.18.ffn_up.weight": "8b03308a581457a3c038b7a086f3cdf14941d7ad4107c4bd6d9d6b062fd00d73",
|
||||||
|
"blk.19.attn_k.bias": "e77f7b0c8e3e0a9b0d61918cd88371047752a1b02b1576936f4ec807d4d870ee",
|
||||||
|
"blk.19.attn_k.weight": "a2a318e93355230c0d0f95c441b080bf9c4914507255f363fb67a5e771d4d1e6",
|
||||||
|
"blk.19.attn_norm.weight": "9a4bdeb3970be21ac74a94c2c81eb36986533db81b78db6edec48d9802910d59",
|
||||||
|
"blk.19.attn_output.weight": "2369b103dd3947e2cef02b2669b405af5957fb3a7f9d0ff40646078c4b4317ad",
|
||||||
|
"blk.19.attn_q.bias": "e20bf427bef69059ae84a5d9f98f7d688489627f198fb6153def018ff9fd2e34",
|
||||||
|
"blk.19.attn_q.weight": "45a3bb3bdfd2f29dd76e5f78ddae73678b9a2a85dfaf609e460240ef5b7be2ad",
|
||||||
|
"blk.19.attn_v.bias": "a441f58a3e02ed86ee1819eefc9bd4e8b70d11b864a929d58a2c2ac0aeb8203d",
|
||||||
|
"blk.19.attn_v.weight": "30b0b04480c510450a7abb2ce9fa05c65b150a3cc4dc76f8916bf8d013f1b6be",
|
||||||
|
"blk.19.ffn_down.weight": "eebb9ab8fdb6a6efcfff8cf383adac9ec2d64aeeff703d16ed60d3621f86c395",
|
||||||
|
"blk.19.ffn_gate.weight": "3fef1493029298378886586478410b3d2e4e879f6aa83c07e210a7ce6481817f",
|
||||||
|
"blk.19.ffn_norm.weight": "e1be99ea1e8fb9678f7b8ba200f3f37e03878f3574d65d57bcd3a9fd796e2112",
|
||||||
|
"blk.19.ffn_up.weight": "f07cf25e09394fb69fe3ef324bdc0df9a4cecf3dc53070b8acc39e6d1689bf82",
|
||||||
|
"blk.2.attn_k.bias": "b29baa8221f125eff6b8ac1a950fa1d7cfc1bce7bdc636bf3df7d4065ab6466c",
|
||||||
|
"blk.2.attn_k.weight": "4bd0c179bced8bc37a09f5748c394e0cf50273942fb38a866e5cf50b6c96c437",
|
||||||
|
"blk.2.attn_norm.weight": "07b3edc6a6325c3428aa12f29bcae0be0de363ce61a6af487bc5c93fb8c468d9",
|
||||||
|
"blk.2.attn_output.weight": "056b5b31dbc81087c81b9d41c25960aa66c7190004c842ba343979644d7f4d88",
|
||||||
|
"blk.2.attn_q.bias": "479b6212401e097767c9d52b12a1adb8961c0fce9fcaaab81f202a9d85744376",
|
||||||
|
"blk.2.attn_q.weight": "f89196076f446a6dd8a9eee017f303504f9c03094c326449cee5a7fc0a97fade",
|
||||||
|
"blk.2.attn_v.bias": "ef9b1b986dbd9d7291027a88b67dc31434435b20e76e4f1e9d6273ebd31224f0",
|
||||||
|
"blk.2.attn_v.weight": "9322f4f00e85f8c0936845c51ca64b202a93df104f36886986a8452a8e4967a5",
|
||||||
|
"blk.2.ffn_down.weight": "7beac0d2440dc49af33ededb85a6cc3ba23ab33ad3ffa5760714b2ef84d94f6e",
|
||||||
|
"blk.2.ffn_gate.weight": "818a93864a5890c1f4dc66429004fad07645a50142350e9bff9a68fe24608a52",
|
||||||
|
"blk.2.ffn_norm.weight": "152c924d5514942ad274aafb8cc91b35c1db3627c3d973d92f60ff75f3daf9ba",
|
||||||
|
"blk.2.ffn_up.weight": "9c9579e600f209546db6015c9acfeda4f51b6d3cca6e8db4d20a04285fe61a37",
|
||||||
|
"blk.20.attn_k.bias": "fd22bfeffb63d818ce2ff1ea2ace0db5d940f7a9489b6bfc1ec4a5398848d7fe",
|
||||||
|
"blk.20.attn_k.weight": "f74439bc74c2f9252130c9c28384fd7352368b58bb7ce3f2444cf0288dfff861",
|
||||||
|
"blk.20.attn_norm.weight": "5c15d2613df87be6495fb7546b7dcedd2801d12fa5ecc02c877df889330e8f37",
|
||||||
|
"blk.20.attn_output.weight": "6731a39286a67f6859832f96695732e579e14e0c36956eccd1edce3db11595b8",
|
||||||
|
"blk.20.attn_q.bias": "04466e5a3f454a19b9b433fc2585396feac780027ece7ccb4e4bb3e406fc14d8",
|
||||||
|
"blk.20.attn_q.weight": "ead4c71daaeb17bf20d014a34c88b97f238456488e815ae0f281a5daf6fc99b8",
|
||||||
|
"blk.20.attn_v.bias": "adcc848e043025de9bd55ccb14dd8fb6343e8b5185ed07e12964be41d0faf99f",
|
||||||
|
"blk.20.attn_v.weight": "81bfc23f83526386a4761c2c16b6a93cd0bbf9d846c1a51b82c71f1474a465f1",
|
||||||
|
"blk.20.ffn_down.weight": "9bf660af3bafad919d03173c89a65fc9c89440a76c42c9e55e4d171076f3c17f",
|
||||||
|
"blk.20.ffn_gate.weight": "c04b4f3ccce44917ee228b998e2c19dd702aef10a43413afb152e808b5ac5c42",
|
||||||
|
"blk.20.ffn_norm.weight": "3d5b555d7746a71220143c6b8fff5ce4eb63283d9d9c772f1233d848f69f4ff4",
|
||||||
|
"blk.20.ffn_up.weight": "d7a196505c39e5469dfc7c6958bdbb54e93629ac1a047a6663ed96b318753094",
|
||||||
|
"blk.21.attn_k.bias": "4db1f48e5c6a3bc5720a5da813bbef08283e6269e12d83f8a9c54e52715d8011",
|
||||||
|
"blk.21.attn_k.weight": "c687b2f0e132a5e220a2a059b61aa2a537f37d8a674d7709f87880637b263b31",
|
||||||
|
"blk.21.attn_norm.weight": "ec23b0ff847a4b45585ab8e04f10fc20bb1637c5f1fbcdc4d73f336bcb5d1bd0",
|
||||||
|
"blk.21.attn_output.weight": "01255390576316c1731ef201e32c6e934eba356c28438cd06d9027ac6a3ff84f",
|
||||||
|
"blk.21.attn_q.bias": "3098f37205a15418e1681e407c82b7ce7c6fda6c6826b0590a13e1b68a38a1ea",
|
||||||
|
"blk.21.attn_q.weight": "30ea62cbb702a5359229dc96819df17ee535e2e9988d044b005c73ea536e1005",
|
||||||
|
"blk.21.attn_v.bias": "7bbedb2c22a04737f21993115701d4a06b985b7ca3b64681f53cd1be8d7ea39e",
|
||||||
|
"blk.21.attn_v.weight": "e11905e63579e36fbee978062af7599339ae29633765a4835628d79a795ec8df",
|
||||||
|
"blk.21.ffn_down.weight": "84def2ffd8aca766f9ce12ed9ac76919ab81eb34bdeae44fa4224417c38af527",
|
||||||
|
"blk.21.ffn_gate.weight": "4e99f05377b4a0b8d875045530a5c59dee6a46ac8a45597f6579f6fdfa800787",
|
||||||
|
"blk.21.ffn_norm.weight": "af48f13d03fba38ff8794a5f5005e666e501f971ca2e30bbded2777a8096f37d",
|
||||||
|
"blk.21.ffn_up.weight": "a29541c39a6acbc364be86994632a5bf55d701027cb7f23320f8c6d55ee42c91",
|
||||||
|
"blk.22.attn_k.bias": "c97f84db6c75422df6ef5768676d4e9abefaa3b8337aa2730ff260f8fc350480",
|
||||||
|
"blk.22.attn_k.weight": "af9a0c56f68779513e95be11611b7be6175ddae27d48bee9dd72fdbf05f6cbfa",
|
||||||
|
"blk.22.attn_norm.weight": "1c7518eb5bcff4a202c6f4a2827f14abd76f9bcc64ce75fe9db60b69437a5c9c",
|
||||||
|
"blk.22.attn_output.weight": "1abcf1f3caa2f59dd018646b93f9cf8fd30d49e98a473e6a8704419a751be46f",
|
||||||
|
"blk.22.attn_q.bias": "7221e01cb692faf2f7f8c2eb6e2fac38a1b751a9c9fdb6a21a0a936eb0bf4b96",
|
||||||
|
"blk.22.attn_q.weight": "faaf8fb7b6c19f343d47f3ea6b57151fb46c787e0b3bd2c292fd327d3d4d8e35",
|
||||||
|
"blk.22.attn_v.bias": "3ec05942e82d735de99dfd0d8228d8425e63e2fc584da98b3326bdef89ecb2e5",
|
||||||
|
"blk.22.attn_v.weight": "42e7b0ad06db76227837da9d4e74b2db97f3df4050ecb3a87cb9b55e08dfcb42",
|
||||||
|
"blk.22.ffn_down.weight": "87ef98ad2d0e824b0fa5ad8aa18787162922e527c9b1b721a99bc07d3bf97c82",
|
||||||
|
"blk.22.ffn_gate.weight": "562d6e5a1654b03aaa0e33864d23c10297fd4bcaa72d30fac69fb771ee1df9d6",
|
||||||
|
"blk.22.ffn_norm.weight": "f8a405dee467749d59427ce05cdd4b9c11bb18934a89258ea461f013b7d251f5",
|
||||||
|
"blk.22.ffn_up.weight": "90e1f4ae4062649d4d838399eb353e8bb8d56a49982b6a7f64aa3945377f7187",
|
||||||
|
"blk.23.attn_k.bias": "9ad22178a85f3be7e25f5aff462f31627466364f2f5e92f265cc91db0da9a8a8",
|
||||||
|
"blk.23.attn_k.weight": "d813beffb10f03278f5b58eea0f9d73cdcb7b5b4045ae025c379592e854f7dfd",
|
||||||
|
"blk.23.attn_norm.weight": "f583c9836044bdb056d6f8911088ac28add68e500043ae1f97b5d9158fe3d769",
|
||||||
|
"blk.23.attn_output.weight": "02789911ac3b97f6b761e958b7dd6dc7da61a46a1be92bd0b346039ca7ecd2b2",
|
||||||
|
"blk.23.attn_q.bias": "38c4970fb9b4f7e4a139258a45639d848653814b4bc89ea9849709b13f16414b",
|
||||||
|
"blk.23.attn_q.weight": "eb694be9a5ab5858b8dab064ee4cce247dc757424e65282989bd4d015b8580ce",
|
||||||
|
"blk.23.attn_v.bias": "0a25f6533aa7e7a152a4b198cf6c411c2408a34afa4f161bb4d5ffba2f74e33f",
|
||||||
|
"blk.23.attn_v.weight": "187e1bac6b70f74e6364de226565aa8275ee2854d09cbe5895451a689596049e",
|
||||||
|
"blk.23.ffn_down.weight": "88880dd9ba7ee80ade972927f810b5d2c30a69520c615190b27f9daabc0a8c5a",
|
||||||
|
"blk.23.ffn_gate.weight": "5abec63197935ab3eb8e6de0a5307396ec46cdb1cc5de25d87c845f3c4a3e887",
|
||||||
|
"blk.23.ffn_norm.weight": "60e1f5e6310c3a531c554a6bb7cd883aed58db1e51853f739436ea461c1843d7",
|
||||||
|
"blk.23.ffn_up.weight": "3d7f502771743f4a634188dfcd8b8a384fb07467ca8528366aee59ddb25b7bce",
|
||||||
|
"blk.3.attn_k.bias": "0b6b442ebbac29c8c4b67e8e3876d0382dd2dc52efdf4ab0ebbc6f71b6252393",
|
||||||
|
"blk.3.attn_k.weight": "480f40584fbda692c26f2cee45f5923780b236f8b4e8ec7bbee0237777a0918d",
|
||||||
|
"blk.3.attn_norm.weight": "39872be2af31bc9cd6b583ebba6fb759f621d586d66e5a2fc0b85991615a8923",
|
||||||
|
"blk.3.attn_output.weight": "924b2c80d8513bf637f8ebb3756a340d9cf2243de723fd08d7f5dccd46b3f8b6",
|
||||||
|
"blk.3.attn_q.bias": "863c9d848156847a3fe9bbc44415a4395245b5d13e95673c014fdb71e494ab0a",
|
||||||
|
"blk.3.attn_q.weight": "bff73ee5de92fba8f6c089bbb19ce57e17ab3c9c29295712804bb752711b882e",
|
||||||
|
"blk.3.attn_v.bias": "e1b6fea126e86189112fcdfee79ffc66a087461527bc9c2dc52dc80f3b7de95e",
|
||||||
|
"blk.3.attn_v.weight": "7812b7f5133636f06cdbb4dcc48ef7803206538641b6c960777b37f60a8e6752",
|
||||||
|
"blk.3.ffn_down.weight": "00b393d6a7e3ad9b5224211ccdbc54a96aae151f24ed631764ac224972a6bc82",
|
||||||
|
"blk.3.ffn_gate.weight": "cfd63fa3a038af05dc53c6eeb3c192f1602f26ff24cb840bcf1510fcb37b5513",
|
||||||
|
"blk.3.ffn_norm.weight": "7389fc240a282949580ea2f5b0d7973ac79f32f76dc0155b537bb6b751f8e27a",
|
||||||
|
"blk.3.ffn_up.weight": "2a945f47090df9cb16f92f1f06c520f156f8e232182eaaed09f257b8947a2a62",
|
||||||
|
"blk.4.attn_k.bias": "62533c31f0de498187593f238c6597503fef2a92e920cd540a96bc5311b3b2a0",
|
||||||
|
"blk.4.attn_k.weight": "93e829868bffd980a8e589b9c4566cd81e6ce4296a5f357a2ae93febe1284156",
|
||||||
|
"blk.4.attn_norm.weight": "9e0aaa4bbdd1389890f8abec20533f3ab16d61b872b1a8dbd623023921c660a9",
|
||||||
|
"blk.4.attn_output.weight": "74467d6f44357d67f452ac49da861468b38e98057017bd38bc9a449f9d3538e6",
|
||||||
|
"blk.4.attn_q.bias": "8e6d9026fd69b314c1773c5946be2e11daf806ef22a5d91d744344fd30c58c59",
|
||||||
|
"blk.4.attn_q.weight": "e5bfbafd94a4d530f3769f5edbba8cc08d9b5bee8f66ebf4cb54e69bc0b7f63b",
|
||||||
|
"blk.4.attn_v.bias": "20c570f92022d9905eb85c0e41d1fdb30db22007a9628b51f512f8268d6c34a2",
|
||||||
|
"blk.4.attn_v.weight": "9638d459d61da03c9dd34dad985e03c43b4f8a5bc9701a82153478329b0517e0",
|
||||||
|
"blk.4.ffn_down.weight": "9d91b06e89d52f4365dece7eaeec50f81e52cb2407b333248a81e6e2f84c05b8",
|
||||||
|
"blk.4.ffn_gate.weight": "bf6350a79c6a6ee9146edfd788b88d4a4c2b54db1aa0adcc1464dbba8a84b646",
|
||||||
|
"blk.4.ffn_norm.weight": "11a70a6b9f7ce336292f4e3a2c6c92d366d4ee4306ad4fdb1870fde107e9cc31",
|
||||||
|
"blk.4.ffn_up.weight": "64f23f493d02b147a72a59605e6b7dd1c4c74f6813a38a2a60818bd66f697347",
|
||||||
|
"blk.5.attn_k.bias": "f6c2c279c0ed686f298ad1e5514b5cd882199341f896abbb2c2129d4c64ce9c5",
|
||||||
|
"blk.5.attn_k.weight": "0e682f75870abf9efaca10dac5f04c580f42820ecf4e234d43af967019acb86f",
|
||||||
|
"blk.5.attn_norm.weight": "01efae7653705e741932fcd79dff3be643d7e97f4b5719b887835dffe44b3a82",
|
||||||
|
"blk.5.attn_output.weight": "69e841d00d196acc489cd70bc5ffbbb63530ac5fabb169d40c4fb3a32ebb8ed8",
|
||||||
|
"blk.5.attn_q.bias": "f3304d76ccd44fed887565857c8e513b1211d89a5d3e81782de507ab3f6fc045",
|
||||||
|
"blk.5.attn_q.weight": "98612a6b7920a247853ada95c240807d4ca8e43604279e7a2fc9bb265ae40469",
|
||||||
|
"blk.5.attn_v.bias": "39940a9b353ceed3edfd4a39b985c9520490aa1b9f11749c94fdf6d879d1a259",
|
||||||
|
"blk.5.attn_v.weight": "839f84b828cf83aecf479a0dc7bc86cce05145ef77dcf29916dc3e0680f5b665",
|
||||||
|
"blk.5.ffn_down.weight": "1f48cbb0960f15e06ab8a3754ade792995a655856389ddbca629c07e89d1b114",
|
||||||
|
"blk.5.ffn_gate.weight": "33d8219fce3189e1aab376039896eebd4ad36ebd26a8278cd19b26e4357e4f81",
|
||||||
|
"blk.5.ffn_norm.weight": "0f4a0f83d37127fa4483f2905cb4f38ef6ddc71584b6cb05632c62a9af313dda",
|
||||||
|
"blk.5.ffn_up.weight": "22a64a11e5f0a1ff45ca327bf9e1efa258f085ff6a96edc398b7474f725b4514",
|
||||||
|
"blk.6.attn_k.bias": "baa91df99d4df2d25e8d590bca4e334b97f2d9aa3df8e748fedc8a6188499111",
|
||||||
|
"blk.6.attn_k.weight": "121f3b9f4b9491996499392e2688a929cafe102a67920b4cb2a039349c43d8eb",
|
||||||
|
"blk.6.attn_norm.weight": "b4cf987e923d71f2f84c58d20ea8af7576b225bf61952145b489fdd395e3d411",
|
||||||
|
"blk.6.attn_output.weight": "a112642150a138d54b2a4038042fd33619035a35694771e966f3575856c635d6",
|
||||||
|
"blk.6.attn_q.bias": "a97ea10469cdfa3fdddf8bad6de683ef99f6170eb8d29d15dcf6bf4bce37c5a3",
|
||||||
|
"blk.6.attn_q.weight": "d80c787019317a87361de6bbc7df6701357216bdd9b404522cede34a719a5500",
|
||||||
|
"blk.6.attn_v.bias": "d846269db9cd77ae28da26ba0914cace1b6754bd5301af9c44607085dfcbd2d7",
|
||||||
|
"blk.6.attn_v.weight": "06567c433e8a391647633291b50828a076ad7c2436106bb9278c60a3f8fccb3b",
|
||||||
|
"blk.6.ffn_down.weight": "f15f66f56b3c474eac8c6315c5fff07c3e29c6e483d7efd4d303c7f43814be91",
|
||||||
|
"blk.6.ffn_gate.weight": "47768f89c6da8eefb29adb766ff4eb38c9dfd79320bbc1386248319fcbcf567f",
|
||||||
|
"blk.6.ffn_norm.weight": "7f8195e6b148212967145fc9d86ce36b699cff0de026042245c2d344f1ef8510",
|
||||||
|
"blk.6.ffn_up.weight": "53d7707ae4347aadb445289f9f87a008b72df5cb855b00080a605442fdd8edf3",
|
||||||
|
"blk.7.attn_k.bias": "63e274df3217dde25b8369a383e480fe4f6b403a74385f15ac0b5db71dce2744",
|
||||||
|
"blk.7.attn_k.weight": "f6fce88602f5945eee09767acbcad387d132614e6da39ae359f2bbf380d94b1f",
|
||||||
|
"blk.7.attn_norm.weight": "bbf5dc7336c0f9a511afef6bf5efeffd78f1b83940850c3eb7eb20c621b75656",
|
||||||
|
"blk.7.attn_output.weight": "d9fb907a138396a859cecbfcb377927308dc93c24c7fb52dba5eb59265feadec",
|
||||||
|
"blk.7.attn_q.bias": "f02ba1318346af77e309f40aee716e2de7ee8cab67e67b17636db9bf40894fb0",
|
||||||
|
"blk.7.attn_q.weight": "54a691e824be287a61c35c172edc01922ed792d2addeee029afc17ba6c7e11b9",
|
||||||
|
"blk.7.attn_v.bias": "3a4f182f51e84ce862d558fb2751b91802b65d74596bb14d624808513a8a83ec",
|
||||||
|
"blk.7.attn_v.weight": "a142fe6e106d3ab484e2dc6f9c72b8fc0a385279dde08deb1ad1fd05ac25deb1",
|
||||||
|
"blk.7.ffn_down.weight": "8daf7e8c430d183a4d6ab3eb575fafa4b5e31689f68b290c8b370411ad9d0f12",
|
||||||
|
"blk.7.ffn_gate.weight": "a2a786b45eb660994254b48e2aaf22f3e9821cfb383dee0ba04cc4350a2f8e72",
|
||||||
|
"blk.7.ffn_norm.weight": "73828bbc8c9610cc139fcf03e96272648cdc291263251fe3a67367408deb69e1",
|
||||||
|
"blk.7.ffn_up.weight": "e85dd0f63fed449ce16893c5795ea6a050a2d7a66d9534410a227e22c905dafa",
|
||||||
|
"blk.8.attn_k.bias": "91a752a6e2c364e5ee6a015770fe289aece4911ae6c6bbfe74ac52f465465f93",
|
||||||
|
"blk.8.attn_k.weight": "99c069e92c43a2efb74e23188256b3cabbbe06399878e681ce203a05d5da378a",
|
||||||
|
"blk.8.attn_norm.weight": "c76d36d3cc06aa2a9edb1abf9f602bb7ed61ac9d61f8ef7ed736a1e619abe717",
|
||||||
|
"blk.8.attn_output.weight": "ee5ff156a2625e1f203f65e69b514f9df04bd9a5e82b28e3876e16cf1c6f65c5",
|
||||||
|
"blk.8.attn_q.bias": "8fbd868a93b330c8b0418b488c5301f42a7eb0c58445a4e515d56777f1d96ed5",
|
||||||
|
"blk.8.attn_q.weight": "9f20ef86e80098ba52a3a31ebcc315bea3a614dac9cba7ac1db02f156db9b577",
|
||||||
|
"blk.8.attn_v.bias": "c4813571d5d618742183a7890c0b89cd7f18e210c758f63aad564659bc38a26d",
|
||||||
|
"blk.8.attn_v.weight": "ea88e1a4cf8bd56e9a88ada427d2b0cd352234827640757ee2a9ed594fb67a53",
|
||||||
|
"blk.8.ffn_down.weight": "b0d1a7495811580b189aaa3e20ea871d6d01ed7b6c23e59825078ef786944ff2",
|
||||||
|
"blk.8.ffn_gate.weight": "0a17c0caa0b06721c49b59b2a63a5dcbf744dd1cffa55962b404ba910c658a62",
|
||||||
|
"blk.8.ffn_norm.weight": "f15f109d4a8e9d1ff7c71fa5bc6373df7ee80c5f7d1de3fa0d4849d747e36bcb",
|
||||||
|
"blk.8.ffn_up.weight": "bbf4c5c4c5c8a0f9ae8b88e3cc8b86f81b98148722d5a350995af176c0b774f2",
|
||||||
|
"blk.9.attn_k.bias": "a7f60d962686b8ca60f69643e0e0fa8614688be738fb0b1c6bd54de35c2beb5e",
|
||||||
|
"blk.9.attn_k.weight": "dd80ce4adb00e338fc04b307e4c18a27071f4ba4397184a24d765e6e4a268ef4",
|
||||||
|
"blk.9.attn_norm.weight": "721e6487547e2b3986ab4b4e2500ceade59d908bccf4436e1e8031f246deb2bd",
|
||||||
|
"blk.9.attn_output.weight": "5a800af39107b363861e5f5173483cdcd644d8ac3b0c8a443b9c759d71285db8",
|
||||||
|
"blk.9.attn_q.bias": "0a19b4925ea8ca8067acc909b058adc327de3874cfc94cc9eb4a106d3f370123",
|
||||||
|
"blk.9.attn_q.weight": "93e84906684c0c7ede79967236d9fc8344da84a9f1daa04e8295c2c9b6b26a24",
|
||||||
|
"blk.9.attn_v.bias": "615421f812f821e230ecde4e6da35d868823248355ce7e4e51e2d650ead565f9",
|
||||||
|
"blk.9.attn_v.weight": "7f4913e289aefd9ceecbdaf9767b1e95303f5d59dd67ecb2cc15768477f4d08e",
|
||||||
|
"blk.9.ffn_down.weight": "95d1b3933221e87dc4af70dd566daec9498bf358070b8d26f1fc70766a84a152",
|
||||||
|
"blk.9.ffn_gate.weight": "530f2d04f6a1fbffaaa5f2fbc3a328ebed7b330e3af14b4fc7d8a51b13ad8d42",
|
||||||
|
"blk.9.ffn_norm.weight": "28077de416217ea1df94b96017bef4cc562ab62e51b1a03a671c70abc29ce52a",
|
||||||
|
"blk.9.ffn_up.weight": "b87b6190778aaee4695938e24ac6c90dbbee6dce7c5c2ab5bc26ba4564581822"
|
||||||
|
}
344 convert/testdata/c4ai-command-r-v01.json vendored Normal file
@@ -0,0 +1,344 @@
{
|
||||||
|
"general.architecture": "command-r",
|
||||||
|
"general.name": "command-r",
|
||||||
|
"command-r.attention.head_count": "64",
|
||||||
|
"command-r.attention.head_count_kv": "64",
|
||||||
|
"command-r.attention.layer_norm_epsilon": "1e-05",
|
||||||
|
"command-r.block_count": "40",
|
||||||
|
"command-r.context_length": "131072",
|
||||||
|
"command-r.embedding_length": "8192",
|
||||||
|
"command-r.feed_forward_length": "22528",
|
||||||
|
"command-r.logit_scale": "0.0625",
|
||||||
|
"command-r.rope.freq_base": "8e+06",
|
||||||
|
"command-r.rope.scaling.type": "none",
|
||||||
|
"tokenizer.ggml.add_bos_token": "true",
|
||||||
|
"tokenizer.ggml.add_eos_token": "false",
|
||||||
|
"tokenizer.ggml.bos_token_id": "5",
|
||||||
|
"tokenizer.ggml.eos_token_id": "255001",
|
||||||
|
"tokenizer.ggml.merges": "902a060cac8884a5793d2a857dd2e53a259de46c8d08c4deb243c239671e1350",
|
||||||
|
"tokenizer.ggml.model": "gpt2",
|
||||||
|
"tokenizer.ggml.padding_token_id": "0",
|
||||||
|
"tokenizer.ggml.token_type": "b7a352ccd1c99d4413bcf452c2db707b0526d0e1216616b865560fab80296462",
|
||||||
|
"tokenizer.ggml.tokens": "815ac90ff23565081522d7258f46648c8a0619eb847a9c7c31b238a9b984e4ae",
|
||||||
|
"blk.0.attn_k.weight": "6fcfdb466f9ceb1229404ce4ec4e480751b8d00da12707a11783dad7256cb864",
|
||||||
|
"blk.0.attn_norm.weight": "6063317f731371864049c7704a70772f1eb632194201ebdc2ed0f8e483507c72",
|
||||||
|
"blk.0.attn_output.weight": "920f49716a1e2fc73b6794ec777947f1c122701e63ed302422ac89e90f06e9da",
|
||||||
|
"blk.0.attn_q.weight": "ddbcd7cde197e632564ac58e4f25d9e3a8ca52917329eeb6081eb41a797932ab",
|
||||||
|
"blk.0.attn_v.weight": "318fc02a189d87420f0cbf57f47f11e00c21ec1ed472ce0a2a895b44f7fa0fca",
|
||||||
|
"blk.0.ffn_down.weight": "aa71975b6eb1f4c77b03d2ac4a194cf8d95718efac741bb12f0f3ff79a27f9bc",
|
||||||
|
"blk.0.ffn_gate.weight": "42967702fa0bc738b88dc50007ace26dbe74a5a9e0978124dd093f818241a9e1",
|
||||||
|
"blk.0.ffn_up.weight": "5282c8788b086bd30f46525e7995a17464882a72703fd27165491afdd8bfd4af",
|
||||||
|
"blk.1.attn_k.weight": "cd248882e64fd2c3402c44790ebe12440133dc671b6893fdad0564c461973adc",
|
||||||
|
"blk.1.attn_norm.weight": "ba84e1c8fd30af6ec94208db4078befac8c921aad3acb887812887f3282ea2be",
|
||||||
|
"blk.1.attn_output.weight": "2efa3ef7c5666ccceb05e339b83ad680cc0d2c3ec78203f5da5959f23a80e14f",
|
||||||
|
"blk.1.attn_q.weight": "5106f2e255358a1303c22e8b5f0ec044852bb30a866c52cabefd30017a7a6b7d",
|
||||||
|
"blk.1.attn_v.weight": "a211a634a1a5df1d5f973645438be0461dd922210f9747c6b04e386c7f1ebe95",
|
||||||
|
"blk.1.ffn_down.weight": "37093afe48d32c578ec956c9ed85242cd000d6aa979e60526aafa10c822dbb10",
|
||||||
|
"blk.1.ffn_gate.weight": "469860819e9159caefb1aad0bc66db790f3393f05fd87b08e52256a7ed256543",
|
||||||
|
"blk.1.ffn_up.weight": "736742c97d35d1a011f9cafd3c0ce947ad559bb2fba6da73c816f6bfd0fa9aeb",
|
||||||
|
"blk.2.attn_k.weight": "92c219d92804d832ab404bd6dc7339c90877bb7cf405dd030c121f8b27757739",
|
||||||
|
"blk.2.attn_norm.weight": "61e4466069474b76b6d1e702566420eb669faf3556b00ff7b824784aca13a2d6",
|
||||||
|
"blk.2.attn_output.weight": "d2fb38a2b2171fd91caf037faa585a62225819aa232d86fd4f7f9d2c3c8a45e9",
|
||||||
|
"blk.2.attn_q.weight": "f6faf5cc6844e3daa4f9f68d90f5458c64879de68a7728860e38374e30c3429d",
|
||||||
|
"blk.2.attn_v.weight": "f340ef8f7341d987a6f37c0e9afe0aef5be67be00c0ce5f57612daf73319cce1",
|
||||||
|
"blk.2.ffn_down.weight": "c7be61a701d779860b621b143fb6365b607bf99ec7c0f153b07908ac8120885a",
|
||||||
|
"blk.2.ffn_gate.weight": "b64f0878187bd3392abfa4c3e8ad2f8b4c133903e54246747ff8f3b4639ad83e",
|
||||||
|
"blk.2.ffn_up.weight": "50b11c712652e90ee7428dbb45cffebb80662ac982bc72bd9eafff361b5eb5a8",
|
||||||
|
"blk.3.attn_k.weight": "2b7bcbe9ee5c9c630c8c8d7483887e78b73581016f4cbb6933db2a147a25f431",
|
||||||
|
"blk.3.attn_norm.weight": "0181dac7f4eee7252980323e8032cf339bef2046ce0a16c0fd72af7c98a8a37b",
|
||||||
|
"blk.3.attn_output.weight": "aef8843b636ce231da9e7c9acbee197883cc15df0e2887709324c6a50f16da7b",
|
||||||
|
"blk.3.attn_q.weight": "55404130fa10e81322d33eb378aa0de31a92990ce7730f1338c0ace0406bb1b1",
|
||||||
|
"blk.3.attn_v.weight": "76f7fb8040d82b957d689ce34fea2302a6640ad5bbaa0052ad2b7ebce270c33d",
|
||||||
|
"blk.3.ffn_down.weight": "648628933eff3b357c3729c33c5b1ae51c28e59b9c19acd1601a2ff7c5d5d9a5",
|
||||||
|
"blk.3.ffn_gate.weight": "6a588885d16e98d5f50ebed05af089154f680085ca9c97691e5b489088630a4a",
|
||||||
|
"blk.3.ffn_up.weight": "e12455a1d702f4986e1a663493e3d5102b367af74d45557522002a35d63ecac2",
|
||||||
|
"blk.4.attn_k.weight": "40d943380a8a85e4eab147934bf6e16f23cc8ab753f6636526382c074d182288",
|
||||||
|
"blk.4.attn_norm.weight": "4ab2c098983d4599fe540eef624c4df954adb7473faebda7471ef0ba4134814c",
|
||||||
|
"blk.4.attn_output.weight": "d14b91e40f58bf4a3c8c2eca0b12bb541de406574af39027d56f6c588a147082",
|
||||||
|
"blk.4.attn_q.weight": "e1224960a3562107488589f883fa32414bae41712fa8dbd47c5f3e3a7801452f",
|
||||||
|
"blk.4.attn_v.weight": "063f297bc4aa6e709fc32c4c32e35af7d07d80e83cb939b76adbba858006c03d",
|
||||||
|
"blk.4.ffn_down.weight": "f88a18020c5e1caaa29596895eb348e76ee5bfad27ed57651a86cd8cd1f9b5aa",
|
||||||
|
"blk.4.ffn_gate.weight": "48e7e1eed3fb52e92e61d3557dd0ec002418327090e034ce4322fd68542266f8",
|
||||||
|
"blk.4.ffn_up.weight": "1ca8a7aa17355b6ce0d9ad5539fdad3899fa47fd359c285fbfb31f19f47bf073",
|
||||||
|
"blk.5.attn_k.weight": "2bdf15f8e73d068d972380f25d207004cf0bf3b5bfa46946803ba6fba07d9175",
|
||||||
|
"blk.5.attn_norm.weight": "60448d7cde6e1b6467aa31bdea012e39cdb08c88081cee7d102dca4f93f766ef",
|
||||||
|
"blk.5.attn_output.weight": "f9f687d7c457537f9fca8a4087a59f1c3bebfaf5537b94e42c831a13224f7799",
|
||||||
|
"blk.5.attn_q.weight": "987db7a2ad68657a92625e1980effbb1f79697c2183f2b9f3b3a0570c51b0ab9",
|
||||||
|
"blk.5.attn_v.weight": "cf696891148f3e4783ad1d20f93462ae091eb8651c656bba9b662253b6263e02",
|
||||||
|
"blk.5.ffn_down.weight": "c0662b0bd0929136005fb9d691fdd9b2c33867d9ce9622339a6a456b720b059a",
|
||||||
|
"blk.5.ffn_gate.weight": "200bbdfab615d7a3a84719b6ced7751e3ce52757ef212d96f87798bc1de5e987",
|
||||||
|
"blk.5.ffn_up.weight": "df5d23e7e035fb1b9d163da7ddfdfe38da6a37e86e96534dc02ad20f011b55b3",
|
||||||
|
"blk.6.attn_k.weight": "c0dae2d272a7c5a2fa004bbb8475dbab362fc1f6d008e73d5a4434a9382ac6ba",
|
||||||
|
"blk.6.attn_norm.weight": "51c57ac8b55e04354d5dca6bb9c0cf4177639d3b038e80209e33036209688f64",
|
||||||
|
"blk.6.attn_output.weight": "229d97892c62f85bcdf431675250e01c976ad69ffa450b01fb543bf88f14a2fb",
|
||||||
|
"blk.6.attn_q.weight": "c20e49621821bd46ed156e6823864a5bda4f317750e71ab8dc54e44eb48cf7c2",
|
||||||
|
"blk.6.attn_v.weight": "53ceb1a2ee43fce3c7b5b33c58a9fc5ee7f44dc1c6f29bc9dbefc37582102dc9",
|
||||||
|
"blk.6.ffn_down.weight": "7923c943b7629d560a032d1efa210d1d75c6692140f1be94464ee7ed24f44ed0",
|
||||||
|
"blk.6.ffn_gate.weight": "57593d350361af753a6a39f53b066282634c0fb44f396f6f2966a574b01d8f8c",
|
||||||
|
"blk.6.ffn_up.weight": "327b6a7a387098b8899d3ded04a4d4e7c658ca61b80d4e7b17594be232721602",
|
||||||
|
"blk.7.attn_k.weight": "9ca48b87a10116fd8868e62b76f211d4bb91f166096be9061439ee2e1c3a5c20",
|
||||||
|
"blk.7.attn_norm.weight": "cd56cfcc4e2ad6b96e23ea7b0d32b4caf236107d99a0b22c56760b62e63c8cfd",
|
||||||
|
"blk.7.attn_output.weight": "7352b509a03cae2491ffc060e577d189341a0f861233f18c96f9d275dc4234bf",
|
||||||
|
"blk.7.attn_q.weight": "2b3791c8c008c33ddbe12bedba8191322ceea2dcce5cf0eb7a93d40ad254e672",
|
||||||
|
"blk.7.attn_v.weight": "3ae721d52466487a3d48150581e57f6d64ea1e83ab929f23b28c3d777422eeb6",
|
||||||
|
"blk.7.ffn_down.weight": "3b6fa8ececdb3c34af3a5363863d6f94289c1c95bf47fce3a3ddcf184c5f0848",
|
||||||
|
"blk.7.ffn_gate.weight": "dbd7df6c5ae5eb4adb859f0d36453813a4e289a359a1ba8f72d67fcbf21c3e22",
|
||||||
|
"blk.7.ffn_up.weight": "de68380a334b4c5cfd4c318b0e9854aec59bd79aa0f0c30af3f56414f83482b0",
|
||||||
|
"blk.8.attn_k.weight": "7303c4e4480abc72a7ee271811311199245fb5c2ea27a2bd3b8cad3a53a03c27",
|
||||||
|
"blk.8.attn_norm.weight": "2e3d1921898d1b943ce1a1b6818546c8b471d6d542da24f51a8b514b8c3dd4ef",
|
||||||
|
"blk.8.attn_output.weight": "30421520887b66bf97a18dbcdc283bc8d0b60590b612fd638a319a6eae923227",
|
||||||
|
"blk.8.attn_q.weight": "73e064d5433c9b500068a1c31744dbd53f4ade298fb450a0e8c97f62cf1f8a8d",
|
||||||
|
"blk.8.attn_v.weight": "27e21f8b9a9a8533e8178ca34a72aa1d786393d57302b7806dcdf3e51de511a8",
|
||||||
|
"blk.8.ffn_down.weight": "bf694bd8e00047982108000e7b3dee7b225db8b19abc595e5697b6bbefd92e7c",
|
||||||
|
"blk.8.ffn_gate.weight": "d55fdbf8606d9141b774b0500c58944fd1253b9e69d1f765eaa9a680b9f2ca40",
|
||||||
|
"blk.8.ffn_up.weight": "1ae3f580655e7c8e8dd6c34fa4ac574fdfc5e3f1a8536da0c5442d3a2976f0e7",
|
||||||
|
"blk.9.attn_k.weight": "b18080626012d8aabcf78542d6c7bf31c712bf55a70172fbfe173fcf34481036",
|
||||||
|
"blk.9.attn_norm.weight": "2e3620620dc09998c6d3063a7d5de5433fbbae8c11e5b00d13f145d39140e162",
|
||||||
|
"blk.9.attn_output.weight": "69c3c0e27ef1c0fc933eeb7b612b70909f18cde238873c0d576a2ba9714ef174",
|
||||||
|
"blk.9.attn_q.weight": "68330e5aa28a28873c9a6e67f032186ef651df2df5844e0f27094ba349fbe4ab",
|
||||||
|
"blk.9.attn_v.weight": "3df8d45a102be082d0793a51cb82aa62a43cd0e9d047ba4115ca0f2414b39325",
|
||||||
|
"blk.9.ffn_down.weight": "1d6cc162b73745b135b4f040a0aac3c06d5135a3dc5b2421e7ee2af48662fd7f",
|
||||||
|
"blk.9.ffn_gate.weight": "034a9d40fb1e32b534b45f4bccd65cbe43c4a6a3f5d01132bd245ca0005de5fc",
|
||||||
|
"blk.9.ffn_up.weight": "c838c38d0e1a0ac0da17eb2a66023ed31929f07d8fcfe1cc546df26096c91f0c",
|
||||||
|
"blk.10.attn_k.weight": "a78507cb72f744b86ceaa032596e74e5571c822d0226d334881169addb32cbd5",
|
||||||
|
"blk.10.attn_norm.weight": "35f48d0b28ee0e6b4cad4e983925737562d64824be5b168b3e26df3d6b260cf1",
|
||||||
|
"blk.10.attn_output.weight": "53712db06796de39b131323e7abf9a58551b6d52da6db66a471580386d396252",
|
||||||
|
"blk.10.attn_q.weight": "efe08429ba196026b81cd1c471e1c7418afd9e966659feb3936b674aa0803b58",
|
||||||
|
"blk.10.attn_v.weight": "7ec6055e134f89da0cbe79ec9f13ef2e442ac584b1f03c3e13e7d0cdad0078bd",
|
||||||
|
"blk.10.ffn_down.weight": "37e66af4bcd1f3079e841e892255b8255070655901864ea3a8c602a7f681a640",
|
||||||
|
"blk.10.ffn_gate.weight": "1825282bc34830d371c6edcc3c1e73e6ecc1e10f4aea0122dbb7acc1d6f7b1bc",
|
||||||
|
"blk.10.ffn_up.weight": "819b3b276a4d4c14a35ed6682d5ef18a5e8ed468e5ce3f12e8c75ec18ac20ec4",
|
||||||
|
"blk.11.attn_k.weight": "5327e6a2af82dfff0619a14971f5864a15553c36fead84e1af42c7630f2729c6",
|
||||||
|
"blk.11.attn_norm.weight": "fec363b3c4a43036d2c635fb8aa9e122dd87ee79811839f2f6cd955be3373e7b",
|
||||||
|
"blk.11.attn_output.weight": "ccf7b38f18ee8798b8a6a35018e2df3eb3e007de62876befb68025dd66c79763",
|
||||||
|
"blk.11.attn_q.weight": "da8c4a1c824ffe174e39f126cd72f7ef83c56aff1259d452a1212de80f98f5e9",
|
||||||
|
"blk.11.attn_v.weight": "d17ae6bb77f03982b55d341eb67acb5969e9ad3da5994b96eafc09793dcfe3a0",
|
||||||
|
"blk.11.ffn_down.weight": "a6bac521e2791345f22c57205fa1c2f2f687794dfd24d0e98d50ae0d0eb6088a",
|
||||||
|
"blk.11.ffn_gate.weight": "5ed902c488cb51ba5635f3df08258c5f84f31a679a00211ea5f9d8b824ef6d9d",
|
||||||
|
"blk.11.ffn_up.weight": "ee9f1437eb890d2cf9df2574afa1cecf20aafdd847cd75b152d7eb74419afd34",
|
||||||
|
"blk.12.attn_k.weight": "5a069c06e1019b0f889088e67458f7a11ec77fa190ada6069e46211f62219947",
|
||||||
|
"blk.12.attn_norm.weight": "194d7e5fcc8c49aea62daf1940532419cf3c505afdce6be377286b677db5db8f",
|
||||||
|
"blk.12.attn_output.weight": "6534995fd4d6fecb55e317add4b1723aba4d825e1e9471d0b08813dfdc247176",
|
||||||
|
"blk.12.attn_q.weight": "4ab51ca519b5995581fa34f846276feca3b907ef2b51f192f6cc0b3263c3f5a2",
|
||||||
|
"blk.12.attn_v.weight": "5652ca3fa81ef9a1ac1543d71fc6813f8517f8ec54b25c701f6f98061614830f",
|
||||||
|
"blk.12.ffn_down.weight": "4b2c263f54c88516b8eb273bb8d9615b01c5c8b484dc70358adb91b50b300edd",
|
||||||
|
"blk.12.ffn_gate.weight": "8f50c3c3e3e8568991d6c1b0e74b500cf4f208e7700bbb8e87c3f6a6d359b6b5",
|
||||||
|
"blk.12.ffn_up.weight": "1c1a581fec1fbe959e1427fa513f400100b5e1ee9d83932630be9905fb49c231",
|
||||||
|
"blk.13.attn_k.weight": "efd7a38c46f08d8376d82974f33c644e3a02220e142d63b1704718699a8a884c",
|
||||||
|
"blk.13.attn_norm.weight": "d28fa4f1bd75abbd063b0e622e08f579c89cd0c0c5ce63c1952ec9f944f8ee13",
|
||||||
|
"blk.13.attn_output.weight": "71e0068a639288718bdb70a6cfdefd50bc8b3ec3993347a65129e70001ca5827",
|
||||||
|
"blk.13.attn_q.weight": "b97077adc92cff07a2e07d80ee38f214ad8713571c69cd5c70ebd43dc501ac87",
|
||||||
|
"blk.13.attn_v.weight": "79b3e2749ab4b459c81e96e322b215f1e8af645eb346e176c326bd00cf6ed2fd",
|
||||||
|
"blk.13.ffn_down.weight": "9f8687d11effa1db7cfecf7bec5631734bcf2962aad74a9f519144491e08ec85",
|
||||||
|
"blk.13.ffn_gate.weight": "7d14dfa0543852e7777fe8fff29ca533744cbcf1ebcf10067e5adfc4eb345e65",
|
||||||
|
"blk.13.ffn_up.weight": "852b9527b97fdab211ff3f832a660ee1d93ccb56906144c50f01319a6e8ee615",
|
||||||
|
"blk.14.attn_k.weight": "79e926b20f36f66d58226cb358881f2f68ae7b468787d33cafae5110287a14a0",
|
||||||
|
"blk.14.attn_norm.weight": "97d481b63deb0df6142c2c6cd23043720c62eb609e390f47a7113751c79974ec",
|
||||||
|
"blk.14.attn_output.weight": "aa6e94d7176d5c79fbb89b96e5f13ce75702ce3dd23ee52986446da436a6c3d6",
|
||||||
|
"blk.14.attn_q.weight": "214becb6d1bb460da9fb8ace0f99b9a5afa9edf7aa7acc19606c7401b11d6305",
|
||||||
|
"blk.14.attn_v.weight": "488b0e6d7f1a7a2ed0972aaa6d10ef9c775ee5373460324efcf5b3e3da9311df",
|
||||||
|
"blk.14.ffn_down.weight": "29c7ad16cf9542e30996a1a01ab95b844533b28051f04cc7949c371afb796471",
|
||||||
|
"blk.14.ffn_gate.weight": "b7ef208f2b054803665b377f5a5980c122c026841809cf855c6ba06d1c3a885a",
|
||||||
|
"blk.14.ffn_up.weight": "76a5cc28100748d79c4398ce7b9176aab4d661548b6293a82f99144812e5b70e",
|
||||||
|
"blk.15.attn_k.weight": "a6b8f9e98ab878fa7ebc5d080978ebf2d050acc2ab2fa8ea9188eb10e27702c8",
|
||||||
|
"blk.15.attn_norm.weight": "a26d07a9752d6dccb68e3a8a2a49fd0752cdd0a415e05547819bc37d9ba63d5e",
|
||||||
|
"blk.15.attn_output.weight": "c63616c69048ccbee801e05be4f56d21fda21aa0cc470f41d57c31b4d9283a4d",
|
||||||
|
"blk.15.attn_q.weight": "fd595a67bf96c6ba16eb148a9d02fa52fa3c1d33ed10be28a08f851409fd6e64",
|
||||||
|
"blk.15.attn_v.weight": "1c5c9d33fa07c05d5f4ed0032c6c4aa83d863f0d31c94a66109d239dcd03cea3",
|
||||||
|
"blk.15.ffn_down.weight": "585ea62ab8aff7d7d212ea5c1a03226fda6b68370c890b776834af70c948dcbc",
|
||||||
|
"blk.15.ffn_gate.weight": "a13c63f86f879b03a573d5dd2a25cfd1f4dc73e8132e6454ecc23e538b4cdf6f",
|
||||||
|
"blk.15.ffn_up.weight": "f7112450f57c12fcd511f049e0dc0b541625a107a7901c3261ed9e984299f65c",
|
||||||
|
"blk.16.attn_k.weight": "2d2c8b11dd71fba6d1c106aa1673c113a5448653cca7eab897c8739212ed5003",
|
||||||
|
"blk.16.attn_norm.weight": "95c2ec7be9469690e18a9a1779684acb3e9da44b13e263a0da840305646fbf8a",
|
||||||
|
"blk.16.attn_output.weight": "31a65046e677f54dae654ded4e733479fcc0f7283d83076b7dc7cbcae8528230",
|
||||||
|
"blk.16.attn_q.weight": "bfc6292b9c6d49b7118d08060242a138182eb182d136ba5dfaf469437c16081d",
|
||||||
|
"blk.16.attn_v.weight": "68f81d037340217d87c7853ff4d6edfbc46d9e827ee6d5bff7c3f6238e3a95ad",
|
||||||
|
"blk.16.ffn_down.weight": "bbd6629691950cef4d5113e1c6670e91b216a9b872cb92cee02dfda4d6c4f7b8",
|
||||||
|
"blk.16.ffn_gate.weight": "63cb56f282b7401ed6c76e5bb6fdf1bf68a64f9af0c82c014209b55bcb5191d0",
|
||||||
|
"blk.16.ffn_up.weight": "b54f39a2541063cbfb6f713aa81c3b69a04100e999aa2ebbeec195dc382eceec",
|
||||||
|
"blk.17.attn_k.weight": "3d9ba49799cc56664ec30a002bcad61eb651294212a68c3ddb573eb042aef5a4",
|
||||||
|
"blk.17.attn_norm.weight": "42ee0db4b9d63257bca0012a30b12737ead1caafeb5ed3d93c8f48ffec4b46de",
|
||||||
|
"blk.17.attn_output.weight": "a38fd100f05c9041c592bc739e287de0b10d08ef2bda41a879225bdca9002f71",
|
||||||
|
"blk.17.attn_q.weight": "8a3bee285b0180a9eb35662e449ee4cbe16d992bdd48fb3a94bc4a347728cfa2",
|
||||||
|
"blk.17.attn_v.weight": "d7f8f1b8b863494ed4392a1656775912e9b264ad36016547b12e832a1d6757d6",
|
||||||
|
"blk.17.ffn_down.weight": "bb7ee58f61da8630972e25b621996fbe8ec06f4dc9ab1e268ab5b120c526ca28",
|
||||||
|
"blk.17.ffn_gate.weight": "6b652dbf167fee09a45ebfd78d500ff6548fb2756dbe5343ffec3f7e6207179f",
|
||||||
|
"blk.17.ffn_up.weight": "3b67f727e55e742715de978fab80457781e7a3762bc48f79d13b45dcb8de664c",
|
||||||
|
"blk.18.attn_k.weight": "ff7fe57c57b90c6fcc0aefc39ec24593c3a7d1ea1c23770480075a015450e0f5",
|
||||||
|
"blk.18.attn_norm.weight": "1d40faca082d2633ef0ccf19e121870dd6c7c3e2154607c7f3543fa96e99cb2d",
|
||||||
|
"blk.18.attn_output.weight": "9adfecaaa397a92db4687efd5fcabfa0daef9e6b0493763b7ff5ebc185c43a6c",
|
||||||
|
"blk.18.attn_q.weight": "ad1803eb9b291948639277afe981e666b07167eb3fcae903ba5b73bf86d8f50b",
|
||||||
|
"blk.18.attn_v.weight": "308cf23399adccf27401a4ab60d74dac6fb9d4cd4b9c5940d9145118d1881b34",
|
||||||
|
"blk.18.ffn_down.weight": "7de4ac9a561fb580619b745687dfd7ca8a69ef70471dee978741b80e9ff7bead",
|
||||||
|
"blk.18.ffn_gate.weight": "0c66970f696b33bd5ee8f1f2fbcb41fd78fa5ccabdc927e11a4d5a4089f19c69",
|
||||||
|
"blk.18.ffn_up.weight": "66a42e988e8a1f468fabf976c48e9e4bb045eaac6916ef16555ac101cd674abc",
|
||||||
|
"blk.19.attn_k.weight": "a928ab50390bacbcebe2e4b66922498134ce22d7b93beaa87d6cf4ab52eb7174",
|
||||||
|
"blk.19.attn_norm.weight": "b4a02c55b46c2a96aec9c64a254087cf48e6c1d4b6f31782c77a46fc4daebad1",
|
||||||
|
"blk.19.attn_output.weight": "b768319c641dff1eac5d1f8ceb960c9899c795bf2b24c1d6bf70aa24fda45f77",
|
||||||
|
"blk.19.attn_q.weight": "79ef3f57d187d3954a26362096e1b6c222d76f537dff73e034d6e9999935b8bc",
|
||||||
|
"blk.19.attn_v.weight": "ce13d6b13e24fcb2d5bc6a2662e5bd295b31b12db10a6d0307f86cf29b8d5001",
|
||||||
|
"blk.19.ffn_down.weight": "cf90d7e2137482cfd50934a8223ad774621d08554969da80a9712df5e6227eb0",
|
||||||
|
"blk.19.ffn_gate.weight": "71ce30150f003b6eeb3bf7464e05b6ae615f135110d8e47f0a47fd973e537c0f",
|
||||||
|
"blk.19.ffn_up.weight": "7f92aca0cc29866633feec701ec01a85a8ee2fd4e2b9630173a6cffb1d9d50ee",
|
||||||
|
"blk.20.attn_k.weight": "a2df23159d6fb74ef28e14b61028fe8b00a693a2fc9234a980be74f20b958682",
|
||||||
|
"blk.20.attn_norm.weight": "c6cd5f1b096fc5efa4eb59ca1c8c4bd28730f3dcedd59a63601663eccc6724ed",
|
||||||
|
"blk.20.attn_output.weight": "896a8a166d0f006d4b09867ae4345426303cbc3fb13a18d3d4e1bde00f16dbdf",
|
||||||
|
"blk.20.attn_q.weight": "01eb79588fe61baea0da43e99f4dc5939590e1bafd01e12dadb8326f102bfea2",
|
||||||
|
"blk.20.attn_v.weight": "bd39630fdd5a7c859ac1addaf53e63faf524c3f32f5f4896d86b6e746b1d5c06",
|
||||||
|
"blk.20.ffn_down.weight": "0304a5d39957a0e3f031c4bcc4549a135d396c8d97c8d276fd1c823ce86560c2",
|
||||||
|
"blk.20.ffn_gate.weight": "117b79d595b1dca0c8b37586beaecc4d84411507276212dc286cde7fc36c9bef",
|
||||||
|
"blk.20.ffn_up.weight": "6e799346db145c125f01783539749d3828fcc451cd4f10c5352f047a47e28714",
|
||||||
|
"blk.21.attn_k.weight": "1c37e4c0664147e775bb006b226b9553e3421140cd96288ea755f81731ab80ba",
|
||||||
|
"blk.21.attn_norm.weight": "00ae783a29000ccda5e4bdbff03df0752fb82805dc3f9b987500ebd80714476e",
|
||||||
|
"blk.21.attn_output.weight": "7588b84f9fb19f15095b5265c60b4a4e7ae74bcc47d4607dfa5d0bfab6f136cb",
|
||||||
|
"blk.21.attn_q.weight": "a65f1c0dd06d45bb97532d3e932689c1eecfe7359089b39174a96a149335cbc1",
|
||||||
|
"blk.21.attn_v.weight": "4220b77e7d5e8709b4eef33a679b5dad11f297085ef44c9977f9e54ef08f7a2d",
|
||||||
|
"blk.21.ffn_down.weight": "b8c082a0530d4b5328e67db0df84c5498f2af956de23c639fa0198ffea853950",
|
||||||
|
"blk.21.ffn_gate.weight": "cd1b656ee72d00e9835ef667c19ef89a88de261eb8eb7c0e936e0f9ddf83ef9f",
|
||||||
|
"blk.21.ffn_up.weight": "dc445f73e36ec7a3bd86884186b728f8e0187f32848c3b8b69d4d41f8571bf31",
|
||||||
|
"blk.22.attn_k.weight": "e37cf0b893ec8b9ee8c78dd139b8d9c45cb997a3bc0c3d93a70ca1c3f6af8859",
|
||||||
|
"blk.22.attn_norm.weight": "248a27838d3c46cc03a5c312facc84e2e0e2c990ef8401e93da25918497f88d1",
|
||||||
|
"blk.22.attn_output.weight": "fc191a18f6d18332c66761f7ab28008bfe295dd1f5c8741a2488442f9e00d0f5",
|
||||||
|
"blk.22.attn_q.weight": "4b193a2ab8bc2b085db18f2bf3eeba26e02b537b2cdd738160c8f14b165d0f5a",
|
||||||
|
"blk.22.attn_v.weight": "7a60ce5ccac7e045e55ba1e1e85bd2a0f93f8c781daee96c5223665e22f0c666",
|
||||||
|
"blk.22.ffn_down.weight": "e0a34fb4244e2c7168f3dbaa1904c15d339ec39999cdf27128bbaf619ee0a237",
|
||||||
|
"blk.22.ffn_gate.weight": "8bac872d4b8549c8812f927efa309f1792b524f33601095fff61b826de5a5615",
|
||||||
|
"blk.22.ffn_up.weight": "b67fa2b94dd901b6ec64c0853ce8ca2d86fe9cb1cc6d2f15fbbbe0e691c0c648",
|
||||||
|
"blk.23.attn_k.weight": "2c32e66ad01942b819ac09a197c71579fe66f02226a264fdd72ad1e02c67a27e",
|
||||||
|
"blk.23.attn_norm.weight": "825fdc94deb439cb93c713eeb077c1052b90ed658d6d464fc4ad3d611e911d48",
|
||||||
|
"blk.23.attn_output.weight": "95ca6707a95b8750b0c7c5d379d368f0f2e7ebef631954e7d4d8ec0f41f13a3a",
|
||||||
|
"blk.23.attn_q.weight": "6eccc84faca5fac015d1b26e2854501edcfd292a302228fe14cf99f5eb59a34b",
|
||||||
|
"blk.23.attn_v.weight": "b343ac3d226040f1033ee049668aa1d89b1774bc18431965682e5dbdce78ccdc",
|
||||||
|
"blk.23.ffn_down.weight": "9fc599befea8d3b1e342d564a110074f66d2542df406c4b90b6bdc5828fbb2b2",
|
||||||
|
"blk.23.ffn_gate.weight": "488556c1b0c9f0b20b0c99b4bac2e0f4046b81edb601d7b91e7e5b3bab47d667",
|
||||||
|
"blk.23.ffn_up.weight": "1088e291d7008dd9c7c2dd6830af686a8a84b724d123a016209bd5156d6898f1",
|
||||||
|
"blk.24.attn_k.weight": "a923fbe35e61e009a53927d7828818e0592bb737d6a1106c4b0b5a1efc367e07",
|
||||||
|
"blk.24.attn_norm.weight": "9b51aaaa939cefafdd9b13a7e5b74ac7fa2d603427e55a16a909d6f3f353750a",
|
||||||
|
"blk.24.attn_output.weight": "1beb2baba56f8409466434b037771248c2f620ec5f53e15f44c271d5a2d9ecf4",
|
||||||
|
"blk.24.attn_q.weight": "4b0194fe5bfae0c6bf6131dcf8cb6e2b994f6ea10b27cb03574f0f4f8cc0c950",
|
||||||
|
"blk.24.attn_v.weight": "6ac34b1ab0f66226d85bca1194a7c212cd93d384ecbc8b8395de48aec0970a61",
|
||||||
|
"blk.24.ffn_down.weight": "5508f74cb732a662c2936b32ac5e90742d172b9f961a747b0e5cba0e5906a89d",
|
||||||
|
"blk.24.ffn_gate.weight": "095e39b8584403835f9bb1ac33e0e81f54175575e4800273d281b845bff381e7",
|
||||||
|
"blk.24.ffn_up.weight": "2d43ec21637dda12973de367b0113ee9840b0d815bf6fce042f7c3f270b0b530",
|
||||||
|
"blk.25.attn_k.weight": "9e2aee029f3d2c7f67dfc7926e72c8228fb978382c8e5a4701bbf82c93801419",
|
||||||
|
"blk.25.attn_norm.weight": "220cd7164fb4cdbe22d26058e4153b26c27c7b5ce2bec8e95bf2c0ea08d23103",
|
||||||
|
"blk.25.attn_output.weight": "a17f4a5dc6aa51f03dbd75602d98e9491767c205cdc2c3a5f8667fc54bbf7c64",
|
||||||
|
"blk.25.attn_q.weight": "f60827496835c440c794bf57ce9780704d10a59d8229886bf75ebb18900ba4ef",
|
||||||
|
"blk.25.attn_v.weight": "9cac217e9e9f4f4c85f14ee51165a77c580165bd4a34b202389169bbe61a1ced",
|
||||||
|
"blk.25.ffn_down.weight": "a0f36949b663e80849581dfb71e7babcc73580793bbcb0c80ab26d5a6e000359",
|
||||||
|
"blk.25.ffn_gate.weight": "df4d1be4d50d6afe5ad3ef0d0e0fac76a33e85c963dea769641d612dd53e7d13",
|
||||||
|
"blk.25.ffn_up.weight": "992da76be762632e25ebc5ef4d03728eece1b43f7c4e31827df19ca724aea694",
|
||||||
|
"blk.26.attn_k.weight": "34199ff856ac32a500c754539d070258574192a34ecba87a182897cb59fdff52",
|
||||||
|
"blk.26.attn_norm.weight": "a8e9dfb2dae5d22b5c0aec5f3675991c0e3c3e6a44153db2579136b73f456e00",
|
||||||
|
"blk.26.attn_output.weight": "1c4f257ffb0d7db0f11cfb275e38b4af736917b43ad82de1badce3f1d227da4d",
|
||||||
|
"blk.26.attn_q.weight": "33d55786274c2e718cf61e8fbecf3dfa5ee0c208f0b716d42b061f55459acb3c",
|
||||||
|
"blk.26.attn_v.weight": "684b636939cd4ffcfec5a6238a0790ffa43d853c95783af9b9e8275e74071a7a",
|
||||||
|
"blk.26.ffn_down.weight": "89d0bf066db154e6d312b5433aed1714f6a28b40f4c52e3e1530ee07703303c8",
|
||||||
|
"blk.26.ffn_gate.weight": "393d649bebe5e2940e1b043649f6c860b4b8b9f380f30e9da1744a830f358156",
|
||||||
|
"blk.26.ffn_up.weight": "179edc85ababd9d8440cc6093eecd1004290aa1cb96434b26ecf7585b6cca17b",
|
||||||
|
"blk.27.attn_k.weight": "334841445a7f1e14731b08f56eb0b1f0938c63823d28bc6d078c4c5f05b36f19",
|
||||||
|
"blk.27.attn_norm.weight": "57344471bbda2e9deffdfdb2dd05a07aa47f8761e24de53525588639145bf551",
|
||||||
|
"blk.27.attn_output.weight": "506126af9ee54b535d49f97e36f630e74834f480329f098d6d62e96246d8d65a",
|
||||||
|
"blk.27.attn_q.weight": "dd984df1acb4783849e25ba7ae378bfd385cd9efc540fb798cd5bdd873f0118f",
|
||||||
|
"blk.27.attn_v.weight": "b4b3fe9a4455d34c297ff20a2f537b647cef424741d840a747b265f23d320ac0",
|
||||||
|
"blk.27.ffn_down.weight": "621fdb185ba0d35ba5476dae73d2c81ec1482a0e878d5bfd5c3b29fe837af013",
|
||||||
|
"blk.27.ffn_gate.weight": "e4fbab45f2ec506fa374103251a0bdb7baa6f576080bdd796f3e9db92098e08f",
|
||||||
|
"blk.27.ffn_up.weight": "a0c57e463e988002bbd6a6c6792baa21a65e6f89ae303a2c301951b0ae6e4bbe",
|
||||||
|
"blk.28.attn_k.weight": "bac36cbd52ec5056841663865e1291ddab4b47ef9a2544dd285d4503bfb0e4a0",
|
||||||
|
"blk.28.attn_norm.weight": "5774a9df2bbb2e86d1f70179c7b92d81e1f401160148b3328fb64db6646a5425",
|
||||||
|
"blk.28.attn_output.weight": "e8712622d1569557000c75f26c3f55fad267fd300463c2c2cfe3afbfa1c8f908",
|
||||||
|
"blk.28.attn_q.weight": "11677751fddee52cc739699c02836f7be54d96038be4240be5d4f53d00161608",
|
||||||
|
"blk.28.attn_v.weight": "e5ee459b8958d65e1445997b9aa1e90e2f5d17761ebcf5357313119a45322507",
|
||||||
|
"blk.28.ffn_down.weight": "3934518f9f85292da8475fe38a8edcbfc4e24ac56c351b472d6351f98750871e",
|
||||||
|
"blk.28.ffn_gate.weight": "6ba735d57e98d0847e487f25ffaa25256deaa8abec76f428cb70bd9774279d83",
|
||||||
|
"blk.28.ffn_up.weight": "977fae6e1e5353114fc645dd98429464749758765cbc6e6457593d596e57850c",
|
||||||
|
"blk.29.attn_k.weight": "8122a457307d580ad6f1e0acea09a2f593d97f595ba0d6737f5fea16d2433642",
|
||||||
|
"blk.29.attn_norm.weight": "d626f721e05aa1202439b01027031d4caf1adace61ed37870a277cb6297c77cc",
|
||||||
|
"blk.29.attn_output.weight": "7fb7122ab1b6b1e6615ca746897da27bc52c92cb70d3147183cdde61795b72b3",
|
||||||
|
"blk.29.attn_q.weight": "be43e94ff6b6e391024dc824101efa0ddf4005d5b002ac26cb03765c0c73c2fa",
|
||||||
|
"blk.29.attn_v.weight": "af93c85ebff908f74f9935b81bde0516ca487c84139868a1ce079c3ae20036b1",
|
||||||
|
"blk.29.ffn_down.weight": "39dae12340ed3120bd19c495fe0872b559613641e41fde69d02d8631900b84c0",
|
||||||
|
"blk.29.ffn_gate.weight": "36fd482439840ef197c9f3b8905d86acfcea49bcf018544106ca465d4bf8d5c7",
|
||||||
|
"blk.29.ffn_up.weight": "5243fbdfdc1e2a1dd84b6210a9869d18a014db9088897e345240cdc99990bd5d",
|
||||||
|
"blk.30.attn_k.weight": "948f263616bd3788b2b968baafd69b9c5bd1b77578665f096c4b7e247b4cea42",
|
||||||
|
"blk.30.attn_norm.weight": "e168df981e744874ff303faf2eb470e5f6868c2040ba5f383f6c5148669975e7",
|
||||||
|
"blk.30.attn_output.weight": "4cf0ccca04b792573b756655a24fc89cfb1f272da8305633f0bc66ef14990b93",
|
||||||
|
"blk.30.attn_q.weight": "21e07d6cba6c50d65350289258209717174a13c42be57e8141d69712cbaf32c1",
|
||||||
|
"blk.30.attn_v.weight": "65a8ca29c7237b3182ccf03e2fc94e84f9a53d0e160fb679ab401c853170dd9c",
|
||||||
|
"blk.30.ffn_down.weight": "8b00500a6d00d84058f6658ee1d6f06fb4fcae2f90d4341792259362923b3c13",
|
||||||
|
"blk.30.ffn_gate.weight": "5bc0e19ab7a31b50ac2118ad1b36e31055271a322cd8ff661d47c3ac0210703c",
|
||||||
|
"blk.30.ffn_up.weight": "f37a0561955725bd59ee2d064fa9f4e00a12a1b620b624db3bc3add5330bc321",
|
||||||
|
"blk.31.attn_k.weight": "9a5663edda227f5d87533897146764f8e8a7481b9e71fae197c39204f8463221",
|
||||||
|
"blk.31.attn_norm.weight": "060a4f438a1ee5e220b5b5278ad2f5c085a428bf38c515766781815597c87529",
|
||||||
|
"blk.31.attn_output.weight": "6ada5d3cad9dea4780ffbb43302bb6ccc2f24eddd0fc4f5f84c9ce0fc0c6e5dd",
|
||||||
|
"blk.31.attn_q.weight": "bb5d08c08603907981ad388d5d8b70fcc9b98034ba264b8474c8890cc0297af0",
|
||||||
|
"blk.31.attn_v.weight": "e01b4252ea9c6a889c32b21144b441a347464d04536ef4f6572425be55759796",
|
||||||
|
"blk.31.ffn_down.weight": "8ba4d679c36e93ba65ba03180385ef35ea86b3b7cdf2fded9df59369f1c09630",
|
||||||
|
"blk.31.ffn_gate.weight": "e5b41dc93645f8b5e8eebae3ada3ea43a18f97ce2654228655170b07b463ccb0",
|
||||||
|
"blk.31.ffn_up.weight": "25b88cdddc8b547af294ed107d3d1312e90b983cae87936fa6062ecd8ea02539",
|
||||||
|
"blk.32.attn_k.weight": "4bcf86dc0858c8ca2fbdf6aa76674d43eb698f78979fdc1a38f556a7af1facc4",
|
||||||
|
"blk.32.attn_norm.weight": "cdcc12f3b8b9773c6722736bfb748a2729230b21478cbcc4104859d3148df815",
|
||||||
|
"blk.32.attn_output.weight": "d43f1196822995ed89a9365c97054753a8b30ce20b6e273c8edcc42673a1e141",
|
||||||
|
"blk.32.attn_q.weight": "ebf2972bb3865cbc5be4840113a322089752038344beab2a0122c7cb4fb399b6",
|
||||||
|
"blk.32.attn_v.weight": "714db81704ff34fa137512903c1013acee7877467473e46600728b9240582eb7",
|
||||||
|
"blk.32.ffn_down.weight": "2cde3da1258bb170a79d5d3cdfe10c86a71eb34b77da46b74c5ed71e7f4fe274",
|
||||||
|
"blk.32.ffn_gate.weight": "c7e1ed792532613ff9d4e5834b6536e2e0f47df2303bc0fdaa90aac0c1f4e8db",
|
||||||
|
"blk.32.ffn_up.weight": "d8d6f13fe66a716e28f79101a29817f0c0d6f99969a6f017d51bafd1a16c600c",
|
||||||
|
"blk.33.attn_k.weight": "a0a28f6cbca88da00cab2ca37094d9b0503bf9defdae77b91895b911c408cbb6",
|
||||||
|
"blk.33.attn_norm.weight": "0251200c24cc8445607ace6dc8c5aa0566567997262b7cca53a11ac23cc564b2",
|
||||||
|
"blk.33.attn_output.weight": "b2423205bdf6a1096d43c44d8d12f1a84fcd4e1bb70fcf6dc8542b8b8a71a13c",
|
||||||
|
"blk.33.attn_q.weight": "00b425c3ef71065ce5e0234e702bf38143b4952da78a85f52ab2c2e3073d97ab",
|
||||||
|
"blk.33.attn_v.weight": "035edd2335df816c42c765a5e66b9d9b9e15a822a8dc1863508145499c942c14",
|
||||||
|
"blk.33.ffn_down.weight": "4894a923a3db75bae4496ba3ce5f28796ad31fe33996a066271fb8654964310e",
|
||||||
|
"blk.33.ffn_gate.weight": "8f6c819b8bbfbe3357fae89e1ac5a3d58be85b3b04be3bacf7b62775869046ff",
|
||||||
|
"blk.33.ffn_up.weight": "257c3544b5b544fd5d839665bf5caf107a329b59dbc3751efcaa24ae63c56179",
|
||||||
|
"blk.34.attn_k.weight": "b6cd8bba892e38dac4a2ebc3ba1bce49e71b967fc436fde30c6d76f54a18935f",
|
||||||
|
"blk.34.attn_norm.weight": "2b3c8e60a064cba9955752bbbbdd92c71ba5c2f1bd721097bdbe88b5abc68787",
|
||||||
|
"blk.34.attn_output.weight": "8cc272551c9aaca9db5a660c6927bab94a0243d74a30b2bc165f06bd577714ea",
|
||||||
|
"blk.34.attn_q.weight": "74b561eb4792484e6a94b58fe2583848c3ae28ff2f1bf3d02939a0cfdfa49990",
|
||||||
|
"blk.34.attn_v.weight": "dba19e24ff05154dc5a1f55c023729303a583d13d68732ce22ea74d4410dc8f0",
|
||||||
|
"blk.34.ffn_down.weight": "76eca5dfeb274c35774e0bf9f22ee420ed9085c8e99aa2cd5a236e4918b44c61",
|
||||||
|
"blk.34.ffn_gate.weight": "9af0862d5fcbc24732846488e653db8242a467765c0cdbc00332b3a40256b4a6",
|
||||||
|
"blk.34.ffn_up.weight": "2a03126bf73587eaba99ece2066103d12e47bcd4ce30ff6c17b2f383b81d40df",
|
||||||
|
"blk.35.attn_k.weight": "52513fc0cd4e997a842729af7d21dd09399bce0a339558374738be266d0fa2f0",
|
||||||
|
"blk.35.attn_norm.weight": "e5281fa911964263ccf1630b14762edbd41d0b9472d6ec695fc600fed4892c35",
|
||||||
|
"blk.35.attn_output.weight": "b391d6705d5dc6f48326b5fd16573f679edf64109d86fb729a498819676590ca",
|
||||||
|
"blk.35.attn_q.weight": "d16446921966db9b0e0539626ad22a2511ace780e59379d6a4162d8c5441440b",
|
||||||
|
"blk.35.attn_v.weight": "9d8cdf23ffdb0c5c74106843390b94b24c9f33ef0eb9998d39f78c73390101ea",
|
||||||
|
"blk.35.ffn_down.weight": "938eb6301f7bbf162d7dd965682a5ed11d0a4a530c6fedd7e5469ce80012fc17",
|
||||||
|
"blk.35.ffn_gate.weight": "5ad84f5a0c8edcfea1ecf1a3e3d21d85ceda0c4ad9e3c6ca68885eeff8ed3c2f",
|
||||||
|
"blk.35.ffn_up.weight": "1c4330d9dc71bf4c98812c34356c51f520f47610a534152aa6d29284b758090d",
|
||||||
|
"blk.36.attn_k.weight": "ef720655e5ca2465f13db2dfc4732fb4ef2c9d53acde52f514fd4f301e974081",
|
||||||
|
"blk.36.attn_norm.weight": "88f4b9310b3c8c2644e3029160cd35678c79dfa59280430e03f5c29a6fe84a58",
|
||||||
|
"blk.36.attn_output.weight": "aec6f915fffd7bb72cd783273e871b4f09605950089d45e72059d1316b6c4b01",
|
||||||
|
"blk.36.attn_q.weight": "72f9408a2405d42f8db6ce5fcf1d26a3660b6f225fc60e77d0277109cfcb82ed",
|
||||||
|
"blk.36.attn_v.weight": "0f3b3d851dc44b3893ef53f6cca5b4acc9658bacfe1cc2d13c3d704ddd409b67",
|
||||||
|
"blk.36.ffn_down.weight": "470aec48ce8c5129a6654d9fd26fcae72776f9fc1429a8bb05818072a876475d",
|
||||||
|
"blk.36.ffn_gate.weight": "7f5f296d09cf55679767b5d15de3eff489c456782119f25204be4b1647f18dcf",
|
||||||
|
"blk.36.ffn_up.weight": "b7ef74a1f7ffb4982711d93f1787be3a70edc3d2358d5203c41d8900508037d4",
|
||||||
|
"blk.37.attn_k.weight": "c4ffa5412e4ff2dcfe1aed991c1f54169fd171a4c7638e4b9f21a1ca64c5e1d6",
|
||||||
|
"blk.37.attn_norm.weight": "4eb6c888d841cccfacf5b963f8611120f6ff24b84af0b5714fd9ab36dcda422f",
|
||||||
|
"blk.37.attn_output.weight": "db2a7bbf9682f9f6eea672dae8e150738f1bf74dbc80edc7022017a3f040c8ac",
|
||||||
|
"blk.37.attn_q.weight": "e38c0462aff139afcbab289189823527e453abc9e541154adde5e7af88cacf0b",
|
||||||
|
"blk.37.attn_v.weight": "952eb2492ed452a72f96bcc12d4b2affad9dfdf46ee39ce4a5d7b57a5dc301e5",
|
||||||
|
"blk.37.ffn_down.weight": "25f23a8fbc44febf6dc4848fd7fe03a580e2822bd3b3b5a51f4990826bfe3e4e",
|
||||||
|
"blk.37.ffn_gate.weight": "707da5eb40118b035305d3262444382351f170a20a537386a70e90c5a83a7817",
|
||||||
|
"blk.37.ffn_up.weight": "d2d2ba5cfc4ef47338dd7384219e22bf030a5a2209e0354d88f5bbaaafd20e87",
|
||||||
|
"blk.38.attn_k.weight": "abc4bb189dedf7ce661e79028427623a4f91ac091c2cd60e31b58bc62b1cda71",
|
||||||
|
"blk.38.attn_norm.weight": "9f4803a7d03fd40fcb83d85f84eb1d5682ea4e5bb084f210c02850675d804c3d",
|
||||||
|
"blk.38.attn_output.weight": "77cb66007f1a41df7135d0e7f900ceb499c2f667dfc3f1a6ac01a3203bbd3ccf",
|
||||||
|
"blk.38.attn_q.weight": "d94a8b26cd375bf2bcaa76597e314aa8268ee50a479d00931e5e0e021feadb5d",
|
||||||
|
"blk.38.attn_v.weight": "660c907888bc5016dc69b7d35fe6f55c7ded697c93be0e2d332a2f17aff88758",
|
||||||
|
"blk.38.ffn_down.weight": "6f06173bae5b00ffaf88ef383619a8b9c6a8d0d5c6494695d17f6c1de1a68a13",
|
||||||
|
"blk.38.ffn_gate.weight": "89f99be149d03f116527bfcabe073c50001c874de40fb6e817f6619027f3cd05",
|
||||||
|
"blk.38.ffn_up.weight": "8d57557c8d5e2d2688b73f01dddf1ce8d5194990cda6358153320aea88aac7f8",
|
||||||
|
"blk.39.attn_k.weight": "21be09c988b46c8393e6c2ec9230f3b5136eb7607dd1953ba92d0811c2f0dd75",
|
||||||
|
"blk.39.attn_norm.weight": "ba7c1912dd1c4e2d16917201f62396fd0600e4a451137eaddff255548c209abd",
|
||||||
|
"blk.39.attn_output.weight": "acfaf4abb3fd27fd899b5563c3877f176b597d8f6cdb2f2fd3f3a0bd4da15ed6",
|
||||||
|
"blk.39.attn_q.weight": "e8adbc140d4c8f0db2a27ca584c5531d5b1e080555fe627e34d80d0814a92bed",
|
||||||
|
"blk.39.attn_v.weight": "92f96b0e1f724e73a0f90a76c145654418844c04a6d4b14c05eb5af8a62bf8dc",
|
||||||
|
"blk.39.ffn_down.weight": "4d9ee7c65fc16fe95d10c47b79ac6a525741947600a64b5fcea5d300a82c50de",
|
||||||
|
"blk.39.ffn_gate.weight": "7e18507989f39b32191133d2657c2ee3b74f42f070579204d727eb72215793d1",
|
||||||
|
"blk.39.ffn_up.weight": "22cda752269c9757ba918abede1df95bb0f83a5c772dea13c8deea3d5f2723d9",
|
||||||
|
"output_norm.weight": "2858cf0e39d32caf52b7861378ace076000241e147f10b9eb21d8a5cd149e3cb"
|
||||||
|
}
|
||||||
@@ -8,10 +8,10 @@ import (
 	"fmt"
 	"io/fs"
 	"log/slog"
+	"maps"
 	"os"
 	"slices"
+	"strings"
-
-	"golang.org/x/exp/maps"
 )

 const (
@@ -60,7 +60,25 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 		addedTokens[t.Content] = t
 	}

-	t.Merges = tt.Model.Merges
+	if len(tt.Model.Merges) == 0 {
+		// noop; merges is empty
+	} else if err := json.Unmarshal(tt.Model.Merges, &t.Merges); err == nil {
+		// noop; merges is []string
+	} else if merges, err := func() ([][]string, error) {
+		var merges [][]string
+		if err := json.Unmarshal(tt.Model.Merges, &merges); err != nil {
+			return nil, err
+		}
+
+		return merges, nil
+	}(); err == nil {
+		t.Merges = make([]string, len(merges))
+		for i := range merges {
+			t.Merges[i] = strings.Join(merges[i], " ")
+		}
+	} else {
+		return nil, fmt.Errorf("could not parse tokenizer merges. expected []string or [][]string: %w", err)
+	}

 	sha256sum := sha256.New()
 	for _, pt := range tt.PreTokenizer.PreTokenizers {
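The rewritten merges handling above accepts either the flat ["a b", ...] form or the nested [["a","b"], ...] form emitted by some newer Hugging Face tokenizers, which is why the Merges field becomes a json.RawMessage later in this diff. A minimal standalone sketch of the same two-stage decode, using only the standard library (mergesToStrings is a hypothetical helper written for illustration, not part of this change):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// mergesToStrings mirrors the fallback logic above: try []string first,
// then fall back to [][]string and join each pair with a space.
func mergesToStrings(raw json.RawMessage) ([]string, error) {
	var flat []string
	if err := json.Unmarshal(raw, &flat); err == nil {
		return flat, nil
	}
	var nested [][]string
	if err := json.Unmarshal(raw, &nested); err != nil {
		return nil, fmt.Errorf("expected []string or [][]string: %w", err)
	}
	out := make([]string, len(nested))
	for i, pair := range nested {
		out[i] = strings.Join(pair, " ")
	}
	return out, nil
}

func main() {
	for _, payload := range []string{`["a b","c d"]`, `[["a","b"],["c","d"]]`} {
		merges, err := mergesToStrings(json.RawMessage(payload))
		fmt.Println(merges, err) // both payloads print [a b c d] <nil>
	}
}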
@@ -81,6 +99,8 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 			t.Pre = "deepseek-llm"
 		case "21cde974d587f0d54dc8d56b183cc1e6239600172035c68fbd6d4b9f8da0576e":
 			t.Pre = "deepseek-coder"
+		case "1ff7f41064896984db5d1bb6ff64fa4bc29007d08c1b439e505b7392777a319e":
+			t.Pre = "qwen2"
 		case "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
 			// noop, empty pretokenizer
 		default:
@@ -89,6 +109,7 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 	}

 	if f, err := fsys.Open("tokenizer_config.json"); errors.Is(err, os.ErrNotExist) {
+		// noop
 	} else if err != nil {
 		return nil, err
 	} else {
@@ -150,6 +171,34 @@ func parseTokenizer(fsys fs.FS, specialTokenTypes []string) (*Tokenizer, error)
 		}
 	}

+	if f, err := fsys.Open("generation_config.json"); errors.Is(err, os.ErrNotExist) {
+	} else if err != nil {
+		return nil, err
+	} else {
+		defer f.Close()
+
+		var p map[string]json.RawMessage
+		if err := json.NewDecoder(f).Decode(&p); err != nil {
+			return nil, err
+		}
+
+		for _, st := range specialTokenTypes {
+			if bts, ok := p[fmt.Sprintf("%s_token_id", st)]; ok {
+				var ids []int32
+				if err := json.Unmarshal(bts, &ids); err != nil {
+					// value is not a list so the existing ID is used
+					continue
+				}
+
+				if i := slices.IndexFunc(t.SpecialVocabulary, func(sv *SpecialVocabulary) bool {
+					return sv.Type == st
+				}); i >= 0 {
+					t.SpecialVocabulary[i].IDs = ids
+				}
+			}
+		}
+	}
+
 	return t, nil
 }

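The new generation_config.json pass only overrides a special token's IDs when the corresponding *_token_id field is a JSON list; scalar values fall through to the existing tokenizer_config.json handling. A small self-contained sketch of that decode path (the payload is invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical generation_config.json with a list-valued eos_token_id.
	raw := []byte(`{"bos_token_id": 0, "eos_token_id": [1, 2, 3]}`)

	var p map[string]json.RawMessage
	if err := json.Unmarshal(raw, &p); err != nil {
		panic(err)
	}

	var ids []int32
	if err := json.Unmarshal(p["eos_token_id"], &ids); err != nil {
		// scalar value: the single ID parsed elsewhere would be kept
		fmt.Println("eos_token_id is not a list")
		return
	}
	fmt.Println("eos token ids:", ids) // eos token ids: [1 2 3]
}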
@@ -158,7 +207,7 @@ type tokenizer struct {
 	Model struct {
 		Type string `json:"type"`
 		Vocab map[string]int `json:"vocab"`
-		Merges []string `json:"merges"`
+		Merges json.RawMessage `json:"merges"`
 	} `json:"model"`

 	PreTokenizer struct {
@@ -210,11 +259,8 @@ func parseVocabularyFromTokenizer(fsys fs.FS) (*Vocabulary, error) {
 		tokens[token.ID] = token
 	}

-	keys := maps.Keys(tokens)
-	slices.Sort(keys)
-
 	v := Vocabulary{Model: "gpt2"}
-	for _, k := range keys {
+	for _, k := range slices.Sorted(maps.Keys(tokens)) {
 		token := tokens[k]
 		v.Tokens = append(v.Tokens, token.Content)
 		v.Scores = append(v.Scores, float32(token.ID))
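Together with the import change earlier in this diff, this hunk moves from golang.org/x/exp/maps to the standard library: maps.Keys now returns an iterator that slices.Sorted can consume directly, so the temporary keys slice and the explicit slices.Sort call go away. A tiny sketch of the same idiom, assuming Go 1.23 or newer:

package main

import (
	"fmt"
	"maps"
	"slices"
)

func main() {
	tokens := map[int]string{2: "<eot>", 0: "<bos>", 1: "<eos>"}
	// slices.Sorted drains the iterator returned by the stdlib maps.Keys,
	// yielding the map keys in ascending order without an intermediate slice.
	for _, id := range slices.Sorted(maps.Keys(tokens)) {
		fmt.Println(id, tokens[id])
	}
}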
@@ -259,6 +305,9 @@ type SpecialVocabulary struct {
 	ID int
 	Content string
 	AddToken bool
+
+	// IDs is populated by generation_config.json
+	IDs []int32
 }

 func (sv SpecialVocabulary) Key() string {

@@ -6,7 +6,9 @@ import (
 	"errors"
 	"fmt"
 	"io/fs"
+	"log/slog"
 	"os"
+	"reflect"
 	"slices"

 	"google.golang.org/protobuf/proto"
@@ -15,6 +17,8 @@ import (
 )

 func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
+	slog.Debug("using spm vocabulary")
+
 	ast, err := parseAdditionalSpecialTokens(fsys)
 	if err != nil {
 		return nil, err
@@ -43,10 +47,19 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 			v.Types = append(v.Types, int32(t))
 		default:
 			tt := int32(sentencepiece.ModelProto_SentencePiece_NORMAL)
-			if slices.Contains(ast, piece.GetPiece()) {
+
+			// temporary fix to handle gemma3 broken configs
+			if slices.Contains([]string{"<end_of_turn>", "<start_of_turn>"}, piece.GetPiece()) {
 				tt = int32(sentencepiece.ModelProto_SentencePiece_CONTROL)
 			}
+
+			for _, t := range ast {
+				if t.Content == piece.GetPiece() {
+					tt = int32(sentencepiece.ModelProto_SentencePiece_CONTROL)
+					break
+				}
+			}

 			v.Types = append(v.Types, tt)
 		}
 	}
@@ -78,10 +91,16 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 		return cmp.Compare(i.id, j.id)
 	})

-	n := len(v.Tokens)
-	for i, t := range ts {
-		if t.id != i+n {
-			return nil, fmt.Errorf("invalid token id: %d", t.id)
+	for _, t := range ts {
+		if t.id < len(v.Tokens) {
+			if v.Tokens[t.id] == t.content {
+				slog.Warn("tokenizer", "duplicate token", t.content, "id", t.id)
+				continue
+			}
+			return nil, fmt.Errorf("token mismatch: %s != %s at pos [%d]", t.content, v.Tokens[t.id], t.id)
+		}
+		if t.id != len(v.Tokens) {
+			return nil, fmt.Errorf("invalid token id: [%d] as pos [%d]", t.id, len(v.Tokens))
 		}

 		v.Tokens = append(v.Tokens, t.content)
@@ -92,7 +111,15 @@ func parseSentencePiece(fsys fs.FS) (*Vocabulary, error) {
 	return &v, nil
 }

-func parseAdditionalSpecialTokens(fsys fs.FS) ([]string, error) {
+type specialToken struct {
+	Content string `json:"content"`
+	Lstrip bool `json:"lstrip"`
+	Normalized bool `json:"normalized"`
+	Rstrip bool `json:"rstrip"`
+	SingleWord bool `json:"single_word"`
+}
+
+func parseAdditionalSpecialTokens(fsys fs.FS) ([]specialToken, error) {
 	f, err := fsys.Open("special_tokens_map.json")
 	if errors.Is(err, os.ErrNotExist) {
 		return nil, nil
@@ -102,12 +129,43 @@ func parseAdditionalSpecialTokens(fsys fs.FS) ([]string, error) {
 	defer f.Close()

 	var m struct {
-		AdditionalSpecialTokens []string `json:"additional_special_tokens"`
+		AdditionalSpecialTokens any `json:"additional_special_tokens"`
 	}

 	if err := json.NewDecoder(f).Decode(&m); err != nil {
 		return nil, err
 	}

-	return m.AdditionalSpecialTokens, nil
+	var ast []specialToken
+
+	switch st := m.AdditionalSpecialTokens.(type) {
+	case []string:
+		for _, s := range st {
+			ast = append(ast, specialToken{Content: s})
+		}
+	case []any:
+		for _, s := range st {
+			// marshal and unmarshal the object to get the special token
+			tMap := s.(map[string]any)
+			data, err := json.Marshal(tMap)
+			if err != nil {
+				return nil, err
+			}
+
+			var token specialToken
+			err = json.Unmarshal(data, &token)
+			if err != nil {
+				return nil, err
+			}
+
+			ast = append(ast, token)
+		}
+
+	default:
+		slog.Warn("special token", "unknown token", reflect.TypeOf(st))
+	}
+
+	slog.Debug("spm tokenizer", "additional tokens", ast)
+
+	return ast, nil
 }

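parseAdditionalSpecialTokens now tolerates both shapes of additional_special_tokens: a plain list of strings and a list of objects carrying content/lstrip/normalized/rstrip/single_word. Note that encoding/json decodes a JSON array into []any when the target is an interface value, so a standalone sketch of the same idea is easiest to write as a type switch over the elements (both payloads below are invented):

package main

import (
	"encoding/json"
	"fmt"
)

type specialToken struct {
	Content    string `json:"content"`
	Lstrip     bool   `json:"lstrip"`
	Normalized bool   `json:"normalized"`
	Rstrip     bool   `json:"rstrip"`
	SingleWord bool   `json:"single_word"`
}

// parse accepts either ["<tok>", ...] or [{"content": "<tok>", ...}, ...].
func parse(payload string) ([]specialToken, error) {
	var m struct {
		AdditionalSpecialTokens any `json:"additional_special_tokens"`
	}
	if err := json.Unmarshal([]byte(payload), &m); err != nil {
		return nil, err
	}
	var ast []specialToken
	if st, ok := m.AdditionalSpecialTokens.([]any); ok {
		for _, s := range st {
			switch v := s.(type) {
			case string: // bare string entry
				ast = append(ast, specialToken{Content: v})
			case map[string]any: // object entry: round-trip through JSON into the struct
				data, err := json.Marshal(v)
				if err != nil {
					return nil, err
				}
				var token specialToken
				if err := json.Unmarshal(data, &token); err != nil {
					return nil, err
				}
				ast = append(ast, token)
			}
		}
	}
	return ast, nil
}

func main() {
	a, _ := parse(`{"additional_special_tokens": ["<start_of_turn>", "<end_of_turn>"]}`)
	b, _ := parse(`{"additional_special_tokens": [{"content": "<end_of_turn>", "lstrip": false, "normalized": false}]}`)
	fmt.Println(a)
	fmt.Println(b)
}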
@@ -191,6 +191,123 @@ func TestParseTokenizer(t *testing.T) {
 				Pre: "default",
 			},
 		},
+		{
+			name: "list string merges",
+			fsys: createTokenizerFS(t, t.TempDir(), map[string]io.Reader{
+				"tokenizer.json": strings.NewReader(`{
+					"model": {
+						"merges": [
+							"a b",
+							"c d",
+							"e f"
+						]
+					}
+				}`),
+			}),
+			want: &Tokenizer{
+				Vocabulary: &Vocabulary{
+					Model: "gpt2",
+				},
+				Merges: []string{
+					"a b",
+					"c d",
+					"e f",
+				},
+				Pre: "default",
+			},
+		},
+		{
+			name: "list list string merges",
+			fsys: createTokenizerFS(t, t.TempDir(), map[string]io.Reader{
+				"tokenizer.json": strings.NewReader(`{
+					"model": {
+						"merges": [
+							[
+								"a", "b"
+							],
+							[
+								"c", "d"
+							],
+							[
+								"e", "f"
+							]
+						]
+					}
+				}`),
+			}),
+			want: &Tokenizer{
+				Vocabulary: &Vocabulary{
+					Model: "gpt2",
+				},
+				Merges: []string{
+					"a b",
+					"c d",
+					"e f",
+				},
+				Pre: "default",
+			},
+		},
+		{
+			name: "generation config eos token ids",
+			fsys: createTokenizerFS(t, t.TempDir(), map[string]io.Reader{
+				"tokenizer.json": strings.NewReader(`{
+					"added_tokens": [
+						{
+							"id": 0,
+							"content": "<bos>",
+							"special": true
+						},
+						{
+							"id": 1,
+							"content": "<eos>",
+							"special": true
+						},
+						{
+							"id": 2,
+							"content": "<eot>",
+							"special": true
+						},
+						{
+							"id": 3,
+							"content": "<eom>",
+							"special": true
+						}
+					],
+					"model": {
+						"vocab": {
+							"<bos>": 0,
+							"<eos>": 1,
+							"<eot>": 2,
+							"<eom>": 3
+						}
+					}
+				}`),
+				"tokenizer_config.json": strings.NewReader(`{
+					"add_bos_token": true,
+					"add_eos_token": false,
+					"bos_token": "<bos>",
+					"eos_token": "<eos>"
+				}`),
+				"generation_config.json": strings.NewReader(`{
+					"bos_token_id": 0,
+					"eos_token_id": [1, 2, 3]
+				}`),
+			}),
+			specialTokenTypes: []string{"pad", "eos", "bos", "unk"},
+			want: &Tokenizer{
+				Vocabulary: &Vocabulary{
+					Model:  "gpt2",
+					Tokens: []string{"<bos>", "<eos>", "<eot>", "<eom>"},
+					Scores: []float32{0, 1, 2, 3},
+					Types:  []int32{3, 3, 3, 3},
+				},
+				SpecialVocabulary: []*SpecialVocabulary{
+					{Type: "eos", Content: "<eos>", ID: 1, IDs: []int32{1, 2, 3}, AddToken: false},
+					{Type: "bos", Content: "<bos>", ID: 0, AddToken: true},
+				},
+				Pre: "default",
+			},
+		},
 	}

 	for _, tt := range cases {
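With these cases in place, the merges and generation_config behaviour can be exercised on its own using the standard Go tooling, assuming the usual package layout of the repository:

	go test ./convert -run TestParseTokenizer -v
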
172 discover/cpu_linux.go (new file)
@@ -0,0 +1,172 @@
package discover

import (
	"bufio"
	"fmt"
	"io"
	"log/slog"
	"os"
	"path/filepath"
	"reflect"
	"regexp"
	"sort"
	"strings"

	"github.com/ollama/ollama/format"
)

func GetCPUMem() (memInfo, error) {
	var mem memInfo
	var total, available, free, buffers, cached, freeSwap uint64
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return mem, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		switch {
		case strings.HasPrefix(line, "MemTotal:"):
			_, err = fmt.Sscanf(line, "MemTotal:%d", &total)
		case strings.HasPrefix(line, "MemAvailable:"):
			_, err = fmt.Sscanf(line, "MemAvailable:%d", &available)
		case strings.HasPrefix(line, "MemFree:"):
			_, err = fmt.Sscanf(line, "MemFree:%d", &free)
		case strings.HasPrefix(line, "Buffers:"):
			_, err = fmt.Sscanf(line, "Buffers:%d", &buffers)
		case strings.HasPrefix(line, "Cached:"):
			_, err = fmt.Sscanf(line, "Cached:%d", &cached)
		case strings.HasPrefix(line, "SwapFree:"):
			_, err = fmt.Sscanf(line, "SwapFree:%d", &freeSwap)
		default:
			continue
		}
		if err != nil {
			return mem, err
		}
	}
	mem.TotalMemory = total * format.KibiByte
	mem.FreeSwap = freeSwap * format.KibiByte
	if available > 0 {
		mem.FreeMemory = available * format.KibiByte
	} else {
		mem.FreeMemory = (free + buffers + cached) * format.KibiByte
	}
	return mem, nil
}

const CpuInfoFilename = "/proc/cpuinfo"

type linuxCpuInfo struct {
	ID         string `cpuinfo:"processor"`
	VendorID   string `cpuinfo:"vendor_id"`
	ModelName  string `cpuinfo:"model name"`
	PhysicalID string `cpuinfo:"physical id"`
	Siblings   string `cpuinfo:"siblings"`
	CoreID     string `cpuinfo:"core id"`
}

func GetCPUDetails() []CPU {
	file, err := os.Open(CpuInfoFilename)
	if err != nil {
		slog.Warn("failed to get CPU details", "error", err)
		return nil
	}
	defer file.Close()
	return linuxCPUDetails(file)
}

func linuxCPUDetails(file io.Reader) []CPU {
	reColumns := regexp.MustCompile("\t+: ")
	scanner := bufio.NewScanner(file)
	cpuInfos := []linuxCpuInfo{}
	cpu := &linuxCpuInfo{}
	for scanner.Scan() {
		line := scanner.Text()
		if sl := reColumns.Split(line, 2); len(sl) > 1 {
			t := reflect.TypeOf(cpu).Elem()
			s := reflect.ValueOf(cpu).Elem()
			for i := range t.NumField() {
				field := t.Field(i)
				tag := field.Tag.Get("cpuinfo")
				if tag == sl[0] {
					s.FieldByName(field.Name).SetString(sl[1])
					break
				}
			}
		} else if strings.TrimSpace(line) == "" && cpu.ID != "" {
			cpuInfos = append(cpuInfos, *cpu)
			cpu = &linuxCpuInfo{}
		}
	}
	if cpu.ID != "" {
		cpuInfos = append(cpuInfos, *cpu)
	}

	// Process the sockets/cores/threads
	socketByID := map[string]*CPU{}
	coreBySocket := map[string]map[string]struct{}{}
	threadsByCoreBySocket := map[string]map[string]int{}
	for _, c := range cpuInfos {
		if _, found := socketByID[c.PhysicalID]; !found {
			socketByID[c.PhysicalID] = &CPU{
				ID:        c.PhysicalID,
				VendorID:  c.VendorID,
				ModelName: c.ModelName,
			}
			coreBySocket[c.PhysicalID] = map[string]struct{}{}
			threadsByCoreBySocket[c.PhysicalID] = map[string]int{}
		}
		if c.CoreID != "" {
			coreBySocket[c.PhysicalID][c.PhysicalID+":"+c.CoreID] = struct{}{}
			threadsByCoreBySocket[c.PhysicalID][c.PhysicalID+":"+c.CoreID]++
		} else {
			coreBySocket[c.PhysicalID][c.PhysicalID+":"+c.ID] = struct{}{}
			threadsByCoreBySocket[c.PhysicalID][c.PhysicalID+":"+c.ID]++
		}
	}

	// Tally up the values from the tracking maps
	for id, s := range socketByID {
		s.CoreCount = len(coreBySocket[id])
		s.ThreadCount = 0

		// This only works if HT is enabled, consider a more reliable model, maybe cache size comparisons?
		efficiencyCoreCount := 0
		for _, threads := range threadsByCoreBySocket[id] {
			s.ThreadCount += threads
			if threads == 1 {
				efficiencyCoreCount++
			}
		}
		if efficiencyCoreCount == s.CoreCount {
			// 1:1 mapping means they're not actually efficiency cores, but regular cores
			s.EfficiencyCoreCount = 0
		} else {
			s.EfficiencyCoreCount = efficiencyCoreCount
		}
	}
	keys := make([]string, 0, len(socketByID))
	result := make([]CPU, 0, len(socketByID))
	for k := range socketByID {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		result = append(result, *socketByID[k])
	}
	return result
}

func IsNUMA() bool {
	ids := map[string]any{}
	packageIds, _ := filepath.Glob("/sys/devices/system/cpu/cpu*/topology/physical_package_id")
	for _, packageId := range packageIds {
		id, err := os.ReadFile(packageId)
		if err == nil {
			ids[strings.TrimSpace(string(id))] = struct{}{}
		}
	}
	return len(ids) > 1
}
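A hedged usage sketch for the two entry points above, assumed to sit in the same discover package since linuxCPUDetails is unexported; the /proc/cpuinfo fragment is invented and describes one hyperthreaded core on one socket, so the tally should report CoreCount 1 and ThreadCount 2:

package discover

import (
	"fmt"
	"strings"
)

func exampleCPUDiscovery() {
	// GetCPUMem reads /proc/meminfo; all values are reported in bytes.
	if mem, err := GetCPUMem(); err == nil {
		fmt.Println("total:", mem.TotalMemory, "free:", mem.FreeMemory, "free swap:", mem.FreeSwap)
	}

	// linuxCPUDetails takes an io.Reader, so the socket/core/thread logic
	// can be checked against a synthetic cpuinfo without touching the real file.
	cpuinfo := "processor\t: 0\n" +
		"vendor_id\t: GenuineIntel\n" +
		"model name\t: Example CPU\n" +
		"physical id\t: 0\n" +
		"siblings\t: 2\n" +
		"core id\t: 0\n" +
		"\n" +
		"processor\t: 1\n" +
		"vendor_id\t: GenuineIntel\n" +
		"model name\t: Example CPU\n" +
		"physical id\t: 0\n" +
		"siblings\t: 2\n" +
		"core id\t: 0\n" +
		"\n"
	for _, cpu := range linuxCPUDetails(strings.NewReader(cpuinfo)) {
		fmt.Printf("socket %s: %d cores, %d threads\n", cpu.ID, cpu.CoreCount, cpu.ThreadCount)
	}
}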
2094 discover/cpu_linux_test.go (new file; diff suppressed because it is too large)

212 discover/cpu_windows.go (new file)
@@ -0,0 +1,212 @@
package discover

import (
	"fmt"
	"log/slog"
	"syscall"
	"unsafe"
)

type MEMORYSTATUSEX struct {
	length               uint32
	MemoryLoad           uint32
	TotalPhys            uint64
	AvailPhys            uint64
	TotalPageFile        uint64
	AvailPageFile        uint64
	TotalVirtual         uint64
	AvailVirtual         uint64
	AvailExtendedVirtual uint64
}

var (
	k32                              = syscall.NewLazyDLL("kernel32.dll")
	globalMemoryStatusExProc         = k32.NewProc("GlobalMemoryStatusEx")
	sizeofMemoryStatusEx             = uint32(unsafe.Sizeof(MEMORYSTATUSEX{}))
	GetLogicalProcessorInformationEx = k32.NewProc("GetLogicalProcessorInformationEx")
)

func GetCPUMem() (memInfo, error) {
	memStatus := MEMORYSTATUSEX{length: sizeofMemoryStatusEx}
	r1, _, err := globalMemoryStatusExProc.Call(uintptr(unsafe.Pointer(&memStatus)))
	if r1 == 0 {
		return memInfo{}, fmt.Errorf("GlobalMemoryStatusEx failed: %w", err)
	}
	return memInfo{TotalMemory: memStatus.TotalPhys, FreeMemory: memStatus.AvailPhys, FreeSwap: memStatus.AvailPageFile}, nil
}

type LOGICAL_PROCESSOR_RELATIONSHIP uint32

const (
	RelationProcessorCore LOGICAL_PROCESSOR_RELATIONSHIP = iota
	RelationNumaNode
	RelationCache
	RelationProcessorPackage
	RelationGroup
	RelationProcessorDie
	RelationNumaNodeEx
	RelationProcessorModule
)

const RelationAll LOGICAL_PROCESSOR_RELATIONSHIP = 0xffff

type GROUP_AFFINITY struct {
	Mask     uintptr // KAFFINITY
	Group    uint16
	Reserved [3]uint16
}

type PROCESSOR_RELATIONSHIP struct {
	Flags           byte
	EfficiencyClass byte
	Reserved        [20]byte
	GroupCount      uint16
	GroupMask       [1]GROUP_AFFINITY // len GroupCount
}

// Omitted unused structs: NUMA_NODE_RELATIONSHIP CACHE_RELATIONSHIP GROUP_RELATIONSHIP

type SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX struct {
	Relationship LOGICAL_PROCESSOR_RELATIONSHIP
	Size         uint32
	U            [1]byte // Union len Size
	// PROCESSOR_RELATIONSHIP
	// NUMA_NODE_RELATIONSHIP
	// CACHE_RELATIONSHIP
	// GROUP_RELATIONSHIP
}

func (group *GROUP_AFFINITY) IsMember(target *GROUP_AFFINITY) bool {
	if group == nil || target == nil {
		return false
	}
	return group.Mask&target.Mask != 0
}

type winPackage struct {
	groups              []*GROUP_AFFINITY
	coreCount           int // performance cores = coreCount - efficiencyCoreCount
	efficiencyCoreCount int
	threadCount         int
}

func (pkg *winPackage) IsMember(target *GROUP_AFFINITY) bool {
	for _, group := range pkg.groups {
		if group.IsMember(target) {
			return true
		}
	}
	return false
}

func getLogicalProcessorInformationEx() ([]byte, error) {
	buf := make([]byte, 1024)
	bufSize := len(buf)
	var err error
	for range 3 {
		var ret uintptr
		ret, _, err = GetLogicalProcessorInformationEx.Call(
			uintptr(RelationAll),
			uintptr(unsafe.Pointer(&buf[0])),
			uintptr(unsafe.Pointer(&bufSize)),
		)
		if ret == 1 && bufSize <= len(buf) {
			return buf, nil
		}
		buf = make([]byte, bufSize)
	}
	return nil, fmt.Errorf("unable to determine CPU details: %w", err)
}

func processSystemLogicalProcessorInforationList(buf []byte) []*winPackage {
	var slpi *SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX
	// Find all the packages first
	packages := []*winPackage{}
	for bufOffset := 0; bufOffset < len(buf); bufOffset += int(slpi.Size) {
		slpi = (*SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)(unsafe.Pointer(&buf[bufOffset]))
		if slpi.Relationship != RelationProcessorPackage {
			continue
		}
		pr := (*PROCESSOR_RELATIONSHIP)(unsafe.Pointer(&slpi.U[0]))
		pkg := &winPackage{}
		ga0 := unsafe.Pointer(&pr.GroupMask[0])
		for j := range pr.GroupCount {
			gm := (*GROUP_AFFINITY)(unsafe.Pointer(uintptr(ga0) + uintptr(j)*unsafe.Sizeof(GROUP_AFFINITY{})))
			pkg.groups = append(pkg.groups, gm)
		}
		packages = append(packages, pkg)
	}

	slog.Info("packages", "count", len(packages))

	// To identify efficiency cores we have to compare the relative values
	// Larger values are "less efficient" (aka, more performant)
	var maxEfficiencyClass byte
	for bufOffset := 0; bufOffset < len(buf); bufOffset += int(slpi.Size) {
		slpi = (*SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)(unsafe.Pointer(&buf[bufOffset]))
		if slpi.Relationship != RelationProcessorCore {
			continue
		}
		pr := (*PROCESSOR_RELATIONSHIP)(unsafe.Pointer(&slpi.U[0]))
		if pr.EfficiencyClass > maxEfficiencyClass {
			maxEfficiencyClass = pr.EfficiencyClass
		}
	}
	if maxEfficiencyClass > 0 {
		slog.Info("efficiency cores detected", "maxEfficiencyClass", maxEfficiencyClass)
	}

	// then match up the Cores to the Packages, count up cores, threads and efficiency cores
	for bufOffset := 0; bufOffset < len(buf); bufOffset += int(slpi.Size) {
		slpi = (*SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX)(unsafe.Pointer(&buf[bufOffset]))
		if slpi.Relationship != RelationProcessorCore {
			continue
		}
		pr := (*PROCESSOR_RELATIONSHIP)(unsafe.Pointer(&slpi.U[0]))
		ga0 := unsafe.Pointer(&pr.GroupMask[0])
		for j := range pr.GroupCount {
			gm := (*GROUP_AFFINITY)(unsafe.Pointer(uintptr(ga0) + uintptr(j)*unsafe.Sizeof(GROUP_AFFINITY{})))
			for _, pkg := range packages {
				if pkg.IsMember(gm) {
					pkg.coreCount++
					if pr.Flags == 0 {
						pkg.threadCount++
					} else {
						pkg.threadCount += 2
					}
					if pr.EfficiencyClass < maxEfficiencyClass {
						pkg.efficiencyCoreCount++
					}
				}
			}
		}
	}

	// Summarize the results
	for i, pkg := range packages {
		slog.Info("", "package", i, "cores", pkg.coreCount, "efficiency", pkg.efficiencyCoreCount, "threads", pkg.threadCount)
	}

	return packages
}

func GetCPUDetails() []CPU {
	buf, err := getLogicalProcessorInformationEx()
	if err != nil {
		slog.Warn("failed to get CPU details", "error", err)
		return nil
	}
	packages := processSystemLogicalProcessorInforationList(buf)
	cpus := make([]CPU, len(packages))

	for i, pkg := range packages {
|
||||||
|
cpus[i].CoreCount = pkg.coreCount
|
||||||
|
cpus[i].EfficiencyCoreCount = pkg.efficiencyCoreCount
|
||||||
|
cpus[i].ThreadCount = pkg.threadCount
|
||||||
|
}
|
||||||
|
return cpus
|
||||||
|
}
|
||||||
|
|
||||||
|
func IsNUMA() bool {
|
||||||
|
// numa support in ggml is linux only
|
||||||
|
return false
|
||||||
|
}
|
||||||
77
discover/cpu_windows_test.go
Normal file
77
discover/cpu_windows_test.go
Normal file
File diff suppressed because one or more lines are too long
207
discover/gpu.go
Normal file
207
discover/gpu.go
Normal file
@@ -0,0 +1,207 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"log/slog"
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"regexp"
|
||||||
|
"runtime"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/format"
|
||||||
|
"github.com/ollama/ollama/ml"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Jetson devices have JETSON_JETPACK="x.y.z" factory set to the Jetpack version installed.
|
||||||
|
// Included to drive logic for reducing Ollama-allocated overhead on L4T/Jetson devices.
|
||||||
|
var CudaTegra string = os.Getenv("JETSON_JETPACK")
|
||||||
|
|
||||||
|
func GetCPUInfo() GpuInfo {
|
||||||
|
mem, err := GetCPUMem()
|
||||||
|
if err != nil {
|
||||||
|
slog.Warn("error looking up system memory", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return GpuInfo{
|
||||||
|
memInfo: mem,
|
||||||
|
DeviceID: ml.DeviceID{
|
||||||
|
Library: "cpu",
|
||||||
|
ID: "0",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func GetGPUInfo(ctx context.Context, runners []FilteredRunnerDiscovery) GpuInfoList {
|
||||||
|
devs := GPUDevices(ctx, runners)
|
||||||
|
return devInfoToInfoList(devs)
|
||||||
|
}
|
||||||
|
|
||||||
|
func devInfoToInfoList(devs []ml.DeviceInfo) GpuInfoList {
|
||||||
|
resp := []GpuInfo{}
|
||||||
|
// Our current packaging model places ggml-hip in the main directory
|
||||||
|
// but keeps rocm in an isolated directory. We have to add it to
|
||||||
|
// the [LD_LIBRARY_]PATH so ggml-hip will load properly
|
||||||
|
rocmDir := filepath.Join(LibOllamaPath, "rocm")
|
||||||
|
if _, err := os.Stat(rocmDir); err != nil {
|
||||||
|
rocmDir = ""
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, dev := range devs {
|
||||||
|
info := GpuInfo{
|
||||||
|
DeviceID: dev.DeviceID,
|
||||||
|
filterID: dev.FilteredID,
|
||||||
|
Name: dev.Description,
|
||||||
|
memInfo: memInfo{
|
||||||
|
TotalMemory: dev.TotalMemory,
|
||||||
|
FreeMemory: dev.FreeMemory,
|
||||||
|
},
|
||||||
|
// TODO can we avoid variant
|
||||||
|
DependencyPath: dev.LibraryPath,
|
||||||
|
DriverMajor: dev.DriverMajor,
|
||||||
|
DriverMinor: dev.DriverMinor,
|
||||||
|
ComputeMajor: dev.ComputeMajor,
|
||||||
|
ComputeMinor: dev.ComputeMinor,
|
||||||
|
}
|
||||||
|
if dev.Library == "CUDA" || dev.Library == "ROCm" {
|
||||||
|
info.MinimumMemory = 457 * format.MebiByte
|
||||||
|
}
|
||||||
|
if dev.Library == "ROCm" && rocmDir != "" {
|
||||||
|
info.DependencyPath = append(info.DependencyPath, rocmDir)
|
||||||
|
}
|
||||||
|
// TODO any special processing of Vulkan devices?
|
||||||
|
resp = append(resp, info)
|
||||||
|
}
|
||||||
|
if len(resp) == 0 {
|
||||||
|
mem, err := GetCPUMem()
|
||||||
|
if err != nil {
|
||||||
|
slog.Warn("error looking up system memory", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
resp = append(resp, GpuInfo{
|
||||||
|
memInfo: mem,
|
||||||
|
DeviceID: ml.DeviceID{
|
||||||
|
Library: "cpu",
|
||||||
|
ID: "0",
|
||||||
|
},
|
||||||
|
})
|
||||||
|
}
|
||||||
|
return resp
|
||||||
|
}
|
||||||
|
|
||||||
|
// Given the list of GPUs this instantiation is targeted for,
|
||||||
|
// figure out the visible devices environment variable
|
||||||
|
//
|
||||||
|
// If different libraries are detected, the first one is what we use
|
||||||
|
func (l GpuInfoList) GetVisibleDevicesEnv() []string {
|
||||||
|
if len(l) == 0 {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
res := []string{}
|
||||||
|
envVar := rocmGetVisibleDevicesEnv(l)
|
||||||
|
if envVar != "" {
|
||||||
|
res = append(res, envVar)
|
||||||
|
}
|
||||||
|
envVar = vkGetVisibleDevicesEnv(l)
|
||||||
|
if envVar != "" {
|
||||||
|
res = append(res, envVar)
|
||||||
|
}
|
||||||
|
return res
|
||||||
|
}
|
||||||
|
|
||||||
|
func rocmGetVisibleDevicesEnv(gpuInfo []GpuInfo) string {
|
||||||
|
ids := []string{}
|
||||||
|
for _, info := range gpuInfo {
|
||||||
|
if info.Library != "ROCm" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
// If the devices requires a numeric ID, for filtering purposes, we use the unfiltered ID number
|
||||||
|
if info.filterID != "" {
|
||||||
|
ids = append(ids, info.filterID)
|
||||||
|
} else {
|
||||||
|
ids = append(ids, info.ID)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(ids) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
envVar := "ROCR_VISIBLE_DEVICES="
|
||||||
|
if runtime.GOOS != "linux" {
|
||||||
|
envVar = "HIP_VISIBLE_DEVICES="
|
||||||
|
}
|
||||||
|
// There are 3 potential env vars to use to select GPUs.
|
||||||
|
// ROCR_VISIBLE_DEVICES supports UUID or numeric but does not work on Windows
|
||||||
|
// HIP_VISIBLE_DEVICES supports numeric IDs only
|
||||||
|
// GPU_DEVICE_ORDINAL supports numeric IDs only
|
||||||
|
return envVar + strings.Join(ids, ",")
|
||||||
|
}
|
||||||
|
|
||||||
|
func vkGetVisibleDevicesEnv(gpuInfo []GpuInfo) string {
|
||||||
|
ids := []string{}
|
||||||
|
for _, info := range gpuInfo {
|
||||||
|
if info.Library != "Vulkan" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if info.filterID != "" {
|
||||||
|
ids = append(ids, info.filterID)
|
||||||
|
} else {
|
||||||
|
ids = append(ids, info.ID)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(ids) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
envVar := "GGML_VK_VISIBLE_DEVICES="
|
||||||
|
return envVar + strings.Join(ids, ",")
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetSystemInfo returns the last cached state of the GPUs on the system
|
||||||
|
func GetSystemInfo() SystemInfo {
|
||||||
|
deviceMu.Lock()
|
||||||
|
defer deviceMu.Unlock()
|
||||||
|
gpus := devInfoToInfoList(devices)
|
||||||
|
if len(gpus) == 1 && gpus[0].Library == "cpu" {
|
||||||
|
gpus = []GpuInfo{}
|
||||||
|
}
|
||||||
|
|
||||||
|
return SystemInfo{
|
||||||
|
System: CPUInfo{
|
||||||
|
CPUs: GetCPUDetails(),
|
||||||
|
GpuInfo: GetCPUInfo(),
|
||||||
|
},
|
||||||
|
GPUs: gpus,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func cudaJetpack() string {
|
||||||
|
if runtime.GOARCH == "arm64" && runtime.GOOS == "linux" {
|
||||||
|
if CudaTegra != "" {
|
||||||
|
ver := strings.Split(CudaTegra, ".")
|
||||||
|
if len(ver) > 0 {
|
||||||
|
return "jetpack" + ver[0]
|
||||||
|
}
|
||||||
|
} else if data, err := os.ReadFile("/etc/nv_tegra_release"); err == nil {
|
||||||
|
r := regexp.MustCompile(` R(\d+) `)
|
||||||
|
m := r.FindSubmatch(data)
|
||||||
|
if len(m) != 2 {
|
||||||
|
slog.Info("Unexpected format for /etc/nv_tegra_release. Set JETSON_JETPACK to select version")
|
||||||
|
} else {
|
||||||
|
if l4t, err := strconv.Atoi(string(m[1])); err == nil {
|
||||||
|
// Note: mapping from L4t -> JP is inconsistent (can't just subtract 30)
|
||||||
|
// https://developer.nvidia.com/embedded/jetpack-archive
|
||||||
|
switch l4t {
|
||||||
|
case 35:
|
||||||
|
return "jetpack5"
|
||||||
|
case 36:
|
||||||
|
return "jetpack6"
|
||||||
|
default:
|
||||||
|
// Newer Jetson systems use the SBSU runtime
|
||||||
|
slog.Debug("unrecognized L4T version", "nv_tegra_release", string(data))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
54
discover/gpu_darwin.go
Normal file
54
discover/gpu_darwin.go
Normal file
@@ -0,0 +1,54 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
/*
|
||||||
|
#cgo CFLAGS: -x objective-c
|
||||||
|
#cgo LDFLAGS: -framework Foundation -framework CoreGraphics -framework Metal
|
||||||
|
#include "gpu_info_darwin.h"
|
||||||
|
*/
|
||||||
|
import "C"
|
||||||
|
|
||||||
|
import (
|
||||||
|
"log/slog"
|
||||||
|
"syscall"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/format"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
metalMinimumMemory = 512 * format.MebiByte
|
||||||
|
)
|
||||||
|
|
||||||
|
func GetCPUMem() (memInfo, error) {
|
||||||
|
return memInfo{
|
||||||
|
TotalMemory: uint64(C.getPhysicalMemory()),
|
||||||
|
FreeMemory: uint64(C.getFreeMemory()),
|
||||||
|
// FreeSwap omitted as Darwin uses dynamic paging
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func GetCPUDetails() []CPU {
|
||||||
|
query := "hw.perflevel0.physicalcpu"
|
||||||
|
perfCores, err := syscall.SysctlUint32(query)
|
||||||
|
if err != nil {
|
||||||
|
slog.Warn("failed to discover physical CPU details", "query", query, "error", err)
|
||||||
|
}
|
||||||
|
query = "hw.perflevel1.physicalcpu"
|
||||||
|
efficiencyCores, _ := syscall.SysctlUint32(query) // On x86 xeon this wont return data
|
||||||
|
|
||||||
|
// Determine thread count
|
||||||
|
query = "hw.logicalcpu"
|
||||||
|
logicalCores, _ := syscall.SysctlUint32(query)
|
||||||
|
|
||||||
|
return []CPU{
|
||||||
|
{
|
||||||
|
CoreCount: int(perfCores + efficiencyCores),
|
||||||
|
EfficiencyCoreCount: int(efficiencyCores),
|
||||||
|
ThreadCount: int(logicalCores),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func IsNUMA() bool {
|
||||||
|
// numa support in ggml is linux only
|
||||||
|
return false
|
||||||
|
}
|
||||||
56
discover/path.go
Normal file
56
discover/path.go
Normal file
@@ -0,0 +1,56 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
import (
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
)
|
||||||
|
|
||||||
|
// LibPath is a path to lookup dynamic libraries
|
||||||
|
// in development it's usually 'build/lib/ollama'
|
||||||
|
// in distribution builds it's 'lib/ollama' on Windows
|
||||||
|
// '../lib/ollama' on Linux and the executable's directory on macOS
|
||||||
|
// note: distribution builds, additional GPU-specific libraries are
|
||||||
|
// found in subdirectories of the returned path, such as
|
||||||
|
// 'cuda_v12', 'rocm', etc.
|
||||||
|
var LibOllamaPath string = func() string {
|
||||||
|
exe, err := os.Executable()
|
||||||
|
if err != nil {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
if eval, err := filepath.EvalSymlinks(exe); err == nil {
|
||||||
|
exe = eval
|
||||||
|
}
|
||||||
|
|
||||||
|
var libPath string
|
||||||
|
switch runtime.GOOS {
|
||||||
|
case "windows":
|
||||||
|
libPath = filepath.Join(filepath.Dir(exe), "lib", "ollama")
|
||||||
|
case "linux":
|
||||||
|
libPath = filepath.Join(filepath.Dir(exe), "..", "lib", "ollama")
|
||||||
|
case "darwin":
|
||||||
|
libPath = filepath.Dir(exe)
|
||||||
|
}
|
||||||
|
|
||||||
|
cwd, err := os.Getwd()
|
||||||
|
if err != nil {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
paths := []string{
|
||||||
|
libPath,
|
||||||
|
|
||||||
|
// build paths for development
|
||||||
|
filepath.Join(filepath.Dir(exe), "build", "lib", "ollama"),
|
||||||
|
filepath.Join(cwd, "build", "lib", "ollama"),
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, p := range paths {
|
||||||
|
if _, err := os.Stat(p); err == nil {
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return filepath.Dir(exe)
|
||||||
|
}()
|
||||||
600
discover/runner.go
Normal file
600
discover/runner.go
Normal file
@@ -0,0 +1,600 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
// Runner based GPU discovery
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"encoding/json"
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"log/slog"
|
||||||
|
"math/rand"
|
||||||
|
"net"
|
||||||
|
"net/http"
|
||||||
|
"os"
|
||||||
|
"os/exec"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
"sort"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"sync"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/envconfig"
|
||||||
|
"github.com/ollama/ollama/format"
|
||||||
|
"github.com/ollama/ollama/logutil"
|
||||||
|
"github.com/ollama/ollama/ml"
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
deviceMu sync.Mutex
|
||||||
|
devices []ml.DeviceInfo
|
||||||
|
libDirs map[string]struct{}
|
||||||
|
rocmDir string
|
||||||
|
exe string
|
||||||
|
bootstrapped bool
|
||||||
|
)
|
||||||
|
|
||||||
|
func GPUDevices(ctx context.Context, runners []FilteredRunnerDiscovery) []ml.DeviceInfo {
|
||||||
|
deviceMu.Lock()
|
||||||
|
defer deviceMu.Unlock()
|
||||||
|
startDiscovery := time.Now()
|
||||||
|
msg := "overall device VRAM discovery took"
|
||||||
|
defer func() {
|
||||||
|
slog.Debug(msg, "duration", time.Since(startDiscovery))
|
||||||
|
}()
|
||||||
|
|
||||||
|
if !bootstrapped {
|
||||||
|
msg = "GPU bootstrap discovery took"
|
||||||
|
libDirs = make(map[string]struct{})
|
||||||
|
var err error
|
||||||
|
exe, err = os.Executable()
|
||||||
|
if err != nil {
|
||||||
|
slog.Error("unable to lookup executable path", "error", err)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
if eval, err := filepath.EvalSymlinks(exe); err == nil {
|
||||||
|
exe = eval
|
||||||
|
}
|
||||||
|
files, err := filepath.Glob(filepath.Join(LibOllamaPath, "*", "*ggml-*"))
|
||||||
|
if err != nil {
|
||||||
|
slog.Debug("unable to lookup runner library directories", "error", err)
|
||||||
|
}
|
||||||
|
for _, file := range files {
|
||||||
|
libDirs[filepath.Dir(file)] = struct{}{}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Our current packaging model places ggml-hip in the main directory
|
||||||
|
// but keeps rocm in an isolated directory. We have to add it to
|
||||||
|
// the [LD_LIBRARY_]PATH so ggml-hip will load properly
|
||||||
|
rocmDir = filepath.Join(LibOllamaPath, "rocm")
|
||||||
|
if _, err := os.Stat(rocmDir); err != nil {
|
||||||
|
rocmDir = ""
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(libDirs) == 0 {
|
||||||
|
libDirs[""] = struct{}{}
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("discovering available GPUs...")
|
||||||
|
requested := envconfig.LLMLibrary()
|
||||||
|
jetpack := cudaJetpack()
|
||||||
|
|
||||||
|
// For our initial discovery pass, we gather all the known GPUs through
|
||||||
|
// all the libraries that were detected. This pass may include GPUs that
|
||||||
|
// are enumerated, but not actually supported.
|
||||||
|
// We run this in serial to avoid potentially initializing a GPU multiple
|
||||||
|
// times concurrently leading to memory contention
|
||||||
|
// TODO refactor so we group the lib dirs and do serial per version, but parallel for different libs
|
||||||
|
for dir := range libDirs {
|
||||||
|
var dirs []string
|
||||||
|
if dir != "" {
|
||||||
|
if requested != "" && filepath.Base(dir) != requested {
|
||||||
|
slog.Debug("skipping available library at users request", "requested", requested, "libDir", dir)
|
||||||
|
continue
|
||||||
|
} else if jetpack != "" && filepath.Base(dir) != "cuda_"+jetpack {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if dir == "" {
|
||||||
|
dirs = []string{LibOllamaPath}
|
||||||
|
} else {
|
||||||
|
dirs = []string{LibOllamaPath, dir}
|
||||||
|
}
|
||||||
|
// Typically bootstrapping takes < 1s, but on some systems, with devices
|
||||||
|
// in low power/idle mode, initialization can take multiple seconds. We
|
||||||
|
// set a long timeout just for bootstrap discovery to reduce the chance
|
||||||
|
// of giving up too quickly
|
||||||
|
ctx1stPass, cancel := context.WithTimeout(ctx, 30*time.Second)
|
||||||
|
defer cancel()
|
||||||
|
|
||||||
|
// For this pass, we retain duplicates in case any are incompatible with some libraries
|
||||||
|
devices = append(devices, bootstrapDevices(ctx1stPass, dirs, nil)...)
|
||||||
|
}
|
||||||
|
|
||||||
|
// In the second pass, we more deeply initialize the GPUs to weed out devices that
|
||||||
|
// aren't supported by a given library. We run this phase in parallel to speed up discovery.
|
||||||
|
slog.Debug("filtering out unsupported or overlapping GPU library combinations", "count", len(devices))
|
||||||
|
ctx2ndPass, cancel := context.WithTimeout(ctx, 30*time.Second)
|
||||||
|
defer cancel()
|
||||||
|
var wg sync.WaitGroup
|
||||||
|
needsDelete := make([]bool, len(devices))
|
||||||
|
supportedMu := sync.Mutex{}
|
||||||
|
supported := make(map[string]map[string]map[string]int) // [Library][libDir][ID] = pre-deletion devices index
|
||||||
|
for i := range devices {
|
||||||
|
libDir := devices[i].LibraryPath[len(devices[i].LibraryPath)-1]
|
||||||
|
if devices[i].Library == "Metal" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
slog.Debug("verifying GPU is supported", "library", libDir, "description", devices[i].Description, "compute", devices[i].Compute(), "pci_id", devices[i].PCIID)
|
||||||
|
wg.Add(1)
|
||||||
|
go func(i int) {
|
||||||
|
defer wg.Done()
|
||||||
|
var envVar string
|
||||||
|
id := devices[i].ID
|
||||||
|
if devices[i].Library == "ROCm" {
|
||||||
|
if runtime.GOOS != "linux" {
|
||||||
|
envVar = "HIP_VISIBLE_DEVICES"
|
||||||
|
} else {
|
||||||
|
envVar = "ROCR_VISIBLE_DEVICES"
|
||||||
|
}
|
||||||
|
} else if devices[i].Library == "CUDA" {
|
||||||
|
envVar = "CUDA_VISIBLE_DEVICES"
|
||||||
|
} else if devices[i].Library == "Vulkan" {
|
||||||
|
id = devices[i].FilteredID
|
||||||
|
envVar = "GGML_VK_VISIBLE_DEVICES"
|
||||||
|
} else {
|
||||||
|
slog.Error("Unknown Library:" + devices[i].Library)
|
||||||
|
}
|
||||||
|
|
||||||
|
extraEnvs := []string{
|
||||||
|
"GGML_CUDA_INIT=1", // force deep initialization to trigger crash on unsupported GPUs
|
||||||
|
envVar + "=" + id, // Filter to just this one GPU
|
||||||
|
}
|
||||||
|
if len(bootstrapDevices(ctx2ndPass, devices[i].LibraryPath, extraEnvs)) == 0 {
|
||||||
|
needsDelete[i] = true
|
||||||
|
} else {
|
||||||
|
supportedMu.Lock()
|
||||||
|
if _, ok := supported[devices[i].Library]; !ok {
|
||||||
|
supported[devices[i].Library] = make(map[string]map[string]int)
|
||||||
|
}
|
||||||
|
if _, ok := supported[devices[i].Library][libDir]; !ok {
|
||||||
|
supported[devices[i].Library][libDir] = make(map[string]int)
|
||||||
|
}
|
||||||
|
supported[devices[i].Library][libDir][devices[i].ID] = i
|
||||||
|
supportedMu.Unlock()
|
||||||
|
}
|
||||||
|
}(i)
|
||||||
|
}
|
||||||
|
wg.Wait()
|
||||||
|
logutil.Trace("supported GPU library combinations", "supported", supported)
|
||||||
|
|
||||||
|
filterOutVulkanThatAreSupportedByOtherGPU(needsDelete)
|
||||||
|
|
||||||
|
// Mark for deletion any overlaps - favoring the library version that can cover all GPUs if possible
|
||||||
|
filterOverlapByLibrary(supported, needsDelete)
|
||||||
|
|
||||||
|
// TODO if we ever support multiple ROCm library versions this algorithm will need to be adjusted to keep the rocmID numeric value correct
|
||||||
|
rocmID := 0
|
||||||
|
for i := 0; i < len(needsDelete); i++ {
|
||||||
|
if needsDelete[i] {
|
||||||
|
logutil.Trace("removing unsupported or overlapping GPU combination", "libDir", devices[i].LibraryPath[len(devices[i].LibraryPath)-1], "description", devices[i].Description, "compute", devices[i].Compute(), "pci_id", devices[i].PCIID)
|
||||||
|
devices = append(devices[:i], devices[i+1:]...)
|
||||||
|
needsDelete = append(needsDelete[:i], needsDelete[i+1:]...)
|
||||||
|
i--
|
||||||
|
} else if devices[i].Library == "ROCm" {
|
||||||
|
if _, err := strconv.Atoi(devices[i].ID); err == nil {
|
||||||
|
// Replace the numeric ID with the post-filtered IDs
|
||||||
|
devices[i].FilteredID = devices[i].ID
|
||||||
|
devices[i].ID = strconv.Itoa(rocmID)
|
||||||
|
}
|
||||||
|
rocmID++
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Now filter out any overlap with different libraries (favor CUDA/HIP over others)
|
||||||
|
for i := 0; i < len(devices); i++ {
|
||||||
|
for j := i + 1; j < len(devices); j++ {
|
||||||
|
// For this pass, we only drop exact duplicates
|
||||||
|
switch devices[i].Compare(devices[j]) {
|
||||||
|
case ml.SameBackendDevice:
|
||||||
|
// Same library and device, skip it
|
||||||
|
devices = append(devices[:j], devices[j+1:]...)
|
||||||
|
j--
|
||||||
|
continue
|
||||||
|
case ml.DuplicateDevice:
|
||||||
|
// Different library, choose based on priority
|
||||||
|
var droppedDevice ml.DeviceInfo
|
||||||
|
if devices[i].Library == "CUDA" || devices[i].Library == "ROCm" {
|
||||||
|
droppedDevice = devices[j]
|
||||||
|
} else {
|
||||||
|
droppedDevice = devices[i]
|
||||||
|
devices[i] = devices[j]
|
||||||
|
}
|
||||||
|
devices = append(devices[:j], devices[j+1:]...)
|
||||||
|
j--
|
||||||
|
|
||||||
|
typeStr := "discrete"
|
||||||
|
if droppedDevice.Integrated {
|
||||||
|
typeStr = "iGPU"
|
||||||
|
}
|
||||||
|
slog.Debug("dropping duplicate device",
|
||||||
|
"id", droppedDevice.ID,
|
||||||
|
"library", droppedDevice.Library,
|
||||||
|
"compute", droppedDevice.Compute(),
|
||||||
|
"name", droppedDevice.Name,
|
||||||
|
"description", droppedDevice.Description,
|
||||||
|
"libdirs", strings.Join(droppedDevice.LibraryPath, ","),
|
||||||
|
"driver", droppedDevice.Driver(),
|
||||||
|
"pci_id", droppedDevice.PCIID,
|
||||||
|
"type", typeStr,
|
||||||
|
"total", format.HumanBytes2(droppedDevice.TotalMemory),
|
||||||
|
"available", format.HumanBytes2(droppedDevice.FreeMemory),
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Reset the libDirs to what we actually wind up using for future refreshes
|
||||||
|
libDirs = make(map[string]struct{})
|
||||||
|
for _, dev := range devices {
|
||||||
|
dir := dev.LibraryPath[len(dev.LibraryPath)-1]
|
||||||
|
if dir != LibOllamaPath {
|
||||||
|
libDirs[dir] = struct{}{}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(libDirs) == 0 {
|
||||||
|
libDirs[""] = struct{}{}
|
||||||
|
}
|
||||||
|
|
||||||
|
bootstrapped = true
|
||||||
|
} else {
|
||||||
|
if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
|
||||||
|
// metal never updates free VRAM
|
||||||
|
return devices
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Debug("refreshing free memory")
|
||||||
|
updated := make([]bool, len(devices))
|
||||||
|
allDone := func() bool {
|
||||||
|
allDone := true
|
||||||
|
for _, done := range updated {
|
||||||
|
if !done {
|
||||||
|
allDone = false
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return allDone
|
||||||
|
}
|
||||||
|
|
||||||
|
// First try to use existing runners to refresh VRAM since they're already
|
||||||
|
// active on GPU(s)
|
||||||
|
for _, runner := range runners {
|
||||||
|
if runner == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
deviceIDs := runner.GetActiveDeviceIDs()
|
||||||
|
if len(deviceIDs) == 0 {
|
||||||
|
// Skip this runner since it doesn't have active GPU devices
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check to see if this runner is active on any devices that need a refresh
|
||||||
|
skip := true
|
||||||
|
devCheck:
|
||||||
|
for _, dev := range deviceIDs {
|
||||||
|
for i := range devices {
|
||||||
|
if dev == devices[i].DeviceID {
|
||||||
|
if !updated[i] {
|
||||||
|
skip = false
|
||||||
|
break devCheck
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if skip {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
// Typical refresh on existing runner is ~500ms but allow longer if the system
|
||||||
|
// is under stress before giving up and using stale data.
|
||||||
|
ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
|
||||||
|
defer cancel()
|
||||||
|
start := time.Now()
|
||||||
|
updatedDevices := runner.GetDeviceInfos(ctx)
|
||||||
|
slog.Debug("existing runner discovery took", "duration", time.Since(start))
|
||||||
|
for _, u := range updatedDevices {
|
||||||
|
for i := range devices {
|
||||||
|
if u.DeviceID == devices[i].DeviceID {
|
||||||
|
updated[i] = true
|
||||||
|
devices[i].FreeMemory = u.FreeMemory
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Short circuit if we've updated all the devices
|
||||||
|
if allDone() {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !allDone() {
|
||||||
|
slog.Debug("unable to refresh all GPUs with existing runners, performing bootstrap discovery")
|
||||||
|
|
||||||
|
// Bootstrapping may take longer in some cases (AMD windows), but we
|
||||||
|
// would rather use stale free data to get the model running sooner
|
||||||
|
ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
|
||||||
|
defer cancel()
|
||||||
|
|
||||||
|
for dir := range libDirs {
|
||||||
|
updatedDevices := bootstrapDevices(ctx, []string{LibOllamaPath, dir}, nil)
|
||||||
|
for _, u := range updatedDevices {
|
||||||
|
for i := range devices {
|
||||||
|
if u.DeviceID == devices[i].DeviceID {
|
||||||
|
updated[i] = true
|
||||||
|
devices[i].FreeMemory = u.FreeMemory
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// TODO - consider evaluating if new devices have appeared (e.g. hotplug)
|
||||||
|
}
|
||||||
|
if allDone() {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !allDone() {
|
||||||
|
slog.Warn("unable to refresh free memory, using old values")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return devices
|
||||||
|
}
|
||||||
|
|
||||||
|
func filterOutVulkanThatAreSupportedByOtherGPU(needsDelete []bool) {
|
||||||
|
// Filter out Vulkan devices that share a PCI ID with a non-Vulkan device that is not marked for deletion
|
||||||
|
for i := range devices {
|
||||||
|
if devices[i].Library != "Vulkan" || needsDelete[i] {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if devices[i].PCIID == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
for j := range devices {
|
||||||
|
if i == j {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if devices[j].PCIID == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if devices[j].PCIID == devices[i].PCIID && devices[j].Library != "Vulkan" && !needsDelete[j] {
|
||||||
|
needsDelete[i] = true
|
||||||
|
slog.Debug("dropping Vulkan duplicate by PCI ID",
|
||||||
|
"vulkan_id", devices[i].ID,
|
||||||
|
"vulkan_libdir", devices[i].LibraryPath[len(devices[i].LibraryPath)-1],
|
||||||
|
"pci_id", devices[i].PCIID,
|
||||||
|
"kept_library", devices[j].Library,
|
||||||
|
"kept_id", devices[j].ID,
|
||||||
|
)
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func filterOverlapByLibrary(supported map[string]map[string]map[string]int, needsDelete []bool) {
|
||||||
|
// For multi-GPU systems, use the newest version that supports all the GPUs
|
||||||
|
for _, byLibDirs := range supported {
|
||||||
|
libDirs := make([]string, 0, len(byLibDirs))
|
||||||
|
for libDir := range byLibDirs {
|
||||||
|
libDirs = append(libDirs, libDir)
|
||||||
|
}
|
||||||
|
sort.Sort(sort.Reverse(sort.StringSlice(libDirs)))
|
||||||
|
anyMissing := false
|
||||||
|
var newest string
|
||||||
|
for _, newest = range libDirs {
|
||||||
|
for _, libDir := range libDirs {
|
||||||
|
if libDir == newest {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if len(byLibDirs[newest]) != len(byLibDirs[libDir]) {
|
||||||
|
anyMissing = true
|
||||||
|
break
|
||||||
|
}
|
||||||
|
for dev := range byLibDirs[newest] {
|
||||||
|
if _, found := byLibDirs[libDir][dev]; !found {
|
||||||
|
anyMissing = true
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !anyMissing {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Now we can mark overlaps for deletion
|
||||||
|
for _, libDir := range libDirs {
|
||||||
|
if libDir == newest {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
for dev, i := range byLibDirs[libDir] {
|
||||||
|
if _, found := byLibDirs[newest][dev]; found {
|
||||||
|
needsDelete[i] = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type bootstrapRunner struct {
|
||||||
|
port int
|
||||||
|
cmd *exec.Cmd
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *bootstrapRunner) GetPort() int {
|
||||||
|
return r.port
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *bootstrapRunner) HasExited() bool {
|
||||||
|
if r.cmd != nil && r.cmd.ProcessState != nil {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
func bootstrapDevices(ctx context.Context, ollamaLibDirs []string, extraEnvs []string) []ml.DeviceInfo {
|
||||||
|
// TODO DRY out with llm/server.go
|
||||||
|
slog.Debug("spawning runner with", "OLLAMA_LIBRARY_PATH", ollamaLibDirs, "extra_envs", extraEnvs)
|
||||||
|
start := time.Now()
|
||||||
|
defer func() {
|
||||||
|
slog.Debug("bootstrap discovery took", "duration", time.Since(start), "OLLAMA_LIBRARY_PATH", ollamaLibDirs, "extra_envs", extraEnvs)
|
||||||
|
}()
|
||||||
|
port := 0
|
||||||
|
if a, err := net.ResolveTCPAddr("tcp", "localhost:0"); err == nil {
|
||||||
|
var l *net.TCPListener
|
||||||
|
if l, err = net.ListenTCP("tcp", a); err == nil {
|
||||||
|
port = l.Addr().(*net.TCPAddr).Port
|
||||||
|
l.Close()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if port == 0 {
|
||||||
|
slog.Debug("ResolveTCPAddr failed, using random port")
|
||||||
|
port = rand.Intn(65535-49152) + 49152 // get a random port in the ephemeral range
|
||||||
|
}
|
||||||
|
params := []string{"runner", "--ollama-engine", "--port", strconv.Itoa(port)}
|
||||||
|
var pathEnv string
|
||||||
|
switch runtime.GOOS {
|
||||||
|
case "windows":
|
||||||
|
pathEnv = "PATH"
|
||||||
|
case "darwin":
|
||||||
|
pathEnv = "DYLD_LIBRARY_PATH"
|
||||||
|
default:
|
||||||
|
pathEnv = "LD_LIBRARY_PATH"
|
||||||
|
}
|
||||||
|
libraryPaths := append([]string{LibOllamaPath}, ollamaLibDirs...)
|
||||||
|
if rocmDir != "" {
|
||||||
|
libraryPaths = append(libraryPaths, rocmDir)
|
||||||
|
}
|
||||||
|
// Note: we always put our dependency paths first
|
||||||
|
// since these are the exact version we compiled/linked against
|
||||||
|
if libraryPath, ok := os.LookupEnv(pathEnv); ok {
|
||||||
|
libraryPaths = append(libraryPaths, filepath.SplitList(libraryPath)...)
|
||||||
|
}
|
||||||
|
|
||||||
|
cmd := exec.Command(exe, params...)
|
||||||
|
cmd.Env = os.Environ()
|
||||||
|
if envconfig.LogLevel() == logutil.LevelTrace {
|
||||||
|
cmd.Stdout = os.Stdout
|
||||||
|
cmd.Stderr = os.Stderr
|
||||||
|
}
|
||||||
|
|
||||||
|
// cmd.SysProcAttr = llm.LlamaServerSysProcAttr // circular dependency - bring back once refactored
|
||||||
|
pathEnvVal := strings.Join(libraryPaths, string(filepath.ListSeparator))
|
||||||
|
pathNeeded := true
|
||||||
|
ollamaPathNeeded := true
|
||||||
|
extraDone := make([]bool, len(extraEnvs))
|
||||||
|
for i := range cmd.Env {
|
||||||
|
cmp := strings.SplitN(cmd.Env[i], "=", 2)
|
||||||
|
if strings.EqualFold(cmp[0], pathEnv) {
|
||||||
|
cmd.Env[i] = pathEnv + "=" + pathEnvVal
|
||||||
|
pathNeeded = false
|
||||||
|
} else if strings.EqualFold(cmp[0], "OLLAMA_LIBRARY_PATH") {
|
||||||
|
cmd.Env[i] = "OLLAMA_LIBRARY_PATH=" + strings.Join(ollamaLibDirs, string(filepath.ListSeparator))
|
||||||
|
ollamaPathNeeded = false
|
||||||
|
} else {
|
||||||
|
for j := range extraEnvs {
|
||||||
|
if extraDone[j] {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
extra := strings.SplitN(extraEnvs[j], "=", 2)
|
||||||
|
if cmp[0] == extra[0] {
|
||||||
|
cmd.Env[i] = extraEnvs[j]
|
||||||
|
extraDone[j] = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if pathNeeded {
|
||||||
|
cmd.Env = append(cmd.Env, pathEnv+"="+pathEnvVal)
|
||||||
|
}
|
||||||
|
if ollamaPathNeeded {
|
||||||
|
cmd.Env = append(cmd.Env, "OLLAMA_LIBRARY_PATH="+strings.Join(ollamaLibDirs, string(filepath.ListSeparator)))
|
||||||
|
}
|
||||||
|
for i := range extraDone {
|
||||||
|
if !extraDone[i] {
|
||||||
|
cmd.Env = append(cmd.Env, extraEnvs[i])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
logutil.Trace("starting runner for device discovery", "env", cmd.Env, "cmd", cmd)
|
||||||
|
if err := cmd.Start(); err != nil {
|
||||||
|
slog.Warn("unable to start discovery subprocess", "cmd", cmd, "error", err)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
go func() {
|
||||||
|
cmd.Wait() // exit status ignored
|
||||||
|
}()
|
||||||
|
|
||||||
|
defer cmd.Process.Kill()
|
||||||
|
devices, err := GetDevicesFromRunner(ctx, &bootstrapRunner{port: port, cmd: cmd})
|
||||||
|
if err != nil {
|
||||||
|
if cmd.ProcessState != nil && cmd.ProcessState.ExitCode() >= 0 {
|
||||||
|
// Expected during bootstrapping while we filter out unsupported AMD GPUs
|
||||||
|
logutil.Trace("runner exited", "OLLAMA_LIBRARY_PATH", ollamaLibDirs, "extra_envs", extraEnvs, "code", cmd.ProcessState.ExitCode())
|
||||||
|
} else {
|
||||||
|
slog.Info("failure during GPU discovery", "OLLAMA_LIBRARY_PATH", ollamaLibDirs, "extra_envs", extraEnvs, "error", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
logutil.Trace("runner enumerated devices", "OLLAMA_LIBRARY_PATH", ollamaLibDirs, "devices", devices)
|
||||||
|
|
||||||
|
return devices
|
||||||
|
}
|
||||||
|
|
||||||
|
func GetDevicesFromRunner(ctx context.Context, runner BaseRunner) ([]ml.DeviceInfo, error) {
|
||||||
|
var moreDevices []ml.DeviceInfo
|
||||||
|
port := runner.GetPort()
|
||||||
|
tick := time.Tick(10 * time.Millisecond)
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-ctx.Done():
|
||||||
|
return nil, fmt.Errorf("failed to finish discovery before timeout")
|
||||||
|
case <-tick:
|
||||||
|
r, err := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("http://127.0.0.1:%d/info", port), nil)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to create request: %w", err)
|
||||||
|
}
|
||||||
|
r.Header.Set("Content-Type", "application/json")
|
||||||
|
|
||||||
|
resp, err := http.DefaultClient.Do(r)
|
||||||
|
if err != nil {
|
||||||
|
// slog.Warn("failed to send request", "error", err)
|
||||||
|
if runner.HasExited() {
|
||||||
|
return nil, fmt.Errorf("runner crashed")
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
defer resp.Body.Close()
|
||||||
|
|
||||||
|
if resp.StatusCode == http.StatusNotFound {
|
||||||
|
// old runner, fall back to bootstrapping model
|
||||||
|
return nil, fmt.Errorf("llamarunner free vram reporting not supported")
|
||||||
|
}
|
||||||
|
|
||||||
|
body, err := io.ReadAll(resp.Body)
|
||||||
|
if err != nil {
|
||||||
|
slog.Warn("failed to read response", "error", err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if resp.StatusCode != 200 {
|
||||||
|
logutil.Trace("runner failed to discover free VRAM", "status", resp.StatusCode, "response", body)
|
||||||
|
return nil, fmt.Errorf("runner error: %s", string(body))
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := json.Unmarshal(body, &moreDevices); err != nil {
|
||||||
|
slog.Warn("unmarshal encode response", "error", err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
return moreDevices, nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
108
discover/runner_test.go
Normal file
108
discover/runner_test.go
Normal file
@@ -0,0 +1,108 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/app/lifecycle"
|
||||||
|
)
|
||||||
|
|
||||||
|
func init() {
|
||||||
|
lifecycle.InitLogging()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestFilterOverlapByLibrary(t *testing.T) {
|
||||||
|
type testcase struct {
|
||||||
|
name string
|
||||||
|
inp map[string]map[string]map[string]int
|
||||||
|
exp []bool
|
||||||
|
}
|
||||||
|
for _, tc := range []testcase{
|
||||||
|
{
|
||||||
|
name: "empty",
|
||||||
|
inp: map[string]map[string]map[string]int{},
|
||||||
|
exp: []bool{}, // needs deletion
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "single no overlap",
|
||||||
|
inp: map[string]map[string]map[string]int{
|
||||||
|
"CUDA": {
|
||||||
|
"cuda_v12": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 0,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
exp: []bool{false},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100% overlap pick 2nd",
|
||||||
|
inp: map[string]map[string]map[string]int{
|
||||||
|
"CUDA": {
|
||||||
|
"cuda_v12": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 0,
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 1,
|
||||||
|
},
|
||||||
|
"cuda_v13": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 2,
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 3,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
exp: []bool{true, true, false, false},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100% overlap pick 1st",
|
||||||
|
inp: map[string]map[string]map[string]int{
|
||||||
|
"CUDA": {
|
||||||
|
"cuda_v13": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 0,
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 1,
|
||||||
|
},
|
||||||
|
"cuda_v12": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 2,
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 3,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
exp: []bool{false, false, true, true},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "partial overlap pick older",
|
||||||
|
inp: map[string]map[string]map[string]int{
|
||||||
|
"CUDA": {
|
||||||
|
"cuda_v13": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 0,
|
||||||
|
},
|
||||||
|
"cuda_v12": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 1,
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 2,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
exp: []bool{true, false, false},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "no overlap",
|
||||||
|
inp: map[string]map[string]map[string]int{
|
||||||
|
"CUDA": {
|
||||||
|
"cuda_v13": {
|
||||||
|
"GPU-d7b00605-c0c8-152d-529d-e03726d5dc52": 0,
|
||||||
|
},
|
||||||
|
"cuda_v12": {
|
||||||
|
"GPU-cd6c3216-03d2-a8eb-8235-2ffbf571712e": 1,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
exp: []bool{false, false},
|
||||||
|
},
|
||||||
|
} {
|
||||||
|
t.Run(tc.name, func(t *testing.T) {
|
||||||
|
needsDelete := make([]bool, len(tc.exp))
|
||||||
|
filterOverlapByLibrary(tc.inp, needsDelete)
|
||||||
|
for i, exp := range tc.exp {
|
||||||
|
if needsDelete[i] != exp {
|
||||||
|
t.Fatalf("expected: %v\ngot: %v", tc.exp, needsDelete)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
214
discover/types.go
Normal file
214
discover/types.go
Normal file
@@ -0,0 +1,214 @@
|
|||||||
|
package discover
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"log/slog"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/ollama/ollama/format"
|
||||||
|
"github.com/ollama/ollama/ml"
|
||||||
|
)
|
||||||
|
|
||||||
|
type memInfo struct {
|
||||||
|
TotalMemory uint64 `json:"total_memory,omitempty"`
|
||||||
|
FreeMemory uint64 `json:"free_memory,omitempty"`
|
||||||
|
FreeSwap uint64 `json:"free_swap,omitempty"` // TODO split this out for system only
|
||||||
|
}
|
||||||
|
|
||||||
|
// Beginning of an `ollama info` command
|
||||||
|
type GpuInfo struct { // TODO better name maybe "InferenceProcessor"?
|
||||||
|
ml.DeviceID
|
||||||
|
memInfo
|
||||||
|
|
||||||
|
// Optional variant to select (e.g. versions, cpu feature flags)
|
||||||
|
Variant string `json:"variant"`
|
||||||
|
|
||||||
|
// MinimumMemory represents the minimum memory required to use the GPU
|
||||||
|
MinimumMemory uint64 `json:"-"`
|
||||||
|
|
||||||
|
// Any extra PATH/LD_LIBRARY_PATH dependencies required for the Library to operate properly
|
||||||
|
DependencyPath []string `json:"lib_path,omitempty"`
|
||||||
|
|
||||||
|
// Set to true if we can NOT reliably discover FreeMemory. A value of true indicates
|
||||||
|
// the FreeMemory is best effort, and may over or under report actual memory usage
|
||||||
|
// False indicates FreeMemory can generally be trusted on this GPU
|
||||||
|
UnreliableFreeMemory bool
|
||||||
|
|
||||||
|
// GPU information
|
||||||
|
filterID string // AMD/Vulkan Workaround: The numeric ID of the device used to filter out other devices
|
||||||
|
Name string `json:"name"` // user friendly name if available
|
||||||
|
ComputeMajor int `json:"compute_major"` // Compute Capability or gfx
|
||||||
|
ComputeMinor int `json:"compute_minor"`
|
||||||
|
|
||||||
|
// Driver Information - TODO no need to put this on each GPU
|
||||||
|
DriverMajor int `json:"driver_major,omitempty"`
|
||||||
|
DriverMinor int `json:"driver_minor,omitempty"`
|
||||||
|
|
||||||
|
// TODO other performance capability info to help in scheduling decisions
|
||||||
|
}
|
||||||
|
|
||||||
|
func (gpu GpuInfo) RunnerName() string {
|
||||||
|
if gpu.Variant != "" {
|
||||||
|
return gpu.Library + "_" + gpu.Variant
|
||||||
|
}
|
||||||
|
return gpu.Library
|
||||||
|
}
|
||||||
|
|
||||||
|
type CPUInfo struct {
|
||||||
|
GpuInfo
|
||||||
|
CPUs []CPU
|
||||||
|
}
|
||||||
|
|
||||||
|
// CPU type represents a CPU Package occupying a socket
|
||||||
|
type CPU struct {
|
||||||
|
ID string `cpuinfo:"processor"`
|
||||||
|
VendorID string `cpuinfo:"vendor_id"`
|
||||||
|
ModelName string `cpuinfo:"model name"`
|
||||||
|
CoreCount int
|
||||||
|
EfficiencyCoreCount int // Performance = CoreCount - Efficiency
|
||||||
|
ThreadCount int
|
||||||
|
}
|
||||||
|
|
||||||
|
type GpuInfoList []GpuInfo
|
||||||
|
|
||||||
|
func (l GpuInfoList) ByLibrary() []GpuInfoList {
|
||||||
|
resp := []GpuInfoList{}
|
||||||
|
libs := []string{}
|
||||||
|
for _, info := range l {
|
||||||
|
found := false
|
||||||
|
requested := info.Library
|
||||||
|
if info.Variant != "" {
|
||||||
|
requested += "_" + info.Variant
|
||||||
|
}
|
||||||
|
for i, lib := range libs {
|
||||||
|
if lib == requested {
|
||||||
|
resp[i] = append(resp[i], info)
|
||||||
|
found = true
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if !found {
|
||||||
|
libs = append(libs, requested)
|
||||||
|
resp = append(resp, []GpuInfo{info})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return resp
|
||||||
|
}
|
||||||
|
|
||||||
|
func LogDetails(devices []ml.DeviceInfo) {
|
||||||
|
for _, dev := range devices {
|
||||||
|
var libs []string
|
||||||
|
for _, dir := range dev.LibraryPath {
|
||||||
|
if strings.Contains(dir, filepath.Join("lib", "ollama")) {
|
||||||
|
libs = append(libs, filepath.Base(dir))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
typeStr := "discrete"
|
||||||
|
if dev.Integrated {
|
||||||
|
typeStr = "iGPU"
|
||||||
|
}
|
||||||
|
slog.Info("inference compute",
|
||||||
|
"id", dev.ID,
|
||||||
|
"library", dev.Library,
|
||||||
|
"compute", dev.Compute(),
|
||||||
|
"name", dev.Name,
|
||||||
|
"description", dev.Description,
|
||||||
|
"libdirs", strings.Join(libs, ","),
|
||||||
|
"driver", dev.Driver(),
|
||||||
|
"pci_id", dev.PCIID,
|
||||||
|
"type", typeStr,
|
||||||
|
"total", format.HumanBytes2(dev.TotalMemory),
|
||||||
|
"available", format.HumanBytes2(dev.FreeMemory),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
// CPU inference
|
||||||
|
if len(devices) == 0 {
|
||||||
|
dev, _ := GetCPUMem()
|
||||||
|
slog.Info("inference compute",
|
||||||
|
"id", "cpu",
|
||||||
|
"library", "cpu",
|
||||||
|
"compute", "",
|
||||||
|
"name", "cpu",
|
||||||
|
"description", "cpu",
|
||||||
|
"libdirs", "ollama",
|
||||||
|
"driver", "",
|
||||||
|
"pci_id", "",
|
||||||
|
"type", "",
|
||||||
|
"total", format.HumanBytes2(dev.TotalMemory),
|
||||||
|
"available", format.HumanBytes2(dev.FreeMemory),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Sort by Free Space
|
||||||
|
type ByFreeMemory []GpuInfo
|
||||||
|
|
||||||
|
func (a ByFreeMemory) Len() int { return len(a) }
|
||||||
|
func (a ByFreeMemory) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
|
||||||
|
func (a ByFreeMemory) Less(i, j int) bool { return a[i].FreeMemory < a[j].FreeMemory }
|
||||||
|
|
||||||
|
type SystemInfo struct {
|
||||||
|
System CPUInfo `json:"system"`
|
||||||
|
GPUs []GpuInfo `json:"gpus"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// Return the optimal number of threads to use for inference
|
||||||
|
func (si SystemInfo) GetOptimalThreadCount() int {
|
||||||
|
if len(si.System.CPUs) == 0 {
|
||||||
|
// Fall back to Go's num CPU
|
||||||
|
return runtime.NumCPU()
|
||||||
|
}
|
||||||
|
|
||||||
|
coreCount := 0
|
||||||
|
for _, c := range si.System.CPUs {
|
||||||
|
coreCount += c.CoreCount - c.EfficiencyCoreCount
|
||||||
|
}
|
||||||
|
|
||||||
|
return coreCount
|
||||||
|
}
|
||||||
|
|
||||||
|
// For each GPU, check if it does NOT support flash attention
|
||||||
|
func (l GpuInfoList) FlashAttentionSupported() bool {
|
||||||
|
for _, gpu := range l {
|
||||||
|
supportsFA := gpu.Library == "cpu" ||
|
||||||
|
gpu.Name == "Metal" || gpu.Library == "Metal" ||
|
||||||
|
(gpu.Library == "CUDA" && gpu.DriverMajor >= 7 && !(gpu.ComputeMajor == 7 && gpu.ComputeMinor == 2)) || // We don't have kernels for Jetson Xavier
|
||||||
|
gpu.Library == "ROCm" ||
|
||||||
|
gpu.Library == "Vulkan"
|
||||||
|
|
||||||
|
if !supportsFA {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
type BaseRunner interface {
|
||||||
|
// GetPort returns the localhost port number the runner is running on
|
||||||
|
GetPort() int
|
||||||
|
|
||||||
|
// HasExited indicates if the runner is no longer running. This can be used during
|
||||||
|
// bootstrap to detect if a given filtered device is incompatible and triggered an assert
|
||||||
|
HasExited() bool
|
||||||
|
}
|
||||||
|
|
||||||
|
type RunnerDiscovery interface {
|
||||||
|
BaseRunner
|
||||||
|
|
||||||
|
// GetDeviceInfos will perform a query of the underlying device libraries
|
||||||
|
// for device identification and free VRAM information
|
||||||
|
// During bootstrap scenarios, this routine may take seconds to complete
|
||||||
|
GetDeviceInfos(ctx context.Context) []ml.DeviceInfo
|
||||||
|
}
|
||||||
|
|
||||||
|
type FilteredRunnerDiscovery interface {
|
||||||
|
RunnerDiscovery
|
||||||
|
|
||||||
|
// GetActiveDeviceIDs returns the filtered set of devices actively in
|
||||||
|
// use by this runner for running models. If the runner is a bootstrap runner, no devices
|
||||||
|
// will be active yet so no device IDs are returned.
|
||||||
|
// This routine will not query the underlying device and will return immediately
|
||||||
|
GetActiveDeviceIDs() []ml.DeviceID
|
||||||
|
}
|
||||||
@@ -2,8 +2,9 @@
|
|||||||
|
|
||||||
### Getting Started
|
### Getting Started
|
||||||
* [Quickstart](../README.md#quickstart)
|
* [Quickstart](../README.md#quickstart)
|
||||||
* [Examples](../examples)
|
* [Examples](./examples.md)
|
||||||
* [Importing models](./import.md)
|
* [Importing models](./import.md)
|
||||||
|
* [MacOS Documentation](./macos.md)
|
||||||
* [Linux Documentation](./linux.md)
|
* [Linux Documentation](./linux.md)
|
||||||
* [Windows Documentation](./windows.md)
|
* [Windows Documentation](./windows.md)
|
||||||
* [Docker Documentation](./docker.md)
|
* [Docker Documentation](./docker.md)
|
||||||
|
|||||||
812
docs/api.md
812
docs/api.md
File diff suppressed because it is too large
Load Diff
40
docs/cloud.md
Normal file
40
docs/cloud.md
Normal file
@@ -0,0 +1,40 @@
|
|||||||
|
# Cloud
|
||||||
|
|
||||||
|
| Ollama's cloud is currently in preview. For full documentation, see [Ollama's documentation](https://docs.ollama.com/cloud).
|
||||||
|
|
||||||
|
## Cloud Models
|
||||||
|
|
||||||
|
[Cloud models](https://ollama.com/cloud) are a new kind of model in Ollama that can run without a powerful GPU. Instead, cloud models are automatically offloaded to Ollama's cloud while offering the same capabilities as local models, making it possible to keep using your local tools while running larger models that wouldn’t fit on a personal computer.
|
||||||
|
|
||||||
|
Ollama currently supports the following cloud models, with more coming soon:
|
||||||
|
|
||||||
|
- `gpt-oss:20b-cloud`
|
||||||
|
- `gpt-oss:120b-cloud`
|
||||||
|
- `deepseek-v3.1:671b-cloud`
|
||||||
|
- `qwen3-coder:480b-cloud`
|
||||||
|
|
||||||
|
### Get started
|
||||||
|
|
||||||
|
To run a cloud model, open the terminal and run:
|
||||||
|
|
||||||
|
```
|
||||||
|
ollama run gpt-oss:120b-cloud
|
||||||
|
```
|
||||||
|
|
||||||
|
To run cloud models with integrations that work with Ollama, first download the cloud model:
|
||||||
|
|
||||||
|
```
|
||||||
|
ollama pull qwen3-coder:480b-cloud
|
||||||
|
```
|
||||||
|
|
||||||
|
Then sign in to Ollama:
|
||||||
|
|
||||||
|
```
|
||||||
|
ollama signin
|
||||||
|
```
|
||||||
|
|
||||||
|
Finally, access the model using the model name `qwen3-coder:480b-cloud` via Ollama's local API or tooling.
|
||||||
|
|
||||||
|
## Cloud API access
|
||||||
|
|
||||||
|
Cloud models can also be accessed directly on ollama.com's API. For more information, see the [docs](https://docs.ollama.com/cloud).
|
||||||
@@ -1,150 +1,163 @@
|
|||||||
# Development
|
# Development
|
||||||
|
|
||||||
Install required tools:
|
Install prerequisites:
|
||||||
|
|
||||||
- cmake version 3.24 or higher
|
- [Go](https://go.dev/doc/install)
|
||||||
- go version 1.22 or higher
|
- C/C++ Compiler e.g. Clang on macOS, [TDM-GCC](https://github.com/jmeubank/tdm-gcc/releases/latest) (Windows amd64) or [llvm-mingw](https://github.com/mstorsjo/llvm-mingw) (Windows arm64), GCC/Clang on Linux.
|
||||||
- gcc version 11.4.0 or higher
|
|
||||||
|
|
||||||
### MacOS
|
Then build and run Ollama from the root directory of the repository:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
brew install go cmake gcc
|
go run . serve
|
||||||
```
|
```
|
||||||
|
|
||||||
Optionally enable debugging and more verbose logging:
|
> [!NOTE]
|
||||||
|
> Ollama includes native code compiled with CGO. From time to time these data structures can change and CGO can get out of sync resulting in unexpected crashes. You can force a full build of the native code by running `go clean -cache` first.
|
||||||
|
|
||||||
```bash
|
|
||||||
# At build time
|
|
||||||
export CGO_CFLAGS="-g"
|
|
||||||
|
|
||||||
# At runtime
|
## macOS (Apple Silicon)
|
||||||
export OLLAMA_DEBUG=1
|
|
||||||
|
macOS Apple Silicon supports Metal which is built-in to the Ollama binary. No additional steps are required.
|
||||||
|
|
||||||
|
## macOS (Intel)
|
||||||
|
|
||||||
|
Install prerequisites:
|
||||||
|
|
||||||
|
- [CMake](https://cmake.org/download/) or `brew install cmake`
|
||||||
|
|
||||||
|
Then, configure and build the project:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cmake -B build
|
||||||
|
cmake --build build
|
||||||
```
|
```
|
||||||
|
|
||||||
Get the required libraries and build the native LLM code:
|
Lastly, run Ollama:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
go generate ./...
|
go run . serve
|
||||||
```
|
```
|
||||||
|
|
||||||
Then build ollama:
|
## Windows
|
||||||
|
|
||||||
```bash
|
Install prerequisites:
|
||||||
go build .
|
|
||||||
|
- [CMake](https://cmake.org/download/)
|
||||||
|
- [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) including the Native Desktop Workload
|
||||||
|
- (Optional) AMD GPU support
|
||||||
|
- [ROCm](https://rocm.docs.amd.com/en/latest/)
|
||||||
|
- [Ninja](https://github.com/ninja-build/ninja/releases)
|
||||||
|
- (Optional) NVIDIA GPU support
|
||||||
|
- [CUDA SDK](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network)
|
||||||
|
|
||||||
|
Then, configure and build the project:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cmake -B build
|
||||||
|
cmake --build build --config Release
|
||||||
```
|
```
|
||||||
|
|
||||||
Now you can run `ollama`:
|
> [!IMPORTANT]
|
||||||
|
> Building for ROCm requires additional flags:
|
||||||
|
> ```
|
||||||
|
> cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
|
||||||
|
> cmake --build build --config Release
|
||||||
|
> ```
|
||||||
|
|
||||||
```bash
|
|
||||||
./ollama
|
Lastly, run Ollama:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
go run . serve
|
||||||
```
|
```
|
||||||
|
|
||||||
### Linux
|
## Windows (ARM)
|
||||||
|
|
||||||
#### Linux CUDA (NVIDIA)
|
Windows ARM does not support additional acceleration libraries at this time. Do not use cmake, simply `go run` or `go build`.
## Linux

Install prerequisites:

- [CMake](https://cmake.org/download/) or `sudo apt install cmake` or `sudo dnf install cmake`
- (Optional) AMD GPU support
  - [ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html)
- (Optional) NVIDIA GPU support
  - [CUDA SDK](https://developer.nvidia.com/cuda-downloads)

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.
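For example (a sketch that assumes the common default install locations, `/usr/local/cuda` for the CUDA SDK and `/opt/rocm` for ROCm), you can expose the toolchains to the current shell before configuring:

```shell
# Adjust the paths if your distro installs CUDA or ROCm elsewhere
export PATH=/usr/local/cuda/bin:/opt/rocm/bin:$PATH
```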
Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```

## Docker

```shell
docker build .
```
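A minimal sketch of running the image you just built; the `ollama-dev` tag is only an example, and the run flags mirror the CPU-only Docker command shown later in this document:

```shell
# Tag the local build so it is easy to reference
docker build -t ollama-dev .

# Run it the same way as the published image
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama-dev
```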
|
||||||
|
|
||||||
Then build the binary:
|
### ROCm
|
||||||
|
|
||||||
```
|
```shell
|
||||||
go build .
|
docker build --build-arg FLAVOR=rocm .
|
||||||
```
|
```
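To try a locally built ROCm image against an AMD GPU, you can pass the render devices through, mirroring the AMD GPU command later in this document (the `ollama-rocm-dev` tag is only an example):

```shell
docker build --build-arg FLAVOR=rocm -t ollama-rocm-dev .
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama-rocm-dev
```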
|
||||||
|
|
||||||
ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
|
## Running tests
|
||||||
|
|
||||||
#### Advanced CPU Settings
|
To run tests, use `go test`:
|
||||||
|
|
||||||
By default, running `go generate ./...` will compile a few different variations
|
```shell
|
||||||
of the LLM library based on common CPU families and vector math capabilities,
|
go test ./...
|
||||||
including a lowest-common-denominator which should run on almost any 64 bit CPU
|
|
||||||
somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to
|
|
||||||
load. If you would like to build a CPU-based build customized for your
|
|
||||||
processor, you can set `OLLAMA_CUSTOM_CPU_DEFS` to the llama.cpp flags you would
|
|
||||||
like to use. For example, to compile an optimized binary for an Intel i9-9880H,
|
|
||||||
you might use:
|
|
||||||
|
|
||||||
```
|
|
||||||
OLLAMA_CUSTOM_CPU_DEFS="-DGGML_AVX=on -DGGML_AVX2=on -DGGML_F16C=on -DGGML_FMA=on" go generate ./...
|
|
||||||
go build .
|
|
||||||
```
|
```
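While iterating, the standard `go test` selectors can narrow the run; the package path and test name below are placeholders, not project conventions:

```shell
# Run a single package
go test ./server/...

# Run only tests matching a name pattern
go test -run TestSomething ./...
```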
|
||||||
|
|
||||||
#### Containerized Linux Build
|
> NOTE: In rare circumstances, you may need to change a package using the new
|
||||||
|
> "synctest" package in go1.24.
|
||||||
|
>
|
||||||
|
> If you do not have the "synctest" package enabled, you will not see build or
|
||||||
|
> test failures resulting from your change(s), if any, locally, but CI will
|
||||||
|
> break.
|
||||||
|
>
|
||||||
|
> If you see failures in CI, you can either keep pushing changes to see if the
|
||||||
|
> CI build passes, or you can enable the "synctest" package locally to see the
|
||||||
|
> failures before pushing.
|
||||||
|
>
|
||||||
|
> To enable the "synctest" package for testing, run the following command:
|
||||||
|
>
|
||||||
|
> ```shell
|
||||||
|
> GOEXPERIMENT=synctest go test ./...
|
||||||
|
> ```
|
||||||
|
>
|
||||||
|
> If you wish to enable synctest for all go commands, you can set the
|
||||||
|
> `GOEXPERIMENT` environment variable in your shell profile or by using:
|
||||||
|
>
|
||||||
|
> ```shell
|
||||||
|
> go env -w GOEXPERIMENT=synctest
|
||||||
|
> ```
|
||||||
|
>
|
||||||
|
> Which will enable the "synctest" package for all go commands without needing
|
||||||
|
> to set it for all shell sessions.
|
||||||
|
>
|
||||||
|
> The synctest package is not required for production builds.
|
||||||
|
|
||||||
If you have Docker available, you can build linux binaries with `./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`
|
## Library detection
|
||||||
|
|
||||||
### Windows
|
Ollama looks for acceleration libraries in the following paths relative to the `ollama` executable:
|
||||||
|
|
||||||
Note: The Windows build for Ollama is still under development.
|
* `./lib/ollama` (Windows)
|
||||||
|
* `../lib/ollama` (Linux)
|
||||||
|
* `.` (macOS)
|
||||||
|
* `build/lib/ollama` (for development)
|
||||||
|
|
||||||
First, install required tools:
|
If the libraries are not found, Ollama will not run with any acceleration libraries.
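As a quick sanity check (a sketch; the paths are relative to the `ollama` executable, so run this from the directory that contains the binary):

```shell
# Linux install layout
ls ../lib/ollama

# Development build layout
ls build/lib/ollama
```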
|
||||||
|
|
||||||
- MSVC toolchain - C/C++ and cmake as minimal requirements
|
|
||||||
- Go version 1.22 or higher
|
|
||||||
- MinGW (pick one variant) with GCC.
|
|
||||||
- [MinGW-w64](https://www.mingw-w64.org/)
|
|
||||||
- [MSYS2](https://www.msys2.org/)
|
|
||||||
- The `ThreadJob` Powershell module: `Install-Module -Name ThreadJob -Scope CurrentUser`
|
|
||||||
|
|
||||||
Then, build the `ollama` binary:
|
|
||||||
|
|
||||||
```powershell
|
|
||||||
$env:CGO_ENABLED="1"
|
|
||||||
go generate ./...
|
|
||||||
go build .
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Windows CUDA (NVIDIA)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools described above, install CUDA after installing MSVC.
|
|
||||||
|
|
||||||
- [NVIDIA CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)
|
|
||||||
|
|
||||||
|
|
||||||
#### Windows ROCm (AMD Radeon)
|
|
||||||
|
|
||||||
In addition to the common Windows development tools described above, install AMDs HIP package after installing MSVC.
|
|
||||||
|
|
||||||
- [AMD HIP](https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html)
|
|
||||||
- [Strawberry Perl](https://strawberryperl.com/)
|
|
||||||
|
|
||||||
Lastly, add `ninja.exe` included with MSVC to the system path (e.g. `C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja`).
|
|
||||||
|
|||||||
@@ -2,7 +2,7 @@
### CPU only

```shell
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
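Once the container is up, you can confirm the API is reachable through the published port (assuming the default mapping above):

```shell
curl http://localhost:11434/api/version
```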
|
||||||
|
|
||||||
@@ -11,50 +11,57 @@ Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-
#### Install with Apt

1. Configure the repository

    ```shell
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
      | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
      | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
      | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update
    ```

2. Install the NVIDIA Container Toolkit packages

    ```shell
    sudo apt-get install -y nvidia-container-toolkit
    ```

#### Install with Yum or Dnf

1. Configure the repository

    ```shell
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
      | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
    ```

2. Install the NVIDIA Container Toolkit packages

    ```shell
    sudo yum install -y nvidia-container-toolkit
    ```

#### Configure Docker to use Nvidia driver

```shell
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
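Optionally, confirm that Docker now lists the NVIDIA runtime (the exact output format varies by Docker version):

```shell
docker info | grep -i runtimes
```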
|
||||||
|
|
||||||
#### Start the container
|
#### Start the container
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||||
```
|
```
|
||||||
|
|
||||||
|
> [!NOTE]
|
||||||
|
> If you're running on an NVIDIA JetPack system, Ollama can't automatically discover the correct JetPack version. Pass the environment variable JETSON_JETPACK=5 or JETSON_JETPACK=6 to the container to select version 5 or 6.
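For example, on a JetPack 6 device the variable can be passed with Docker's standard `-e` flag (a sketch based on the note above):

```shell
docker run -d --gpus=all -e JETSON_JETPACK=6 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```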
|
||||||
|
|
||||||
### AMD GPU
|
### AMD GPU
|
||||||
|
|
||||||
To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:
|
To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
|
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -62,8 +69,8 @@ docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 114
|
|||||||
|
|
||||||
Now you can run a model:

```shell
docker exec -it ollama ollama run llama3.2
```
|
||||||
|
|
||||||
### Try different models
|
### Try different models
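For example, any other model from the Ollama library can be started the same way (the model name below is only an illustration):

```shell
docker exec -it ollama ollama run mistral
```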