ollama-for-amd/llama/patches/0031-report-LoadLibrary-failures.patch
Daniel Hiltgen ba2253dc30 win: more verbose load failures (#12683)
When loading the dynamic libraries, if something goes wrong, report some
details. Unfortunately this won't explain which dependencies are missing,
but this breadcrumb in the logs should help us diagnose GPU discovery
failures.
2025-10-17 17:13:16 -07:00
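The pattern the patch adds is the standard Win32 idiom: when LoadLibraryW returns NULL, fetch the error code with GetLastError and translate it into readable text with FormatMessageA before logging it. Below is a minimal standalone sketch of that idiom, not the patch itself: the DLL name, the helper name win32_error_message, and the stderr logging are placeholders, whereas the patch logs through GGML_LOG_INFO inside dl_load_library.

// Minimal sketch (assumed names, not part of the patch): translate the
// GetLastError() code from a failed LoadLibraryW call into readable text
// with FormatMessageA, the same idiom the patch adds to dl_load_library.
#include <windows.h>
#include <cstdio>
#include <string>

static std::string win32_error_message(DWORD error_code) {
    LPSTR buf = NULL;
    // Ask the system to allocate and fill a buffer with the message text.
    DWORD len = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                               NULL, error_code, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                               (LPSTR)&buf, 0, NULL);
    if (!len) {
        return "unknown error " + std::to_string(error_code);
    }
    std::string msg(buf, len);
    LocalFree(buf);
    return msg;
}

int main() {
    // "ggml-cuda.dll" is only an example; any DLL with a missing dependency
    // or a wrong path exercises the failure branch.
    HMODULE handle = LoadLibraryW(L"ggml-cuda.dll");
    if (!handle) {
        fprintf(stderr, "unable to load library: %s\n", win32_error_message(GetLastError()).c_str());
        return 1;
    }
    FreeLibrary(handle);
    return 0;
}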


From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Daniel Hiltgen <daniel@ollama.com>
Date: Fri, 17 Oct 2025 14:17:00 -0700
Subject: [PATCH] report LoadLibrary failures

---
 ggml/src/ggml-backend-reg.cpp | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/ggml/src/ggml-backend-reg.cpp b/ggml/src/ggml-backend-reg.cpp
index f794d9cfa..3a855ab2e 100644
--- a/ggml/src/ggml-backend-reg.cpp
+++ b/ggml/src/ggml-backend-reg.cpp
@@ -118,6 +118,18 @@ static dl_handle * dl_load_library(const fs::path & path) {
     SetErrorMode(old_mode | SEM_FAILCRITICALERRORS);
 
     HMODULE handle = LoadLibraryW(path.wstring().c_str());
+    if (!handle) {
+        DWORD error_code = GetLastError();
+        std::string msg;
+        LPSTR lpMsgBuf = NULL;
+        DWORD bufLen = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
+                                      NULL, error_code, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPSTR)&lpMsgBuf, 0, NULL);
+        if (bufLen) {
+            msg = lpMsgBuf;
+            LocalFree(lpMsgBuf);
+            GGML_LOG_INFO("%s unable to load library %s: %s\n", __func__, path_str(path).c_str(), msg.c_str());
+        }
+    }
 
     SetErrorMode(old_mode);