Maxwell/Pascal GPU support and crash fix (nomic-ai#1895)
Signed-off-by: Jared Van Bortel <[email protected]>
cebtenzzre authored Jan 31, 2024
1 parent b11c3f6 commit 0a40e71
Showing 3 changed files with 5 additions and 2 deletions.
2 changes: 1 addition & 1 deletion gpt4all-backend/llama.cpp-mainline
1 change: 1 addition & 0 deletions gpt4all-chat/chatlistmodel.h
@@ -196,6 +196,7 @@ class ChatListModel : public QAbstractListModel
         m_newChat = nullptr;
         m_serverChat = nullptr;
         m_currentChat = nullptr;
+        for (auto * chat: m_chats) { delete chat; }
         m_chats.clear();
     }

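For context on the chatlistmodel.h change: m_chats holds owning raw pointers, so each Chat has to be deleted before the list is cleared, otherwise the objects (and whatever their destructors shut down) outlive clearChats(). A minimal standalone sketch of the same pattern, using made-up Worker/Owner types rather than anything from the gpt4all codebase:

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for a Chat: owns a worker thread that must be
    // joined before the object goes away.
    struct Worker {
        std::atomic<bool> stop{false};
        std::thread th;
        Worker() : th([this] { while (!stop) std::this_thread::yield(); }) {}
        ~Worker() { stop = true; th.join(); }   // destructor joins the thread
    };

    struct Owner {
        std::vector<Worker*> items;             // owning raw pointers
        void clearAll() {
            for (auto *w : items) delete w;     // run destructors (join threads)
            items.clear();                      // then drop the dangling pointers
        }
        ~Owner() { clearAll(); }
    };

    int main() {
        Owner o;
        o.items.push_back(new Worker);
        o.clearAll();                           // analogous role to clearChats()
        std::puts("workers joined and deleted");
    }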
4 changes: 3 additions & 1 deletion gpt4all-chat/main.cpp
@@ -63,9 +63,11 @@ int main(int argc, char *argv[])
     }
 #endif
 
+    int res = app.exec();
+
     // Make sure ChatLLM threads are joined before global destructors run.
     // Otherwise, we can get a heap-use-after-free inside of llama.cpp.
     ChatListModel::globalInstance()->clearChats();
 
-    return app.exec();
+    return res;
 }
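For context on the main.cpp change: capturing app.exec()'s result and tearing the chats down before returning means the worker threads are joined while everything they reference is still alive; if main() returned with threads still running, static and global destructors could free memory those threads are using. A standalone sketch of that ordering hazard and the fix, with hypothetical Model/runEventLoop stand-ins (not gpt4all or Qt code):

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hypothetical singleton holding background workers, standing in for
    // ChatListModel::globalInstance().
    struct Model {
        std::vector<std::thread> workers;
        static Model *instance() { static Model m; return &m; }
        void clear() {                       // analogous role to clearChats()
            for (auto &t : workers) t.join();
            workers.clear();
        }
    };

    int runEventLoop() {                     // stand-in for app.exec()
        Model::instance()->workers.emplace_back([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        });
        return 0;
    }

    int main() {
        int res = runEventLoop();
        // Join worker threads here, while the state they touch is still alive;
        // returning with them running risks use-after-free once static and
        // global destructors start tearing that state down.
        Model::instance()->clear();
        std::puts("workers joined before main() returns");
        return res;
    }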
