
Custom libtorchpath #1561

Open
haytham2597 wants to merge 3 commits into dotnet:main from haytham2597:custom_libtorchpath

Conversation


@haytham2597 haytham2597 commented Apr 26, 2026

This allows a developer to build against a custom libtorch path of any version. For example, if you download LibTorch 2.11.0 CUDA 13.0 from the official PyTorch site, unzip it anywhere, and then, in a CMD in TorchSharp's directory, run dotnet build /p:CustomLibTorchFullPath="{full path where you unzipped the libtorch download}\libtorch-win-shared-with-deps-debug-2.11.0+cu130\libtorch\share\cmake\Torch"
This works for debug, release, CPU, and CUDA.

EDIT 1: If you do not pass the CustomLibTorchFullPath parameter, the build works just like the original repo.

EDIT 2: Keep in mind that when you use CustomLibTorchFullPath, the build will not copy all the necessary .dlls to the build target path the way the original repo does, but it does let you compile LibTorchSharp.dll.

@alinpahontu2912
Member

Are you sure it will work when using libtorch with a different CUDA version? Usually with every new version, the CUDA dlls change slightly: some new ones might be added, others deleted, and their names and sizes might change. The same goes for the libtorch dlls, though I think those don't change as much.

@haytham2597
Author

> Are you sure it will work when using libtorch with a different CUDA version? Usually with every new version, the CUDA dlls change slightly: some new ones might be added, others deleted, and their names and sizes might change. The same goes for the libtorch dlls, though I think those don't change as much.

About CUDA: on Windows it is usually taken from an environment variable like CUDA_HOME (I don't remember whether the CUDA Toolkit sets it by default; I believe it does on Linux too), which is the full path of the CUDA SDK, for example "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.X". CUDA versions have backward compatibility, and in any case we can use a macro in C++ to write code specific to one version, or to lower or higher versions. In the CUDA Toolkit, the version is specified by the CUDA_VERSION macro in cuda.h.

LibTorch is usually written very carefully to account for both new and old versions of CUDA.

Now, about LibTorch in TorchSharp: some functions changed between versions, for example below 2.6. In my autocast branch I used the TORCH_VERSION_MAJOR and TORCH_VERSION_MINOR macros to account for the version, because from 2.6 upward the function is torch::linalg_cholesky and below it is torch::linalg::cholesky, like this:

```cpp
#if IS_260_OR_NEWER
    CATCH_TENSOR(torch::linalg_cholesky(*tensor))
#else
    CATCH_TENSOR(torch::linalg::cholesky(*tensor))
#endif
```

Also, in one of my TorchSharp branches I used automatic DefineConstants for CUDA versions, and I think I did the same for the Torch version.

I compiled it against both an older and a newer version of libtorch, and even with different versions of CUDA. For optimal performance, it's always advisable to use the DLLs provided by libtorch from their original location. I think I compiled against a newer version several times while using the older torch.dll, torch_cuda.dll, etc., and it worked anyway. You can try it.

