Key | Value |
---|---|
FileName | ./usr/lib/arm-linux-gnueabi/cmake/dlpack/dlpackConfig.cmake |
FileSize | 1354 |
MD5 | F4646C0BC807B59C36CF3B22052DE5CE |
SHA-1 | A0BA147D75A20EE6CF0E8E27F581053EE760685E |
SHA-256 | D5AA080B11EF94216BCA09B189CEED93D61C949EEE3C6D4492D3EB27E467D792 |
SSDEEP | 24:73aKk6GFLDEZv6gkeAjugJXR2FZ1j5X0wQbG0qQqz2K5aG2cRtzW2v:7XGpwV6VeAiOh2FZXMbGrzbk2t3 |
TLSH | T1332132B427991C608397C55072A6B41F08C6497FBEB34840FA8FD28923DD1B05A833FA |
hashlookup:parent-total | 2 |
hashlookup:trust | 60 |
The searched file hash is included in 2 parent files, which belong to packages known to and seen by metalookup. The samples are included below:
Key | Value |
---|---|
FileSize | 6348 |
MD5 | 21F778DA5B65693A51EAA364153CBCA4 |
PackageDescription | Open In-Memory Tensor Structure DLPack is an open in-memory tensor structure for sharing tensors among frameworks. DLPack enables: * Easier sharing of operators between deep learning frameworks. * Easier wrapping of vendor-level operator implementations, allowing collaboration when introducing new devices/ops. * Quick swapping of backend implementations, such as different versions of BLAS. * For end users, more operators and the possibility of mixing usage between frameworks. DLPack does not intend to implement Tensors and Ops itself, but instead serves as a common bridge for reusing tensors and ops across frameworks. |
PackageMaintainer | Debian Deep Learning Team <debian-science-maintainers@lists.alioth.debian.org> |
PackageName | libdlpack-dev |
PackageSection | science |
PackageVersion | 0.0~git20200217.3ec0443-2 |
SHA-1 | E83DDD4BEB89D07C3986DC679B6B48C69D30EDB3 |
SHA-256 | 00B2C3982F382AA43CDFB8BB2873DEA4CE4B75F1D47EEE7AFCA2360ADFDC7B65 |
Key | Value |
---|---|
FileSize | 7064 |
MD5 | ADC59C55D1F2C04350B26B911C2C2E22 |
PackageDescription | Open In-Memory Tensor Structure DLPack is an open in-memory tensor structure for sharing tensors among frameworks. DLPack enables: * Easier sharing of operators between deep learning frameworks. * Easier wrapping of vendor-level operator implementations, allowing collaboration when introducing new devices/ops. * Quick swapping of backend implementations, such as different versions of BLAS. * For end users, more operators and the possibility of mixing usage between frameworks. DLPack does not intend to implement Tensors and Ops itself, but instead serves as a common bridge for reusing tensors and ops across frameworks. |
PackageMaintainer | Debian Deep Learning Team <debian-science-maintainers@lists.alioth.debian.org> |
PackageName | libdlpack-dev |
PackageSection | science |
PackageVersion | 0.6-1+b1 |
SHA-1 | B09DB0B5F82F6CA55EE712FE02ABF1C836C70E14 |
SHA-256 | 3EC59929CC56768D9B87CFB372489D85EEEAE88FACED50C037E2DFF8813FFE4E |
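The cryptographic digests recorded above (MD5, SHA-1, SHA-256) can be checked against a local copy of the file. Below is a minimal sketch using Python's standard `hashlib`; the path comes from the FileName field and assumes the Debian `libdlpack-dev` package is installed on an armel system, so adjust it as needed.

```python
import hashlib
import os

# Digests from the record above for dlpackConfig.cmake.
RECORD = {
    "md5": "F4646C0BC807B59C36CF3B22052DE5CE",
    "sha1": "A0BA147D75A20EE6CF0E8E27F581053EE760685E",
    "sha256": "D5AA080B11EF94216BCA09B189CEED93D61C949EEE3C6D4492D3EB27E467D792",
}

def file_digests(path, chunk_size=65536):
    """Return MD5/SHA-1/SHA-256 hex digests of `path`, reading in chunks."""
    hashers = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest().upper() for name, h in hashers.items()}

# Assumed install location; only check when the file is actually present.
path = "/usr/lib/arm-linux-gnueabi/cmake/dlpack/dlpackConfig.cmake"
if os.path.exists(path):
    print(file_digests(path) == RECORD)
```

A mismatch against the record simply means the local file is a different revision of `dlpackConfig.cmake` (for example, from the 0.6-1+b1 package rather than 0.0~git20200217.3ec0443-2); the fuzzy hashes (SSDEEP, TLSH) in the record exist precisely to relate such near-identical variants.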