by amrith on 3/18/22, 3:45 PM with 0 comments
But, as I was coding up crc32c for something, I noticed that the CRC32 instruction has forms that return a 64-bit value for a 64-bit source operand, and a 64-bit value for an 8-bit source operand (in addition to the 32-bit results for all the other operand sizes).
https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-instruction-set-reference-manual-325383.pdf
See Volume 2A, pages 3-225 and 3-226.
This means there is a strange unsigned __int64 _mm_crc32_u64() in the Intel intrinsics (see https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#ig_expand=1494,1494&techs=SSE4_2).
It isn't a big deal: the instruction returns a 64-bit value with the upper 32 bits set to zero, and the low 32 bits are the correct CRC. But I'm sure there was some reason why Intel did this. Has anyone found a good explanation for it somewhere? I'm asking only out of curiosity, not asserting that it is wrong or anything.
I would have thought unsigned __int32 _mm_crc32_u64() would be more reasonable, at least for the intrinsic, no?