Difference in device memory allocation pattern between Tesla V100 and K80 in TensorFlow

Hello all,
I am new here and posting on these forums for the first time. I was using Tesla K80 GPUs earlier and have now moved to V100 GPUs. I have been looking at how device memory gets allocated on the GPUs, and I can see that the allocation pattern differs between the two: on the V100 the allocations land at seemingly random addresses, while on the K80 they are laid out uniformly. Can anybody explain the differences in memory allocation between the V100 and K80 GPUs? Thanks in advance.
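To make the question concrete, here is a minimal standalone CUDA sketch of what I mean by "allocation pattern" (this is not my actual TensorFlow workload; the 64 MiB buffer size and the allocation count are arbitrary values chosen just for illustration). It allocates a few equal-sized buffers with cudaMalloc and prints the device addresses that come back, which is the sequence I am comparing between the two cards.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Allocate several equal-sized buffers and print their device addresses,
    // to compare the allocation pattern on different GPUs (e.g. K80 vs V100).
    const size_t kBytes = 64 * 1024 * 1024;  // 64 MiB per buffer (arbitrary)
    const int kNumAllocs = 8;                // number of buffers (arbitrary)
    void* ptrs[kNumAllocs];

    for (int i = 0; i < kNumAllocs; ++i) {
        cudaError_t err = cudaMalloc(&ptrs[i], kBytes);
        if (err != cudaSuccess) {
            printf("cudaMalloc %d failed: %s\n", i, cudaGetErrorString(err));
            return 1;
        }
        // The printed addresses are the "allocation pattern" in question.
        printf("allocation %d -> device address %p\n", i, ptrs[i]);
    }

    // Release the buffers before exiting.
    for (int i = 0; i < kNumAllocs; ++i) {
        cudaFree(ptrs[i]);
    }
    return 0;
}
```

Building this with nvcc and running it on each card prints the sequence of returned addresses; my observation in TensorFlow is that this kind of sequence looks uniform on the K80 but scattered on the V100.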