Hadoop is smart enough not to waste disk space.

The remaining 78MB would be used for some other file. Suppose you put (or copyFromLocal) two small files of 20MB each, and your block size is 128MB. HDFS computes the available space in the filesystem, not the available blocks — so if you had previously stored a 50MB file in a 128MB block, the leftover 78MB of that block still counts as free space — and reports it in terms of space. Since you have two files with a replication factor of 3, a total of 6 blocks would be allocated, even though each file is smaller than the block size. If the cluster does not have enough space for those blocks (6 × 128MB) at allocation time, the put would fail. Because the report is in terms of space rather than blocks, you never run out of blocks; the only time files are measured in blocks is at block allocation time.
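To make the arithmetic concrete, here is a minimal sketch (not part of HDFS itself) that contrasts the two quantities at play: block *allocations*, where even a 20MB file occupies one full block slot, versus actual *disk consumption*, where only the real bytes are replicated. The function name and defaults are illustrative assumptions:

```python
import math

def hdfs_usage(file_sizes_mb, block_size_mb=128, replication=3):
    """Estimate block allocations vs. real disk usage for a set of files.

    A file smaller than the block size still takes a whole block slot at
    allocation time, but on disk it only consumes its actual size, times
    the replication factor.
    """
    # Block allocations: one block per (file size / block size) chunk,
    # rounded up, then multiplied by the replication factor.
    blocks = sum(math.ceil(s / block_size_mb) for s in file_sizes_mb) * replication
    # Disk consumption: just the real bytes, replicated.
    disk_mb = sum(file_sizes_mb) * replication
    return blocks, disk_mb

# Two 20MB files with 128MB blocks and replication 3:
# 6 block allocations, but only 120MB of actual disk space consumed.
blocks, disk = hdfs_usage([20, 20])
```

This is why a cluster can allocate far more blocks than naive `blocks × block_size` arithmetic would suggest: the unused tail of each block remains free space for other files.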