Lossless is compressed, uncompressed is not compressed. Just google it next time you are not sure of something; don't spread it around, please.
Yup, of course: lossless = compressed and uncompressed = not compressed.
However, compression can be lossy (e.g. JPEG, meaning that you lose information in the process, and it cannot be recovered at a later stage) or -- you know -- lossless (e.g. PNG is a lossless compressed image format; or think of ZIP archives when you compress your documents: you are not losing information, of course, otherwise your Word document would not be recoverable when you extract the compressed archive).
Lossless means exactly this: it is not losing (or altering) information, it is just storing it in a smarter way that takes up less space.
So lossless = uncompressed in terms of recoverable information. I.e., if you see a difference, that's your brain playing tricks on you.
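If you want to convince yourself, here is a minimal sketch in Python using the standard-library zlib module (which implements DEFLATE, the same family of compression used in ZIP archives); the repetitive payload is just a made-up example, and the point is that a lossless round trip gives you back exactly the original bytes:

import zlib

# A made-up, highly repetitive "document": this kind of data compresses very well.
original = b"a very uniform tone of blue " * 1000

compressed = zlib.compress(original)     # lossless compression (DEFLATE, same family as ZIP)
restored = zlib.decompress(compressed)   # decompress it again

print(len(original), "->", len(compressed), "bytes on disk")
assert restored == original              # bit-for-bit identical: no information was lost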
Banding happens when you downscale; if I'm wrong please prove me wrong.
Negative.
Banding happens, for instance, when you have a small palette of different blues (say, 2^10 different levels of blue instead of 2^14) and need to paint a sky with a very uniform tone of blue, showing only a very small and slow gradient between very similar tones of blue.
Let's say that your blues can go from 0 (black) to 1 (full-brightness blue), and somewhere in your still / video frame the sky passes from 0.5 to 0.5006 over 11 pixels. With 14 bits, you are storing enough detail to go like this:
0.50000 0.50006 0.50012 0.50018 0.50024 0.50031 0.50037 0.50043 0.50049 0.50055 0.50061
To see this, just multiply the above by 2^14 = 16384, and you will find that they correspond to distinct integers <= 2^14, which can thus be stored using 14 bits:
8192 8193 8194 8195 8196 8197 8198 8199 8200 8201 8202
This also means that every pixel in that row of 11 pixels will display a slightly different blue, giving a very smooth and pleasant color gradient from 0.5 to 0.5006.
On the contrary, if you had only 10 bits, you could only go like this:
0.50000 0.50000 0.50000 0.50000 0.50000 0.50000 0.50000 0.50000 0.50098 0.50098 0.50098
Once again, multiply the above values by 2^10 = 1024 and you will realize that these become
512 512 512 512 512 512 512 512 513 513 513
So in the 10-bit case, you see that you are not representing the color gradient as nicely as in the 14-bit case: basically, you have only two levels of blue available (one of which is even brighter than the brightest level you are trying to represent). The rendered image will thus show a harsh change between the first 8 pixels, represented at a blue level of 0.50000, and the last 3 pixels, represented at 0.50098.
This is called banding.
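If you want to play with the numbers yourself, here is a small sketch in plain Python that quantizes the same 0.5 -> 0.5006 gradient over 11 pixels at 14 and at 10 bits. (The exact pixel where the 10-bit value jumps depends on the rounding convention, but the point is that only two codes are available.)

# The smooth gradient we would like to display: 0.5 -> 0.5006 over 11 pixels.
ideal = [0.5 + 0.0006 * i / 10 for i in range(11)]

for bits in (14, 10):
    levels = 2 ** bits
    codes = [round(v * levels) for v in ideal]      # nearest representable integer code
    decoded = [c / levels for c in codes]           # what actually gets displayed
    print(f"{bits}-bit codes :", codes)
    print(f"{bits}-bit values:", [f"{v:.5f}" for v in decoded])

# At 14 bits you get 11 distinct codes (8192..8202), i.e. a smooth ramp.
# At 10 bits only codes 512 and 513 are available, so the ramp collapses
# into two flat bands with one visible step between them: banding.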
HTH
Mr Google
