**Correct me as I go; I may be understanding things completely wrong.
1- About samplers "like LD, random sampler, MLT, etc.": what I understand is that they are used only to get a good (x, y) position inside a pixel to initialize a camera path. Is this correct? (sketch 1 below)
2- About filters "I read about them in PBRT", like box filter, Gaussian, etc.: what I understand (from image processing) is that they are a fast post-process to clean up the image (maybe for anti-aliasing). Is this correct? // I sense I still can't understand filters (sketch 2 below shows what I mean by a post-process)
3- How can we do anti-aliasing? "From commercial render engines I see that anti-aliasing increases render time almost linearly, so I guess it is similar to rendering a higher-resolution image and averaging the result down into a smaller one." (sketch 3 below)
4- How can we simulate motion blur? "I tried to figure out how to intersect a ray with a triangle at time T; how is this possible? Do we store a separate BVH for each time T?" And how do we treat the result with respect to the physical camera shutter speed? (sketch 4 below)
5- How can we simulate DoF? "From what I understand, we shoot multiple rays from the same pixel within a conic angle (instead of a single discrete direction as in a pinhole camera). Is this correct? How do we treat the result with respect to the physical camera F-stop?" (sketch 5 below)
6- In BDPT we also sample the lights; how do we do this? (I have only encountered MIS and MLT.) "So could LD be used here too, the same way we treat camera samples?" (sketch 6 below)
7- For BRDFs, is there any special treatment with any sampler? "What I understand is that a BRDF takes an incident ray (hit, direction, power) and, using probabilities / the BRDF function, produces an outgoing ray (direction and power)." (sketch 7 below)
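
To make the questions more concrete, here are rough C++ sketches of how I currently picture each point. Every name, interface, and number below is made up by me for illustration (nothing is taken from PBRT or any real engine), so please correct the sketches as well.

Sketch 1 — samplers: how I picture the sampler being used just to jitter the (x, y) position inside a pixel before starting a camera path.

```cpp
// Question 1 sketch: a sampler used only to jitter the film position.
// "RandomSampler" and "Sample2D" are my own made-up names.
#include <cstdio>
#include <random>

struct Sample2D { double x, y; };

struct RandomSampler {
    std::mt19937 rng{12345};
    std::uniform_real_distribution<double> u{0.0, 1.0};
    Sample2D next2D() { return { u(rng), u(rng) }; }
};

int main() {
    RandomSampler sampler;
    int px = 10, py = 20;                 // pixel being rendered
    for (int s = 0; s < 4; ++s) {
        Sample2D jitter = sampler.next2D();
        double filmX = px + jitter.x;     // (x, y) inside the pixel
        double filmY = py + jitter.y;
        std::printf("camera path %d starts at film (%f, %f)\n", s, filmX, filmY);
        // ...build the camera ray from (filmX, filmY) here.
    }
    return 0;
}
```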
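Sketch 2 — filters: this is what I currently mean by "a fast post-process to clean the image": a plain 3x3 box blur over the finished image. I suspect this is not what PBRT's reconstruction filters actually do, which is exactly why I'm asking.

```cpp
// Question 2 sketch: a box blur applied after rendering (my current, probably
// wrong, mental model of what a "box filter" does).
#include <cstdio>
#include <vector>

int main() {
    const int W = 4, H = 4;
    std::vector<double> img(W * H, 0.0);
    img[1 * W + 1] = 1.0;                         // one bright pixel
    std::vector<double> out(W * H, 0.0);

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            double sum = 0.0; int count = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    sum += img[ny * W + nx]; ++count;
                }
            out[y * W + x] = sum / count;         // box average = "cleaning"
        }
    std::printf("blurred center pixel: %f\n", out[1 * W + 1]);
    return 0;
}
```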
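Sketch 3 — anti-aliasing: my mental model of supersampling — trace N jittered camera paths per pixel and average them, which would explain the roughly linear cost. traceCameraPath() is a fake stand-in so the loop runs.

```cpp
// Question 3 sketch: anti-aliasing as "render N samples per pixel and average".
#include <cstdio>
#include <random>

// Hypothetical stand-in for tracing one camera path; returns fake radiance.
double traceCameraPath(double filmX, double filmY) {
    return (filmX + filmY) * 0.5;
}

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int px = 3, py = 5;
    const int samplesPerPixel = 16;

    double sum = 0.0;
    for (int s = 0; s < samplesPerPixel; ++s)
        sum += traceCameraPath(px + u(rng), py + u(rng));

    double pixelValue = sum / samplesPerPixel;    // plain average of the samples
    std::printf("pixel (%d, %d) = %f\n", px, py, pixelValue);
    return 0;
}
```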
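Sketch 4 — motion blur: my guess at the alternative to storing one BVH per time T — keep the triangle's vertices at shutter open and shutter close, and linearly interpolate them at the ray's time t before doing the usual intersection test. No idea if that is how real engines actually handle it.

```cpp
// Question 4 sketch: interpolate a moving triangle at the ray's time t.
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

Vec3 lerp(const Vec3 &a, const Vec3 &b, double t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

struct MovingTriangle {
    Vec3 p0[2], p1[2], p2[2];   // vertex positions at shutter open (0) and close (1)
};

int main() {
    MovingTriangle tri = {
        { {0, 0, 0}, {1, 0, 0} },
        { {1, 0, 0}, {2, 0, 0} },
        { {0, 1, 0}, {1, 1, 0} }
    };
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double rayTime = u(rng);    // uniform over the shutter interval [0, 1)
    Vec3 a = lerp(tri.p0[0], tri.p0[1], rayTime);
    Vec3 b = lerp(tri.p1[0], tri.p1[1], rayTime);
    Vec3 c = lerp(tri.p2[0], tri.p2[1], rayTime);
    std::printf("at t=%f first vertex is (%f, %f, %f)\n", rayTime, a.x, a.y, a.z);
    // ...then run the usual ray/triangle test against (a, b, c).
    return 0;
}
```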
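Sketch 5 — DoF: how I picture the thin-lens idea — sample a point on the lens aperture and aim the ray at the point on the focal plane that the pinhole ray would have hit. My guess at the F-stop connection is aperture diameter = focal length / F-number; please correct that if it's wrong.

```cpp
// Question 5 sketch: a thin-lens camera ray, with the aperture radius derived
// from made-up lens numbers.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(-1.0, 1.0);

    double focalLength = 0.050;                          // 50 mm lens (made up)
    double fNumber     = 2.8;
    double lensRadius  = focalLength / (2.0 * fNumber);  // radius = diameter / 2
    double focusDist   = 3.0;                            // distance to in-focus plane

    // Pinhole direction through the film sample (camera looks down +z).
    Vec3 d = { 0.1, 0.05, 1.0 };

    // Point on the focal plane that the pinhole ray would hit.
    double ft = focusDist / d.z;
    Vec3 pFocus = { d.x * ft, d.y * ft, d.z * ft };

    // Sample a point on the lens (rejection-sample the unit disk).
    double lx, ly;
    do { lx = u(rng); ly = u(rng); } while (lx * lx + ly * ly > 1.0);
    Vec3 origin = { lx * lensRadius, ly * lensRadius, 0.0 };

    Vec3 dir = { pFocus.x - origin.x, pFocus.y - origin.y, pFocus.z - origin.z };
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir = { dir.x / len, dir.y / len, dir.z / len };

    std::printf("DoF ray: origin (%f, %f, 0), dir (%f, %f, %f)\n",
                origin.x, origin.y, dir.x, dir.y, dir.z);
    return 0;
}
```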
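Sketch 6 — light sampling in BDPT: how I imagine a light subpath starting — draw 2D samples from the sampler (plain random here; my question is whether LD samples could be fed in the same way as for camera paths) to pick a point on a made-up rectangular area light and an emission direction.

```cpp
// Question 6 sketch: start a light subpath from a point and direction sampled
// on an area light. The light geometry and numbers are invented.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
const double kPi = 3.14159265358979323846;

int main() {
    std::mt19937 rng(3);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // A rectangular area light in the plane y = 2, facing down (-y).
    Vec3 corner = { -0.5, 2.0, -0.5 };
    double width = 1.0, depth = 1.0;

    // Sample a point on the light surface.
    double u1 = u(rng), u2 = u(rng);
    Vec3 p = { corner.x + u1 * width, corner.y, corner.z + u2 * depth };

    // Sample an emission direction, uniform over the downward hemisphere.
    double u3 = u(rng), u4 = u(rng);
    double cosTheta = u3;
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi = 2.0 * kPi * u4;
    Vec3 dir = { sinTheta * std::cos(phi), -cosTheta, sinTheta * std::sin(phi) };

    std::printf("light subpath starts at (%f, %f, %f) toward (%f, %f, %f)\n",
                p.x, p.y, p.z, dir.x, dir.y, dir.z);
    // ...trace this ray and connect its vertices to the camera subpath vertices.
    return 0;
}
```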
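Sketch 7 — BRDF sampling: my toy picture of a diffuse BRDF — given the hit, pick an outgoing direction with cosine-weighted probability and return its pdf and the BRDF value, so the path throughput gets scaled by f * cos(theta) / pdf. The interface is mine, not PBRT's.

```cpp
// Question 7 sketch: cosine-weighted sampling of a Lambertian BRDF in a local
// frame where the surface normal is +z.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
const double kPi = 3.14159265358979323846;

struct BSDFSample { Vec3 wo; double pdf; double f; };

BSDFSample sampleLambertian(double albedo, double u1, double u2) {
    double r = std::sqrt(u1);
    double phi = 2.0 * kPi * u2;
    Vec3 wo = { r * std::cos(phi), r * std::sin(phi),
                std::sqrt(std::fmax(0.0, 1.0 - u1)) };    // cosine-weighted direction
    double cosTheta = wo.z;
    return { wo, cosTheta / kPi, albedo / kPi };           // pdf and BRDF value
}

int main() {
    std::mt19937 rng(9);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    BSDFSample s = sampleLambertian(0.8, u(rng), u(rng));
    double throughputScale = s.f * s.wo.z / s.pdf;          // = albedo for this BRDF
    std::printf("wo = (%f, %f, %f), pdf = %f, weight = %f\n",
                s.wo.x, s.wo.y, s.wo.z, s.pdf, throughputScale);
    return 0;
}
```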
Feel free to set me straight on how things actually work (I hope you won't get angry, as the questions probably look very basic).
