• GAMES101 - Overview of Computer Graphics

        Lecturer: Lingqi Yan

        Lecture Videos | Course Site | HW Site


         

        Linear Algebra (Lec. 2)

        Cross Product

        Matrices

        Matrix product $AB$ is defined when the dimensions agree: $A$ is $M \times N$ and $B$ is $N \times P$

         

        Transformation (Lec. 3-4)

        Transformation

        1. Scale Matrix (ratios $s_x$, $s_y$):

           $x' = s_x x, \; y' = s_y y \quad \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$
        2. Reflection Matrix (about the y-axis):

           $\begin{cases} x' = -x \\ y' = y \end{cases} \quad \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$
        3. Shear Matrix:

           $\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$
        4. Rotation (2D) around (0, 0), counter-clockwise:

           $R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
        5. Translation (not a linear transformation):

           $\begin{cases} x' = x + t_x \\ y' = y + t_y \end{cases} \quad \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$
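The five transforms above are easy to check numerically; a minimal numpy sketch (function names are my own, not from the course):

```python
import numpy as np

def rotation_2d(theta):
    """Counter-clockwise 2D rotation matrix R_theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def scale_2d(sx, sy):
    """Scale matrix diag(sx, sy)."""
    return np.array([[sx, 0.0],
                     [0.0, sy]])

def shear_2d(a):
    """Horizontal shear: x' = x + a*y, y' = y."""
    return np.array([[1.0, a],
                     [0.0, 1.0]])

# Rotating (1, 0) by 90 degrees gives (0, 1).
p = rotation_2d(np.pi / 2) @ np.array([1.0, 0.0])
```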

        Homogeneous Coordinates

        (Add a third coordinate - w-coordinate)

        Affine Transformation

        (affine)

        Affine map = linear map + translation

        $\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & t_x \\ c & d & t_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$ — the dimension rises (extra cost)

        Inverse Transformation

        $M \to M^{-1}$

        Composite Transform

        Transformation Ordering Matters

        Notice that the matrices are applied right to left (matrix multiplication is not commutative, but it is associative)

        e.g. $T_{(1,0)} \cdot R_{45} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$ applies $R_{45}$ first, then $T_{(1,0)}$; in general $A_n(\ldots(A_2(A_1(x)))) = A_n \cdots A_2 \cdot A_1 \cdot \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$

        Pre-multiply n matrices to obtain a single matrix representing combined transform (for performance)

        Example: (Decomposing)

        Rotate around point C:

        Representation: $T(c)\, R\, T(-c)$ (translate by $-c$ to move C to the origin, rotate, then translate back by $c$)
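The decomposition $T(c)\,R\,T(-c)$ can be sketched with 3x3 homogeneous matrices (a minimal numpy illustration, not the course's code):

```python
import numpy as np

def translate(tx, ty):
    """3x3 homogeneous 2D translation."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotate(theta):
    """3x3 homogeneous 2D rotation about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotate_about(theta, cx, cy):
    """T(c) R T(-c): move C to the origin, rotate, move back."""
    return translate(cx, cy) @ rotate(theta) @ translate(-cx, -cy)

# Rotating (2, 1) by 180 degrees about C = (1, 1) gives (0, 1).
p = rotate_about(np.pi, 1.0, 1.0) @ np.array([2.0, 1.0, 1.0])
```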

        3D Transformation

        Homogeneous Coordinates

        $\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & c & t_x \\ d & e & f & t_y \\ g & h & i & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$

        Order: linear transformation first, then translation

        Rotate around Axis

        x-axis ($\hat y \times \hat z = \hat x$): $R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$  z-axis ($\hat x \times \hat y = \hat z$): $R_z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 & 0 \\ \sin\alpha & \cos\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

        y-axis ($\hat z \times \hat x = \hat y$): $R_y(\alpha) = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ (the $\sin\alpha$ signs are flipped because the y-axis direction of the cycle is reversed)

        Compose any 3D Rotations from Rx, Ry, Rz

        Rxyz(α,β,γ)=Rx(α)Ry(β)Rz(γ)

        Viewing Transformations

        View / Camera Transformation

        MVP: Model-View Projection

        (Model transformation: placing objects; View transformation: placing camera; Projection transformation)

        Define camera first:


        Key Observation: If the camera and all objects move together, the “photo” will be the same => Transform to: the origin with up @ Y, look @ -Z

        Transformation matrix $M_{view}$ in math: $M_{view} = R_{view} T_{view}$ (transform to a standard coordinate frame)

        $T_{view} = \begin{bmatrix} 1 & 0 & 0 & -x_e \\ 0 & 1 & 0 & -y_e \\ 0 & 0 & 1 & -z_e \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_{view} = \begin{bmatrix} x_{\hat g \times \hat t} & y_{\hat g \times \hat t} & z_{\hat g \times \hat t} & 0 \\ x_{\hat t} & y_{\hat t} & z_{\hat t} & 0 \\ x_{-\hat g} & y_{-\hat g} & z_{-\hat g} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$

        (Also known as model-view transformation)
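The construction $M_{view} = R_{view} T_{view}$ can be sketched in numpy (a hypothetical `look_at` helper, assuming the gaze and up vectors are already normalized and orthogonal):

```python
import numpy as np

def look_at(eye, gaze, up):
    """M_view = R_view @ T_view: camera to origin, looking along -Z, up along +Y.
    gaze and up are assumed unit-length and orthogonal."""
    g = np.asarray(gaze, float)
    t = np.asarray(up, float)
    R = np.eye(4)
    R[0, :3] = np.cross(g, t)   # row 1: g x t (camera x-axis)
    R[1, :3] = t                # row 2: up   (camera y-axis)
    R[2, :3] = -g               # row 3: -g   (camera z-axis)
    T = np.eye(4)
    T[:3, 3] = -np.asarray(eye, float)
    return R @ T

# A camera at (0, 0, 5) looking down -Z maps the origin to (0, 0, -5).
M = look_at([0, 0, 5], [0, 0, -1], [0, 1, 0])
p = M @ np.array([0.0, 0.0, 0.0, 1.0])
```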

        Projection Transformation

        Orthographic Projection


        Camera at the origin, up along Y, looking along −Z. Translate and scale the resulting rectangle to $[-1, 1]^2$

        (Looking along −Z makes near and far unintuitive ($n > f$); OpenGL uses left-handed coordinates here instead)

        In general, map a cuboid $[l, r] \times [b, t] \times [f, n]$ to the canonical cube $[-1, 1]^3$

        Transformation matrix: $M_{ortho} = \begin{bmatrix} \frac{2}{r-l} & 0 & 0 & 0 \\ 0 & \frac{2}{t-b} & 0 & 0 \\ 0 & 0 & \frac{2}{n-f} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -\frac{r+l}{2} \\ 0 & 1 & 0 & -\frac{t+b}{2} \\ 0 & 0 & 1 & -\frac{n+f}{2} \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (translate the center to the origin, then scale to side length 2)

        Perspective Projection (Most common)

        (The projection lines are not parallel)

        (Similar triangles: $y' = \frac{n}{z} y$)

        Process: squish the frustum into a cuboid (near plane $n \to n$, far plane $f \to f$), then apply the orthographic projection $M_{ortho}$

        (The near plane is unchanged; the center of the far plane is unchanged)

        In homogeneous coordinates:

        $\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} \Rightarrow \begin{pmatrix} nx/z \\ ny/z \\ \text{unknown} \\ 1 \end{pmatrix} \xrightarrow{\;\times z\;} \begin{pmatrix} nx \\ ny \\ \text{still unknown} \\ z \end{pmatrix}$ ; using that the near plane keeps $z = n$ and the far plane's center keeps $z = f$: $M_{persp \to ortho} = \begin{pmatrix} n & 0 & 0 & 0 \\ 0 & n & 0 & 0 \\ 0 & 0 & n+f & -nf \\ 0 & 0 & 1 & 0 \end{pmatrix}$

        $M_{persp} = M_{ortho}\, M_{persp \to ortho}$

        Vertical Field of View (fovY) (Assuming symmetry: l = -r; b = -t)

        $\tan\frac{fovY}{2} = \frac{t}{|n|}$ ; $\text{aspect} = \frac{r}{t}$
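The full chain from fovY/aspect to $M_{persp} = M_{ortho} M_{persp\to ortho}$ can be sketched as below (my own sketch with $n, f$ negative, looking along −Z; not the course's reference code):

```python
import numpy as np

def perspective(fovy, aspect, n, f):
    """M_persp = M_ortho @ M_persp->ortho, built from
    tan(fovY/2) = t/|n| and aspect = r/t; n and f are negative."""
    t = np.tan(fovy / 2) * abs(n)   # top of the near plane
    r = aspect * t
    l, b = -r, -t
    persp_to_ortho = np.array([[n, 0, 0, 0],
                               [0, n, 0, 0],
                               [0, 0, n + f, -n * f],
                               [0, 0, 1, 0]], float)
    scale = np.diag([2 / (r - l), 2 / (t - b), 2 / (n - f), 1.0])
    trans = np.eye(4)
    trans[:3, 3] = [-(r + l) / 2, -(t + b) / 2, -(n + f) / 2]
    return scale @ trans @ persp_to_ortho

M = perspective(np.pi / 2, 1.0, -1.0, -50.0)
# Center of the near plane lands on z = +1 after the homogeneous divide.
p = M @ np.array([0.0, 0.0, -1.0, 1.0])
p /= p[3]
```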

         

        Rasterization (Lec. 5-7)

        Rasterize

        Rasterize = Draw onto the screen

        Define Screen Space: pixels from (0, 0) to (Width − 1, Height − 1) (uniform small squares); pixel (x, y) is centered at (x + 0.5, y + 0.5) (irrelevant to z)

        $M_{viewport} = \begin{pmatrix} \frac{width}{2} & 0 & 0 & \frac{width}{2} \\ 0 & \frac{height}{2} & 0 & \frac{height}{2} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

        Sampling (Approximate a Triangle)

        Rasterization as 2D sampling

        For triangles, check for each pixel whether its center is inside or outside the triangle.

        Evaluating inside(tri, x, y)

        3 cross products along edges AB, BC, CA (all three results with the same sign ⇒ inside)

        (Want: All required points (pixels) inside the triangle)

        Edge cases (a sample covered by both triangle 1 and triangle 2): either do not process, or handle by a fixed convention

        Instead of checking all pixels on the screen, incremental triangle traversal or a bounding box (AABB) can sometimes be faster

        Incremental traversal is suitable for thin, rotated triangles
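The three-cross-product inside test can be sketched as follows (a minimal sketch; the sign convention treats all-same-sign as inside):

```python
def cross2d(ox, oy, ax, ay, bx, by):
    """z-component of (a - o) x (b - o)."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def inside_triangle(px, py, a, b, c):
    """Inside iff the cross products along AB, BC, CA all share one sign."""
    d1 = cross2d(a[0], a[1], b[0], b[1], px, py)
    d2 = cross2d(b[0], b[1], c[0], c[1], px, py)
    d3 = cross2d(c[0], c[1], a[0], a[1], px, py)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Pixel centers are sampled at (x + 0.5, y + 0.5).
tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
```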

        Artifacts

        (Error / Mistakes / Inaccuracies)

        Jaggies (sampling rate too low) / Moiré patterns / wagon-wheel effect (the signal changes too fast for the sampling rate)

        Idea: blur (pre-filter) first, then sample

        Frequency Domain

        In freq. domain: f=1T

        Fourier Transform:

        $f(x) = \frac{A}{2} + \frac{2A\cos(t\omega)}{\pi} - \frac{2A\cos(3t\omega)}{3\pi} + \frac{2A\cos(5t\omega)}{5\pi} - \frac{2A\cos(7t\omega)}{7\pi} + \cdots$

        (Higher frequencies need denser sampling; otherwise the samples are erroneously reconstructed as a lower-frequency signal — aliasing)

        Filtering: Getting rid of certain freq. contents (high / low / band / ...)

        Filter out high freq. (Blur) - Low pass filter (e.g. Box function)

        Convolution theorem: filtering (convolving) in the spatial domain equals multiplication in the frequency domain, and vice versa

        Fourier transform: $F(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i \omega x}\, dx$ (with the corresponding inverse Fourier transform)

        Fourier transform of the signal × Fourier transform of the filter = Fourier transform of the filtered result

        Sampling = Repeating Frequency Contents

        Solution: Convolve f(x.y) by a 1-pixel box-blur & Sample pixel's center

        Antialiasing Techniques

        MSAA (Multisample Antialiasing)

        By supersampling (same effect as blurring, then sampling): split 1 pixel into 2x2 / 4x4 samples and average

        Cost: increased computation (4x4 = 16 times) — mitigated in practice by clever sample distribution and reuse

        FXAA (Fast Approximate AA)

        Post-process antialiasing: detect edges in the rendered image and smooth them

        TAA (Temporal AA)

        Uses the previous frame(s) for AA

        Super Resolution (DLSS - Deep Learning Supersampling)

        Visibility / Occlusion

        Z (Depth) Buffering

        Depth buffer

        Painter's Algorithm (paint from far to near — cannot handle mutually overlapping triangles)

        Z-buffer: frame buffer for color values; z-buffer for depth (smaller z — closer (darker); larger z — farther (lighter))

        Algorithm during Rasterization

        Complexity: O(n)

        For n triangles, each covered pixel needs only a single depth comparison — no sorting between triangles

        Drawing order doesn't matter
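The z-buffer loop can be sketched like this (a minimal sketch; the per-triangle fragment format `(x, y, z, color)` is my own, with smaller z = closer as in these notes):

```python
import math

def render_depth(triangle_fragments, width, height):
    """Z-buffer over lists of fragments, one list per triangle.
    Triangle order does not affect the result."""
    frame = [[None] * width for _ in range(height)]
    zbuf = [[math.inf] * width for _ in range(height)]
    for frags in triangle_fragments:
        for x, y, z, color in frags:
            if z < zbuf[y][x]:      # strictly closer than what is stored
                zbuf[y][x] = z
                frame[y][x] = color
    return frame

# Two fragments land on the same pixel; the closer one (z = 1) wins.
frame = render_depth([[(0, 0, 2.0, "far")], [(0, 0, 1.0, "near")]], 2, 2)
```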

        Supplement: Shadow Mapping

        After Lec. 12

        An image-space algorithm, widely used in early animations and in essentially every 3D video game

        Key idea: a point not in shadow must be visible to both the light and the camera

        Hard shadow: Use 0 & 1 only to represent in shadow or not

        (between 0 and 1 -> soft shadow: by multiple light sources / source size)

        Approaches

        Problems with Shadow Maps

         

        Shading (Lec. 7-10)

        Apply a material to an object

        Shading Model (Blinn-Phong Reflectance Model)

        Light reflected towards camera.


        Viewer direction, v; Surface normal, n; Light direction, l (for each of many lights); Surface parameters (color, shininess, …)

        Shading is local - No shadows will be generated

        Diffuse Reflection

        Light is scattered uniformly in all directions (Surface color is the same for all viewing direction)

        Light Falloff (inverse-square law: intensity falls off as $I/r^2$)


        Lambertian (Diffuse) Shading

        Shading independent of view direction

        $L_d = k_d\, (I/r^2)\, \max(0, \mathbf{n} \cdot \mathbf{l})$

        ($L_d$ — diffusely reflected light; $k_d$ — diffuse coefficient (color), $k_d = 0$: black (all absorbed), $1$: white; $I/r^2$ — energy arriving at the shading point; $\max(0, \mathbf{n} \cdot \mathbf{l})$ — energy received by the shading point; independent of $\hat{v}$)

        Specular

        Intensity depends on view direction (Bright near mirror reflection direction)

        $\hat{v}$ close to the mirror direction ⇔ half vector close to the normal


        $\mathbf{h} = \text{bisector}(\mathbf{v}, \mathbf{l}) = \frac{\mathbf{v} + \mathbf{l}}{\|\mathbf{v} + \mathbf{l}\|}$ — half vector


        $L_s = k_s\, (I/r^2)\, \max(0, \cos\alpha)^p = k_s\, (I/r^2)\, \max(0, \mathbf{n} \cdot \mathbf{h})^p$

        ($L_s$ — specularly reflected light; $k_s$ — specular coefficient; the exponent $p$ narrows the reflection lobe)

        Larger $p$ — smaller highlight area; larger $k_s$ — brighter highlight

        (The original Phong reflection model uses the mirror direction $R$ and the view direction instead of the half vector)

        Ambient

        Not depend on anything

        Fake approximation

        $L_a = k_a I_a$ ($L_a$ — reflected ambient light; $k_a$ — ambient coefficient)


        Summary

        $L = L_a + L_d + L_s = k_a I_a + k_d (I/r^2)\max(0, \mathbf{n}\cdot\mathbf{l}) + k_s (I/r^2)\max(0, \mathbf{n}\cdot\mathbf{h})^p$
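The summary formula evaluates directly; a minimal scalar sketch (parameter values are illustrative, not from the course):

```python
import numpy as np

def blinn_phong(ka, kd, ks, p, Ia, I, r, n, l, v):
    """L = ka*Ia + kd*(I/r^2)*max(0, n.l) + ks*(I/r^2)*max(0, n.h)^p
    (direction vectors assumed normalized)."""
    h = (v + l) / np.linalg.norm(v + l)          # half vector
    falloff = I / (r * r)
    ambient = ka * Ia
    diffuse = kd * falloff * max(0.0, float(np.dot(n, l)))
    specular = ks * falloff * max(0.0, float(np.dot(n, h))) ** p
    return ambient + diffuse + specular

# Light and viewer straight above the surface: h == n, so both cosines are 1.
n = np.array([0.0, 0.0, 1.0])
L = blinn_phong(ka=0.1, kd=0.5, ks=0.3, p=32, Ia=1.0, I=4.0, r=2.0, n=n, l=n, v=n)
```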


        Shading Frequencies

        (Face / Vertex / Pixel)

        Shading Methods


        Per-Vertex Normal Vectors

        Vertex normal — average of surrounding face normals: $N_v = \frac{\sum_i N_i}{\|\sum_i N_i\|}$

        ~ Barycentric interpolation of vertex normals (renormalize after interpolating)

        Graphics (Real-time Rendering) Pipeline

        ~ GPU


        Texture Mapping

        Texture mapping — UV ($(u, v)$ coordinates, $u, v \in [0, 1]$)

        Barycentric Coordinates

        Interpolation Across Triangles

        Specify values at vertices; obtain smoothly varying values across the triangle

        Barycentric Coordinates

        A coordinate system for triangles, $(\alpha, \beta, \gamma)$ (every point can be represented in this form). A point is inside the triangle iff all three coordinates are non-negative

        The barycentric coordinates of $A$ are $(\alpha, \beta, \gamma) = (1, 0, 0)$ ; $(x, y) = \alpha A + \beta B + \gamma C = A$

        $(x, y) = \alpha A + \beta B + \gamma C$ with $\alpha + \beta + \gamma = 1$

        Geometric viewpoint — proportional areas: $\alpha = \frac{A_A}{A_A + A_B + A_C}$ ; $\beta = \frac{A_B}{A_A + A_B + A_C}$ ; $\gamma = \frac{A_C}{A_A + A_B + A_C}$

        Centroid: $(\alpha, \beta, \gamma) = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$ ; $(x, y) = \frac{1}{3}A + \frac{1}{3}B + \frac{1}{3}C$ ($A_A = A_B = A_C$)

        Express any point:

        $\alpha = \frac{-(x - x_B)(y_C - y_B) + (y - y_B)(x_C - x_B)}{-(x_A - x_B)(y_C - y_B) + (y_A - y_B)(x_C - x_B)}$ ; $\beta = \frac{-(x - x_C)(y_A - y_C) + (y - y_C)(x_A - x_C)}{-(x_B - x_C)(y_A - y_C) + (y_B - y_C)(x_A - x_C)}$ ; $\gamma = 1 - \alpha - \beta$

        Color interpolation for every point: linear interpolation V=αVA+βVB+γVC (could be any property)

        Disadvantage: not invariant under projection (depth matters)

        (For 3D properties, interpolate in 3D before projection, rather than on the projected 2D coordinates)
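The explicit barycentric formulas above translate directly to code (a minimal sketch; function names are my own):

```python
def barycentric(x, y, A, B, C):
    """Barycentric coordinates (alpha, beta, gamma) of (x, y) in triangle ABC."""
    xa, ya = A
    xb, yb = B
    xc, yc = C
    alpha = (-(x - xb) * (yc - yb) + (y - yb) * (xc - xb)) / \
            (-(xa - xb) * (yc - yb) + (ya - yb) * (xc - xb))
    beta = (-(x - xc) * (ya - yc) + (y - yc) * (xa - xc)) / \
           (-(xb - xc) * (ya - yc) + (yb - yc) * (xa - xc))
    return alpha, beta, 1.0 - alpha - beta

def interpolate(x, y, A, B, C, vA, vB, vC):
    """V = alpha*V_A + beta*V_B + gamma*V_C (any per-vertex property)."""
    a, b, g = barycentric(x, y, A, B, C)
    return a * vA + b * vB + g * vC
```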

        Apply: Diffuse Color

        Texture Magnification (AA)

        (used for insufficient resolution)

        Texel: A pixel on a texture(纹理元素 / 纹素)

        Easy Cases

        Bilinear Magnification


        Lerp - Linear Interpolation
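Bilinear magnification is two horizontal lerps followed by one vertical lerp; a minimal sketch (assuming, for simplicity, texel centers at integer coordinates):

```python
def lerp(t, a, b):
    """Linear interpolation between a and b."""
    return a + t * (b - a)

def bilinear(texture, u, v):
    """Sample texture[row][col] at continuous coords (u, v):
    lerp horizontally twice, then vertically once."""
    x0, y0 = int(u), int(v)
    x1, y1 = x0 + 1, y0 + 1
    s, t = u - x0, v - y0
    top = lerp(s, texture[y0][x0], texture[y0][x1])
    bottom = lerp(s, texture[y1][x0], texture[y1][x1])
    return lerp(t, top, bottom)

tex = [[0.0, 1.0],
       [2.0, 3.0]]
# Sampling the exact middle averages all four texels.
val = bilinear(tex, 0.5, 0.5)
```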

        Hard Cases

        Problem of point sampling textures

        (can be solved by supersampling but costly)

        Near: jaggies; Far: Moiré (one screen pixel covers many texels — a minification / downsampling problem when the texture resolution is too high relative to the screen)

        Mipmap

        Allowing (fast, approx., square) range queries


        (Image pyramid) "Mip hierarchy": level = D

        Near — range queries at a low mipmap level D; Far — at a high level D

        If want more continuous results: interpolation between levels

        Trilinear Interpolation


        Linear interpolation (lerp) based on continuous D value

        Limitations

        In far, complex regions — overblur (mipmap queries are limited to square, axis-aligned box ranges)

        Applications of Textures

        Environmental Map

        Textures Affecting Shading

        Bump / Normal Mapping

        Adding surface details without adding more triangles (Perturb surface normal per pixel)

        The geometry is unchanged — creates the illusion of bumps and dents

        Displacement Mapping

        Uses the same texture as in bump mapping, but actually moves the vertices


         

        Geometry (Lec. 10-12)

        Representing Ways

        Curves

        Bézier Curves (Explicit)

        (De Casteljau’s Algorithm)


        Evaluating Bézier Curves

        Algebraic Formula

        $b_0^1(t) = (1 - t)\, b_0 + t\, b_1$ ; $b_1^1(t) = (1 - t)\, b_1 + t\, b_2$ ; $b_0^2(t) = (1 - t)\, b_0^1 + t\, b_1^1$ ; hence $b_0^2(t) = (1 - t)^2 b_0 + 2t(1 - t)\, b_1 + t^2 b_2$

        Bernstein form of a Bézier curve of order n:

        $\mathbf{b}^n(t) = \mathbf{b}_0^n(t) = \sum_{j=0}^{n} \mathbf{b}_j\, B_j^n(t)$

        ($\mathbf{b}^n$ — Bézier curve of order $n$ (vector polynomial of degree $n$); $\mathbf{b}_j$ — Bézier control points (vectors in $\mathbb{R}^N$); $B_j^n$ — Bernstein polynomial (scalar polynomial of degree $n$))

        where the Bernstein polynomials are:

        $B_i^n(t) = \binom{n}{i}\, t^i\, (1 - t)^{n - i}$


        At any $t$, $\sum_i B_i^3(t) = 1$ (the Bernstein polynomials form a partition of unity).
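De Casteljau's algorithm is just repeated lerping of the control polygon; a minimal sketch for curves of any order:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeatedly
    lerping adjacent control points until one point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points b0, b1, b2, evaluated at t = 0.5.
b = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```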

        B-Splines

        Could be adjusted partially

        For C2 cont. -> NURBS (Non-Uniform Rational B-Splines)

        Surfaces

        Bézier Surfaces

        Extend Bézier curves to surfaces

        Bicubic Bézier Surface Patch (4x4 array of control points)


        Evaluating Surface Position for Parameters (u,v)

        For bicubic Bézier surface patch:

        Input: 4x4 control points; Output: 2D surface parameterized by (u,v) in [0,1]2

        Method: Separable 1D de Casteljau Algorithm


        (u,v)-separable application of de Casteljau algorithm:

        Mesh Operations: Geometry Processing


        Subdivision (Upsampling)

        Increase resolution

        Loop Subdivision

        Only for triangle faces

        Introduce more triangles: split each triangle into four, then update old and new vertices differently (according to weights)


        Catmull-Clark Subdivision (General Mesh)

        Can be used for various types of faces


        After several steps of Catmull-Clark subdivision:


        Vertex Updating Rules (Quad Mesh)


        Using averages makes the surface smoother — similar in spirit to blurring

        Simplification (Downsampling)

        Reduce the number of triangles to improve performance

        Edge Collapse


        Quadric Error Metrics (geometric error introduced)

        New vertex should minimize its sum of square distance (L2 distance) to previously related triangle planes


        -> Collapsing one edge changes the error of nearby edges ⇒ dynamically update the affected edges (priority queue / …)

        ⇒ A greedy optimization algorithm (works well on flat regions; harder where curvature is high)

        Regularization (Same #triangles)

        Improve quality

         

        Ray Tracing (Lec. 13-16)

        Rasterization cannot handle global effects well (it is a fast, low-quality approximation, but real-time) — e.g., soft shadows, and especially light that bounces more than once

        Ray Tracing is usually off-line, accurate but very slow

        Ray Tracing Basis

        Light Rays

        Ray Casting

        Pinhole Camera Model

        Generating Eye Rays


        Shading Pixels (Local Only)


        Recursive (Whitted-Style) Ray Tracing

        Ray can travel through some (transparent / semi-transparent) media, but need to consider the energy loss


        Ray-Surface Intersection

        Ray Equation

        Ray is defined by its origin and a direction vector

        ($\mathbf{r}$ — point along the ray; $t$ — parameter ("time"); $\mathbf{o}$ — origin; $\mathbf{d}$ — normalized direction)

        $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}, \quad 0 \le t < \infty$

        Ray Intersection

        With Sphere

        Sphere: $\mathbf{p}: (\mathbf{p} - \mathbf{c})^2 - R^2 = 0$

        => Solve for the intersection: $(\mathbf{o} + t\,\mathbf{d} - \mathbf{c})^2 - R^2 = 0$

        This is a quadratic equation $at^2 + bt + c = 0$,

        where $a = \mathbf{d} \cdot \mathbf{d}$ ; $b = 2(\mathbf{o} - \mathbf{c}) \cdot \mathbf{d}$ ; $c = (\mathbf{o} - \mathbf{c}) \cdot (\mathbf{o} - \mathbf{c}) - R^2$ ; $t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
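The quadratic above solves directly; a minimal sketch returning the nearest non-negative root:

```python
import math

def ray_sphere(o, d, c, R):
    """Solve (o + t*d - c)^2 - R^2 = 0; return smallest t >= 0, or None."""
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    cc = sum(x * x for x in oc) - R * R
    disc = b * b - 4 * a * cc
    if disc < 0:
        return None                      # ray misses the sphere
    t0 = (-b - math.sqrt(disc)) / (2 * a)
    t1 = (-b + math.sqrt(disc)) / (2 * a)
    for t in (t0, t1):                   # prefer the nearer non-negative root
        if t >= 0:
            return t
    return None

# Unit sphere at the origin; ray from (0, 0, -5) along +z hits at t = 4.
t = ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
```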

        With Implicit Surface

        Solve for real, positive roots:


        With Triangle Mesh

        Idea: just intersect the ray with each triangle (too slow). A ray and a triangle have 0 or 1 intersections (ignoring degenerate cases)

        -> Triangle is a plane => Ray-plane intersection + Test if inside triangle


        Möller Trumbore Algorithm (for triangle problem)

        A faster approach, giving barycentric coordinate directly

        $\mathbf{O} + t\,\mathbf{D} = (1 - b_1 - b_2)\,\mathbf{P}_0 + b_1\,\mathbf{P}_1 + b_2\,\mathbf{P}_2$ (where $b_1$, $b_2$, $(1 - b_1 - b_2)$ are barycentric coordinates)

        RHS: represents any point in the plane of the triangle (the coefficients sum to 1)

        $\begin{bmatrix} t \\ b_1 \\ b_2 \end{bmatrix} = \frac{1}{\mathbf{S}_1 \cdot \mathbf{E}_1} \begin{bmatrix} \mathbf{S}_2 \cdot \mathbf{E}_2 \\ \mathbf{S}_1 \cdot \mathbf{S} \\ \mathbf{S}_2 \cdot \mathbf{D} \end{bmatrix} \quad \text{where } \begin{cases} \mathbf{E}_1 = \mathbf{P}_1 - \mathbf{P}_0 \\ \mathbf{E}_2 = \mathbf{P}_2 - \mathbf{P}_0 \\ \mathbf{S} = \mathbf{O} - \mathbf{P}_0 \\ \mathbf{S}_1 = \mathbf{D} \times \mathbf{E}_2 \\ \mathbf{S}_2 = \mathbf{S} \times \mathbf{E}_1 \end{cases}$

        Costs = 1 div, 27 mul, 17 add

        Also need to check $t \in [0, \infty)$ and that the hit is inside the triangle ($b_1, b_2, (1 - b_1 - b_2) \ge 0$)
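The solved system above maps directly to code; a minimal sketch of Möller–Trumbore with the $t \ge 0$ and barycentric checks included:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def moller_trumbore(O, D, P0, P1, P2, eps=1e-9):
    """Solve O + t*D = (1-b1-b2)P0 + b1*P1 + b2*P2; return (t, b1, b2) or None."""
    E1, E2 = sub(P1, P0), sub(P2, P0)
    S = sub(O, P0)
    S1, S2 = cross(D, E2), cross(S, E1)
    denom = dot(S1, E1)
    if abs(denom) < eps:
        return None                       # ray parallel to the triangle
    t = dot(S2, E2) / denom
    b1 = dot(S1, S) / denom
    b2 = dot(S2, D) / denom
    if t >= 0 and b1 >= 0 and b2 >= 0 and (1 - b1 - b2) >= 0:
        return t, b1, b2
    return None

# Ray straight down the z-axis onto a triangle in the z = 0 plane.
hit = moller_trumbore((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                      (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```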

        Accelerating Ray-Surface (Triangle) Intersection

        Problem: Naive algorithm (for every triangle) = #pixels x #triangles (x #bounces) (very slow)

        Bounding Volumes

        Ray-AABB (Box) Intersection

        Understanding: Box is the intersection of 3 pairs of slabs

        Specifically: Axis-Aligned Bounding Box (AABB)


        2D example: compute entry/exit $t$ against each pair of slabs, then take the intersection of the resulting $[t_{min}, t_{max}]$ intervals


        Acceleration with AABBs

        Uniform Spatial Partitions (Grids)

        Uniform Grids

        Preprocess - Build Acceleration Grid
        1. Find bounding box
        2. Create grid
        3. Store each object in the overlapping cells


        Ray-Scene Intersection

        Goal: find the first intersection with an object. If the ray passes through a cell that contains objects, test the ray against those objects (a hit is not guaranteed — the object may not actually be intersected)

        (Ray intersect with boxes - fast; with obj - slower)

        Step through grid in ray traversal order (-> similar to the method of drawing a line in the rasterization pipeline)


        For each cell: test intersection with all objects stored at that cell

        Resolution

        Acceleration effects: One cell - no speedup; Too many cells - Inefficiency due to extraneous grid traversal

        Heuristic: #cells = C · #objects, where C ≈ 27 in 3D

        Usage

        Spatial Partitions

        Discretize the Bounding Boxes (in 3D)


        KD-Tree
        KD-Tree Pre-processing

        The split is actually applied recursively in every branch (the illustration showed only one of the split branches)


        Data Structure for KD-Trees
        Traversing a KD-Tree

        If the outer box has an intersection (internal node) -> consider its children (split 1)


        Among the children: leaf 1 (left) and the internal node (right) both have intersections -> the internal node splits further -> … -> all intersections found (after traversing every intersected node)


        When the ray intersects a leaf node's box, it must be tested against all objects stored in that node

        Problem:

        Object Partitioning & Bounding Volume Hierarchy (BVH)

        According to objects other than space, very popular

        Main Idea


        Can introduce some spec. stopping criteria

        Properties: no triangle-box intersection tests needed; no object appears in more than one box

        Problem: the boxes can overlap -> ongoing research on how best to split the objects

        Summary: Building BVHs

        Subdivide a node:

        Termination criteria:

        BVH Traversal (Pseudo Code)


        Radiometry

        Measurement system and units for illumination

        Accurately measure the spatial properties of light

        Terms: radiant flux, intensity, irradiance, radiance

        -> Light emitted from a source (radiant intensity) / light falling on a surface (irradiance) / light traveling along a ray (radiance)

        Perform lighting calculations in a physically correct manner

        Radiant Energy and Flux (Power)

        Radiant Intensity

        The radiant (luminous) intensity is the power per unit solid angle emitted by a point light source (the candela is one of the SI base units; sr — steradian)

        $I(\omega) \equiv \frac{d\Phi}{d\omega} \;\left[\frac{W}{sr}\right] \;\left[\frac{lm}{sr} = cd = \text{candela}\right]$

        Angles and Solid Angles

        Differential solid angle (area of the differential rectangle on the sphere):

        $dA = (r\, d\theta)(r \sin\theta\, d\phi) = r^2 \sin\theta\, d\theta\, d\phi$ ; $d\omega = \frac{dA}{r^2} = \sin\theta\, d\theta\, d\phi$

        Sphere $S^2$: $\Omega = \int_{S^2} d\omega = \int_0^{2\pi} \int_0^{\pi} \sin\theta\, d\theta\, d\phi = 4\pi$

        ω as a Direction Vector: Use ω to denote a dir vector (unit length)

        Isotropic Point Source


        $\Phi = \int_{S^2} I\, d\omega = 4\pi I$ ; $I = \frac{\Phi}{4\pi}$

        Irradiance

        The irradiance is the power per (perpendicular/projected) unit area incident on a surface point

        $E(x) \equiv \frac{d\Phi(x)}{dA} \;\left[\frac{W}{m^2}\right] \;\left[\frac{lm}{m^2} = lux\right]$

        Lambert’s Cosine Law

        -> Recall the Blinn-Phong model / why seasons occur

        Irradiance at surface is proportional to cosine of angle between light dir and surface normal


        Irradiance Falloff

        Recall -> Blinn-Phong’s Model


        Assume light is emitting power Φ in a uniform angular distribution. The irradiance decays, while the intensity is constant

        Radiance

        Radiance is the fundamental field quantity that describes the distribution of light in an environment

        • Quantity associated with a ray
        • All about computing radiance

        The radiance (luminance) is the power emitted, reflected, transmitted or received by a surface, per unit solid angle, per projected unit area (two derivatives) -> the cosθ accounts for projected surface area (p - reflection point)


        $L(p, \omega) \equiv \frac{d^2\Phi(p, \omega)}{d\omega\, dA\, \cos\theta} \;\left[\frac{W}{sr\, m^2}\right] \;\left[\frac{cd}{m^2} = \frac{lm}{sr\, m^2} = nit\right]$

        -> Recall:

        Then: radiance = irradiance per unit solid angle = intensity per unit projected area

        Incident Radiance

        Incident radiance is the irradiance per unit solid angle arriving at the surface

        => it is the light arriving at the surface along a given ray (point on surface and incident direction)


        $L(p, \omega) = \frac{dE(p)}{d\omega\, \cos\theta}$

        Exiting Radiance

        Exiting surface radiance is the intensity per unit projected area leaving the surface


        $L(p, \omega) = \frac{dI(p, \omega)}{dA\, \cos\theta}$

        e.g., for an area light it is the light emitted along a given ray (point on surface and exit direction).

        Irradiance vs. Radiance

        $dE(p, \omega) = L_i(p, \omega)\, \cos\theta\, d\omega$ ; $E(p) = \int_{H^2} L_i(p, \omega)\, \cos\theta\, d\omega$

        (H2 - Unit Hemisphere)

        Bidirectional Reflectance Distribution Function (BRDF)

        The function to indicate the property of reflection (smooth surface / rough surface)

        Reflection at a Point

        Radiance from direction $\omega_i$ turns into the power $E$ that $dA$ receives; that power then becomes radiance leaving in any other direction $\omega_o$

        BRDF

        The BRDF represents how much light is reflected into each outgoing dir ωr from each incoming dir => the energy distribution in different dir


        $f_r(\omega_i \to \omega_r) = \frac{dL_r(\omega_r)}{dE_i(\omega_i)} = \frac{dL_r(\omega_r)}{L_i(\omega_i)\, \cos\theta_i\, d\omega_i} \;\left[\frac{1}{sr}\right]$

        BRDF defines the material

        The Reflection Equation


        $L_r(p, \omega_r) = \int_{H^2} f_r(p, \omega_i \to \omega_r)\, L_i(p, \omega_i)\, \cos\theta_i\, d\omega_i$

        Challenge: Recursive Equation

        Reflected radiance depends on incoming radiance; but incoming radiance depends on reflected radiance (at another point in the scene) <- the lights will not just bounce once

        The Rendering Equation

        For objects can emit light -> adding an emission term to make the reflection function general (Ω+ and H2 both represent the hemisphere)

        $\underbrace{L_o(p, \omega_o)}_{\text{Reflected Light (Output image)}} = \underbrace{L_e(p, \omega_o)}_{\text{Emission}} + \int_{\Omega^+} \underbrace{L_i(p, \omega_i)}_{\text{Incident Light (from source)}}\, \underbrace{f_r(p, \omega_i, \omega_o)}_{\text{BRDF}}\, \underbrace{(\mathbf{n} \cdot \omega_i)}_{\text{Cosine of Inc. Angle}}\, d\omega_i$

        Note: now assume that all dir are pointing outwards

        Understanding the Rendering Equation


        Yellow - Large light source; Blue - Surfaces (interreflection)

        $L_r(x, \omega_r)$ and $L_r(x', -\omega_i)$ are both unknown

        $L_r(x, \omega_r) = L_e(x, \omega_r) + \int_\Omega L_r(x', -\omega_i)\, f(x, \omega_i, \omega_r)\, \cos\theta_i\, d\omega_i$

        is a Fredholm Integral Equation of second kind [extensively studied numerically] with canonical form

        Replacing the kernel of the equation with the Light Transport Operator $K$:

        $l(u) = e(u) + \int l(v)\, K(u, v)\, dv \quad\Longrightarrow\quad L = E + KL$

        => To solve the rendering equation: discretize to a simple matrix equation [or a system of simultaneous linear equations: $L$, $E$ are vectors, $K$ is the light transport matrix], then expand the inverse as a geometric series:

        $L = E + KL \;\Rightarrow\; (I - K)L = E \;\Rightarrow\; L = (I - K)^{-1} E = (I + K + K^2 + K^3 + \cdots)E = E + KE + K^2 E + K^3 E + \cdots$

        Global Illumination

        In ray tracing => Global Illumination (Higher order K)

        Monte Carlo Path Tracing

        Probability Review

        Random Variables

        | Type | Description |
        | --- | --- |
        | $X$ | Random variable, represents a distribution of potential values |
        | $X \sim p(x)$ | Probability density function (PDF), describing the relative probability of a random process choosing value $x$ |

        Uniform PDF: all values over a domain are equally likely

        Probabilities

        $n$ discrete values $x_i$ with probabilities $p_i$

        Requirements of a probability distribution: $p_i \ge 0$ and $\sum_i p_i = 1$

        Expected Value

        Expected value of $X$ (drawn from the distribution above): $E[X] = \sum_i x_i\, p_i$

        Continuous Case: Probability Density Function

        Conditions on $p(x)$: $p(x) \ge 0$ and $\int p(x)\, dx = 1$

        Expected value: $E[X] = \int x\, p(x)\, dx$

        Function of a Random Value

        $X \sim p(x)$ and $Y = f(X)$

        => Expected value: $E[Y] = E[f(X)] = \int f(x)\, p(x)\, dx$

        Monte Carlo Integration

        We want to solve an integral but it can be too hard to solve analytically => Monte Carlo (numerical method)

        Definite integral $\int_a^b f(x)\, dx$, with random variables $X_i \sim p(x)$

        Monte Carlo estimator:

        $F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}, \quad X_i \sim p(x)$
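The estimator is easy to try on a known integral; a minimal sketch with a uniform pdf $p(x) = 1/(b-a)$:

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """F_N = (1/N) * sum f(X_i)/p(X_i) with X_i uniform on [a, b]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(a, b)
        total += f(x) / (1.0 / (b - a))   # divide by the pdf
    return total / n

# Estimate the integral of x^2 over [0, 1]; the exact value is 1/3.
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
```

More samples shrink the variance of $F_N$ at the usual $O(1/\sqrt{N})$ Monte Carlo rate.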

        Path Tracing

        Motivation: Whitted-Style RT

        Whitted-style ray tracing

        Problems => Not 100% reasonable

        The Whitted-Style is not that correct -> But the rendering equation is correct

        But involves: solving an integral over the hemisphere (=> Monte Carlo) and recursive execution

        A Simple Monte Carlo Solution

        Suppose we want to render one pixel (point) in the following scene for direct illumination only

        For the reflection equation:

        Lo(p,ωo)=Ω+Li(p,ωi)fr(p,ωi,ωo)(nωi) dωi

        Consider the direction as the random variable, want to compute the radiance at p towards the camera

        Monte Carlo integration:

        $\int_a^b f(x)\, dx \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f(X_k)}{p(X_k)}, \quad X_k \sim p(x)$ ; where $f(x) = L_i(p, \omega_i)\, f_r(p, \omega_i, \omega_o)\, (\mathbf{n} \cdot \omega_i)$

        and pdf: $p(\omega_i) = 1/2\pi$ (assuming uniform sampling of the hemisphere)

        => In general:

        $L_o(p, \omega_o) = \int_{\Omega^+} L_i(p, \omega_i)\, f_r(p, \omega_i, \omega_o)\, (\mathbf{n} \cdot \omega_i)\, d\omega_i \approx \frac{1}{N} \sum_{i=1}^{N} \frac{L_i(p, \omega_i)\, f_r(p, \omega_i, \omega_o)\, (\mathbf{n} \cdot \omega_i)}{p(\omega_i)}$

        Algorithm: (Direct illumination)

        Introducing Global Illumination

        Further step: what if a ray hits an object?


        Q also reflects light toward P => the indirect illumination at P equals the direct illumination at Q

        New Algorithm:

        Problems:

        Path Tracing Algorithm: (Recursive)

        Ray Generation

        Tracing more paths through each pixel and average the radiance (similar to ray casting in ray tracing)


        Russian Roulette (RR)

        Basic Idea:

        Previously, always shoot a ray at a shading point and get the result Lo

        Suppose we can manually set a probability P (0 < P < 1):

        => the expected value is still $L_o$: $E = P \cdot (L_o / P) + (1 - P) \cdot 0 = L_o$ (discrete)
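The unbiasedness of this trick is easy to check empirically; a minimal sketch (names are my own):

```python
import random

def shade_with_rr(lo, p_rr, rng):
    """With probability P continue and divide the result by P;
    otherwise terminate and return 0. E = P*(Lo/P) + (1-P)*0 = Lo."""
    if rng.random() < p_rr:
        return lo / p_rr
    return 0.0

rng = random.Random(42)
n = 200000
# The sample mean converges to Lo = 1.0 despite early terminations.
mean = sum(shade_with_rr(1.0, 0.8, rng) for _ in range(n)) / n
```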

        Algorithm with RR: (real correct version)

        Samples per pixel (SPP): low SPP => noisy results

        Problem:

        Sampling the Light

        For area light sources: if we uniformly sample the hemisphere at the shading point, only lights covering a large solid angle are likely to be hit — most rays are wasted

        Monte Carlo method allows any sampling method => make the waste less


        Assume uniform sampling on the light: pdf = $1/A$ (because $\int \text{pdf}\, dA = 1$)

        But the rendering equation integrates over solid angle: $L_o = \int L_i f_r \cos\theta\, d\omega$ => since we sample on the light, rewrite the integral over the light's area, converting $d\omega$ to $dA$

        Projected area on the unit sphere: $d\omega = \frac{dA\, \cos\theta'}{\|x' - x\|^2}$ ($\theta'$ is measured at the light, not $\theta$ at the shading point)

        Rewrite the rendering equation with dA:

        $L_o(x, \omega_o) = \int_{\Omega^+} L_i(x, \omega_i)\, f_r(x, \omega_i, \omega_o)\, \cos\theta\, d\omega_i = \int_A L_i(x, \omega_i)\, f_r(x, \omega_i, \omega_o)\, \frac{\cos\theta\, \cos\theta'}{\|x' - x\|^2}\, dA$

        Now the integral is over the light => pdf = $1/A$

        Now we consider the radiance coming from 2 parts:

        Light Sampling Algorithm:

        Finally: how do we know whether the sample on the light is blocked? Shoot a shadow ray from the shading point toward the sample.

         

        Materials and Appearances (Lec. 17)

        Material == BRDF

        Materials

        Diffuse / Lambertian Material


        Light is equally reflected in every output direction ($f_r = c$, a constant)


        Suppose the incident light is uniform (identical radiance from every direction). By energy balance, incident and exiting radiance are equal.

        $f_r = \frac{\rho}{\pi}$ ($\rho$ — albedo, the diffuse color)

        $L_o(\omega_o) = \int_{H^2} f_r\, L_i(\omega_i)\, \cos\theta_i\, d\omega_i = f_r\, L_i \int_{H^2} \cos\theta_i\, d\omega_i = \pi f_r L_i$

        Glossy Material (BRDF)

        Air <-> copper / aluminum


        Ideal Reflective / Refractive Material (BSDF*)

        Air <-> water interface / glass interface (with partial abs)

        -> the “S” in “BSDF” is for “Scatter” => both reflection and refraction are ok


        Reflection & Refraction

        Perfect Specular Reflection

        Left: $\theta = \theta_o = \theta_i$ ; Right: $\phi_o = (\phi_i + \pi) \bmod 2\pi$ (top-down view)

        $\omega_o + \omega_i = 2\cos\theta\,\mathbf{n} = 2(\omega_i \cdot \mathbf{n})\,\mathbf{n} \;\Rightarrow\; \omega_o = -\omega_i + 2(\omega_i \cdot \mathbf{n})\,\mathbf{n}$

        Specular Refraction

        Light refracts when it enters a new medium

        Snell’s Law

        Transmitted angle depends on: Index of refraction (IOR) for incident and exiting ray

        Left: $\eta_i \sin\theta_i = \eta_t \sin\theta_t$ ; Right: $\varphi_t = \varphi_i \pm \pi$ (similar to reflection)

        *Diamond has a very high index of refraction, so light bends and bounces heavily inside => shiny with various colors

        $\eta_i \sin\theta_i = \eta_t \sin\theta_t \;\Rightarrow\; \cos\theta_t = \sqrt{1 - \sin^2\theta_t} = \sqrt{1 - \left(\frac{\eta_i}{\eta_t}\right)^2 \sin^2\theta_i} = \sqrt{1 - \left(\frac{\eta_i}{\eta_t}\right)^2 (1 - \cos^2\theta_i)}$

        For refraction to occur, $\cos\theta_t$ must be real => need $\frac{\eta_i}{\eta_t}\sin\theta_i \le 1$ (always satisfied when $\eta_i < \eta_t$)

        Total internal reflection: when the inside medium has the higher index of refraction ($\eta_i / \eta_t > 1$), light incident on the boundary at a large enough angle cannot exit the medium (e.g., Snell's window / circle)

        Fresnel Reflection / Term

        Reflectance depends on incident angle (and polarization of light)

        e.g., reflectance increases with grazing angle

        Formulae:

        Accurate: (considering the polarization)

        $R_{eff} = \frac{1}{2}(R_s + R_p)$, where (with $\cos\theta_t = \sqrt{1 - \left(\frac{n_1}{n_2}\sin\theta_i\right)^2}$): $R_s = \left| \frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t} \right|^2 \quad R_p = \left| \frac{n_1 \cos\theta_t - n_2 \cos\theta_i}{n_1 \cos\theta_t + n_2 \cos\theta_i} \right|^2$

        Approximation: Schlick’s approximation

        $R(\theta) = R_0 + (1 - R_0)(1 - \cos\theta)^5$, where $R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2$
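Schlick's approximation is a one-liner; a minimal sketch with the classic air-to-glass numbers:

```python
def schlick(cos_theta, n1, n2):
    """Schlick's approximation: R(theta) = R0 + (1 - R0)(1 - cos theta)^5."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Air -> glass (n = 1.5) at normal incidence: R0 = ((1 - 1.5)/(1 + 1.5))^2 = 0.04.
r_normal = schlick(1.0, 1.0, 1.5)
# At a grazing angle (cos theta -> 0) the reflectance approaches 1.
r_grazing = schlick(0.0, 1.0, 1.5)
```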

        Microfacet Material

        Microfacet Theory

        View from far away: material and appearance; from nearby: geometry

        Rough Surface:

        Individual elements of surface act like mirrors

        image-20211223113238287

        Microfacet BRDF

        Key: the distribution of microfacets’ normals => the roughness

        Microfacets are mirrors

        image-20211223114726490

        (F - Fresnel term; G - shadowing-masking term (microfacets occlude each other, mostly near grazing angles); D - distribution of normals)

        $f(i, o) = \dfrac{F(i, h)\, G(i, o, h)\, D(h)}{4\, (n \cdot i)\, (n \cdot o)}$
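        A scalar sketch of evaluating a microfacet BRDF of this form. The specific term choices here (Schlick Fresnel, GGX/Trowbridge-Reitz distribution, the “implicit” G) are illustrative assumptions, not the combination prescribed by the course:

```python
import math

def microfacet_brdf(n_dot_i, n_dot_o, n_dot_h, i_dot_h, alpha=0.3, r0=0.04):
    """Illustrative evaluation of f = F * G * D / (4 (n.i)(n.o)).

    F - Schlick's approximation with base reflectance r0,
    D - GGX normal distribution with roughness alpha,
    G - the 'implicit' shadowing-masking term G = (n.i)(n.o).
    """
    f = r0 + (1.0 - r0) * (1.0 - i_dot_h) ** 5
    d_denom = n_dot_h * n_dot_h * (alpha * alpha - 1.0) + 1.0
    d = alpha * alpha / (math.pi * d_denom * d_denom)
    g = n_dot_i * n_dot_o
    return f * g * d / (4.0 * n_dot_i * n_dot_o)

# At normal incidence everything aligns with the normal:
val = microfacet_brdf(1.0, 1.0, 1.0, 1.0)
```

        The roughness `alpha` controls how concentrated D is around the normal, which is exactly the "distribution of microfacets’ normals => roughness" connection above.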

        Isotropic / Anisotropic Materials (BRDFs)

        Key: directionality of underlying surface (following are the surface normals and the BRDF with fixed wi and varied wo)

        Properties of BRDFs

        Measuring BRDFs

        Image-Based BRDF Measurement

        image-20211223130441963

        Instrument: gonioreflectometer

        General approach: (curse of dimensionality)

        for each outgoing dir ωo:
            move light to illuminate surface with a thin beam from ωo
            for each incoming dir ωi:
                move sensor to be at dir ωi from surface
                measure the incident radiance

        Improving efficiency:

        -> Important lib for BRDFs: MERL BRDF Database

         

        Advanced Topics in Rendering (Lec. 18)

        Advanced light transport and materials

        Advanced Light Transport

        Biased vs. Unbiased Monte Carlo Estimators

        Unbiased Light Transport Methods

        Bidirectional Path Tracing (BDPT)

        A path connects the camera and the light

        BDPT: Traces sub-paths from both the camera and the light; connects the end points of the two sub-paths

        image-20211223220006646

        Properties:

        Metropolis Light Transport (MLT)

        A Markov Chain Monte Carlo (MCMC) application

        Jumping from the current sample to the next with some PDF

        Key idea: Locally perturb an existing path to get a new path

        Good at locally exploring difficult light paths

        image-20211223221502771

        Properties:

        Biased Light Transport Methods

        Photon Mapping

        A biased approach & A two-stage method -> Good at handling Specular-Diffuse-Specular (SDS) paths and generating caustics

        Approach (variations apply):

        Calculation: Local density estimation

        Areas with more photons should be brighter. For each shading point, find the nearest N photons and take the surface area they cover. (Density = Num / Area)

        image-20211225105208322

        Smaller N -> noisy; Larger N -> blurry

        Why biased: the local density estimate uses ΔN/ΔA, which is not the true dN/dA. But in the limit, as more photons are emitted, the same N photons cover a smaller ΔA, so ΔA → dA (biased but consistent)

        => Biased == blurry; consistent == not blurry in the limit of infinitely many samples
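        The local density estimation step above can be sketched directly; this uses 2D points for simplicity (a flat surface patch), which is an assumption of the sketch:

```python
import math

def photon_density(shading_pt, photons, N=5):
    """Local density estimate used in photon mapping: take the N photons
    nearest to the shading point and divide by the area of the disc
    that just encloses them (density = count / area)."""
    dists = sorted(math.dist(shading_pt, p) for p in photons)
    r = dists[N - 1]                       # radius covering the N nearest
    return N / (math.pi * r * r)
```

        With a larger N the enclosing disc grows, averaging over a wider area: less noise but more blur, as the notes say.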

        Vertex Connection and Merging (VCM)

        A Combination of BDPT and Photon Mapping

        Key: don’t waste the sub-paths in BDPT whose end points cannot be connected but can be merged; use photon mapping to handle the merging of nearby photons

        image-20211225112233746

        Instant Radiosity (IR)

        = Many-light approaches

        Key: Lit surfaces can be treated as light sources

        Approach:

        image-20211225112704836

        Properties:

        Advanced Appearance Modeling

        Non-Surface Models

        Participating Media

        At any point as light travels through a participating medium, it can be (partially) absorbed and scattered (cloud / …)

        image-20211225113516813

        Use Phase Function (how to scatter) to describe the angular distribution of light scattering at any point x within participating media

        image-20211225113626278

        Rendering:

        image-20211225113834027

        Hair Appearance

        Light scatters not on a surface but on a thin cylinder (highlights have 2 types: colored and colorless)

        Kajiya-Kay Model

        diffuse + scattered specular (forming a cone) => not physically realistic

        image-20211225114627302

        Marschner Model

        (widely used model) => very good results

        some light is reflected (R) and some penetrates and refracts (T) (-> TT / TRT / …)

        image-20211225115036874

        Glass-like cylinder (black hair absorbs more; brown / blonde absorbs less => color) + 3 types of light interactions (R / TT / TRT)

        image-20211225161647989 image-20211225161805983

        => extremely high costs

        Fur Appearance

        cannot simply reuse human-hair models => different biological structures

        Common: 3 layer structure

        image-20211225162152287 image-20211225162211761

        Difference: animal fur has a much bigger medulla than human hair => more complex refraction inside (need to simulate the medulla for animal fur)

        Double Cylinder Model

        image-20211225162523306 image-20211225162549901

        Adds 2 scattered lobes => TTˢ and TRTˢ

        Granular Material

        avoid explicit modeling => procedural definition

        Surface Models

        Translucent Materials

        Jellyfish / Jade / …

        Subsurface Scattering

        light exits at a different point than it enters

        Actually violates a fundamental assumption of the BRDF (on the diffuse surface with BRDF, light reflects at the same point)

        image-20211225163955908

        Scattering functions:

        BSSRDF: generalization of BRDF; exitant radiance at one point due to incident differential irradiance at another point: S(xi,ωi,xo,ωo)

        => integrating over all points on the surface (area) and all directions

        $L(x_o, \omega_o) = \int_A \int_{H^2} S(x_i, \omega_i, x_o, \omega_o)\, L_i(x_i, \omega_i)\, \cos\theta_i \, d\omega_i \, dA$

        Dipole Approximation

        Approximate light diffusion by introducing two point sources

        image-20211225164155969

        Cloth

        A collection of twisted fibers (different levels of twist: fibers -> ply -> yarn)

        image-20211225164546984

        Render as Surface

        Given the weaving pattern, calculate the overall behavior (using BRDF)

        image-20211225164655172

        => limitation: anisotropic cloth

        Render as Participating Media

        Properties of individual fibers & their distribution -> scattering parameters

        => Really high costs

        Render as Actual Fibers

        Render every fiber explicitly

        => Similar to hair, extremely high costs

        Detailed Appearance

        Current renderers produce results that look too perfect (reality has scratches / pores / …)

        Recap: Microfacet BRDF

        Surface = Specular microfacets + Statistical normals

        $f(i, o) = \dfrac{F(i, h)\, G(i, o, h)\, D(h)}{4\, (n \cdot i)\, (n \cdot o)}$

        (D - NDF: Normal Distribution Function; we have the smooth normal distribution on the left, but want the noisy one on the right)

        image-20211225165457893

        Defining details requires very high-resolution normal maps

        => too difficult to render: on a bumpy specular surface, paths rarely connect to both the camera and the light source

        Solution: BRDF over a pixel

        image-20211225165818917

        p-NDFs have sharp features

        => ocean waves / scratch / …

        Recent Trend: Wave optics (for very small scales / very short time durations) rather than geometric optics

        Procedural Appearance

        define details without textures => compute a noise function on the fly

         

        Cameras, Lenses and Lightings (Lec. 19)

        Imaging = Synthesis + Capture

        The sensor records the irradiance

        Field of View (FOV)

        Effect of Focal Length on FOV

        image-20211225210551845 image-20211225210604243

        For a fixed sensor size, decreasing the focal length increases the field of view (assuming the sensor is fully used)

        $\text{FOV} = 2\arctan\left(\dfrac{h}{2f}\right)$
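        The FOV formula above in code; whether h is the sensor width, height, or diagonal determines which FOV you get (the 35mm-format figures quoted below are diagonal FOVs, with the diagonal of a 36 x 24 mm frame being about 43.3 mm):

```python
import math

def fov_degrees(h_mm, focal_mm):
    """FOV = 2 * arctan(h / (2f)) for sensor dimension h and focal length f."""
    return math.degrees(2.0 * math.atan(h_mm / (2.0 * focal_mm)))

# Diagonal FOV of a 50 mm lens on 35mm film (43.3 mm diagonal):
normal_fov = fov_degrees(43.3, 50.0)
```

        Plugging in 17 mm and 200 mm reproduces the wide-angle (~104°) and telephoto (~12°) figures from the notes.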

        Focal lengths of lenses are conventionally quoted relative to 35mm-format film (36 x 24 mm)

        -> e.g., 17 mm is wide angle (104°); 50 mm is a “normal” lens (47°); 200 mm is a telephoto lens (12°)

        Effect of Sensor Size on FOV

        image-20211225211019286

        Maintain FOV on smaller sensor? => shorter focal length

        image-20211225211115882

        Exposure

        Exposure H = Time (T) × Irradiance (E)

        Exposure Controls

        Some pairs

        F-stop:  1.4    2.0    2.8    4.0    5.6    8.0    11.0   16.0   22.0   32.0
        Shutter: 1/500  1/250  1/125  1/60   1/30   1/15   1/8    1/4    1/2    1
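        Since H = T × E and irradiance E scales as 1/N² (the aperture area shrinks quadratically with the f-number N), each F-stop/shutter pair above should give roughly the same exposure. A quick check, with constant factors omitted:

```python
# Each (f-number, shutter time) pair from the exposure table.
pairs = [(1.4, 1/500), (2.0, 1/250), (2.8, 1/125), (4.0, 1/60),
         (5.6, 1/30), (8.0, 1/15), (11.0, 1/8), (16.0, 1/4)]

# Relative exposure H ~ T / N^2 for each pair.
exposures = [t / (n * n) for n, t in pairs]
```

        The values agree to within about 10%; the residual spread comes from f-stops being rounded powers of sqrt(2) and shutter speeds rounded to convenient fractions.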

        Fast and Slow Photography

        Thin Lens Approximation

        Real Lens Elements Are Not Ideal – Aberrations

        image-20211225220453824

        Ideal Thin Lens – Focal Point

        image-20211225220523732

        The Thin Lens Equation

        image-20211225220644027

        Gaussian Thin Lens Equation

        $\dfrac{1}{f} = \dfrac{1}{z_i} + \dfrac{1}{z_o}$

        Defocus Blur

        Computing Circle of Confusion (CoC) Size

        If not at the focal plane -> not projected at the sensor plane -> blurry

        image-20211225220925821

        Circle of confusion is proportional to the size of the aperture

        $\dfrac{C}{A} = \dfrac{d'}{z_i} = \dfrac{|z_s - z_i|}{z_i}$
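        Combining the thin lens equation with the CoC relation above gives a small sketch (all lengths in mm; the function name and the f-number parameterization of the aperture, A = f/N, are conveniences of this sketch):

```python
def coc_diameter(f, N, z_o, z_focus):
    """Circle-of-confusion diameter for an object at depth z_o when a
    thin lens (focal length f, aperture A = f/N) is focused at z_focus.

    Uses 1/f = 1/z_i + 1/z_o to find where each depth images, then
    applies C = A * |z_s - z_i| / z_i.
    """
    A = f / N                                # aperture diameter from f-number
    z_s = 1.0 / (1.0 / f - 1.0 / z_focus)    # sensor plane (in-focus image)
    z_i = 1.0 / (1.0 / f - 1.0 / z_o)        # where the object actually images
    return A * abs(z_s - z_i) / z_i
```

        An object at the focus distance produces zero blur, and stopping down (larger N, smaller A) shrinks the CoC proportionally, matching "CoC is proportional to the size of the aperture".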

        Ray Tracing for Defocus Blur (Thin Lenses)

        image-20211225223534104

        (One possible) Setup:

        Rendering:

        Depth of Field

        Circle of Confusion for Depth of Field

        Depth range in a scene where the corresponding CoC is considered small enough

        image-20211225223930501

        Depth of Field

        image-20211225224005805

        $\text{DOF} = D_F - D_N, \quad D_F = \dfrac{D_S f^2}{f^2 - N C (D_S - f)}, \quad D_N = \dfrac{D_S f^2}{f^2 + N C (D_S - f)}$
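        The DOF formulas above can be sketched directly (all lengths in mm; the 0.03 mm maximum-CoC value used below is the conventional 35mm-film criterion, an assumption of this sketch):

```python
def depth_of_field(f, N, C, D_S):
    """Depth of field around subject distance D_S for focal length f,
    f-number N, and maximum acceptable circle of confusion C.

    D_F and D_N are the far and near limits of acceptable sharpness;
    DOF = D_F - D_N.
    """
    D_F = D_S * f * f / (f * f - N * C * (D_S - f))   # far limit
    D_N = D_S * f * f / (f * f + N * C * (D_S - f))   # near limit
    return D_F - D_N

# 50 mm lens focused at 2 m, C = 0.03 mm:
dof_f2 = depth_of_field(50.0, 2.0, 0.03, 2000.0)
dof_f8 = depth_of_field(50.0, 8.0, 0.03, 2000.0)
```

        Stopping down from f/2 to f/8 visibly widens the depth of field, since N multiplies the term that separates the near and far limits.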

        Light Field / Lumigraph

        The Plenoptic Function

        Function

        The Plenoptic Function (Adelson & Bergen): The set of all things that we can ever see

        Start with a stationary person and parameterize everything that can be seen

        Sampling the Plenoptic Function (Top View)

        image-20220112235959511

        Ray

        Ignoring time and color for the moment: 5D (3D position + 2D direction): $P(\theta, \phi, V_X, V_Y, V_Z)$

        Ray Reuse: along an infinite line, assuming radiance is constant (vacuum, non-dispersive medium): 4D (2D position + 2D direction)

        Only need plenoptic surface: The surface of a cube holds all the radiance info due to the enclosed obj

        image-20220113000425315

        Lightfield: The intensity of light from any position to any direction (2D pos + 2D dir)

        Synthesizing novel views

        image-20220113000719480

        Lumigraph / Lightfield

        Outside convex space (treat the right side as a black box that can be neglected; we only need to know what light arrives on the left side)

        image-20220113002452425

        Organization: 2 planes (parameterization: uv and st) can define a lightfield

        image-20220113002708739 image-20220113002914819

        Hold st, vary uv => an image

        image-20220113003011384

        Integral Imaging (“Fly’s Eye” Lenslets)

        image-20220113003332328

        Spatially-multiplexed light field capture using lenslets: Impose fixed trade-off between spatial and angular resolution

        Light Field Camera

        [Ren Ng] Microlens design => Computational Refocusing (virtually changing focal length & aperture size, etc. after taking a photo)

        Understanding

        image-20220113003853466 image-20220113003942494

        The left side of the microlens array is a light field

        To Get a “Regular” Photo

        image-20220113004306452

        Computational Refocusing

        Properties

        Problem:

        Computer Graphics is about trade-offs

         

        Color and Perception (Lec. 20)

        Physical Basis of Color

        The Visible Spectrum of Light

        Electromagnetic Radiation: Oscillations of different frequencies (wavelengths)

        image-20220113114111425

        Spectral Power Distribution (SPD)

        Salient property in measuring light: The amount of light present at each wavelength [radiometric units / nanometer OR unitless]

        Daylight Spectral Power Distributions Vary

        image-20220113114242945

        SPD is linear: mixing lights adds their wavelength distributions

        Color

        Color is a phenomenon of human perception (not a universal property of light)

        Biological Basis of Color

        Anatomy of The Human Eye

        image-20220113114646978

        Retinal Photoreceptor Cells: Rods and Cones

        Photoreceptor cells

        image-20220113115103092

        Spectral Response of Human Cone Cells

        Three types: S, M, L (corresponding to peak responses at short / medium / long wavelengths)

        image-20220113115240954

        But the relative numbers and distributions of the 3 cone types vary greatly between individuals

        Tristimulus Theory of Color

        Spectral Response of Human Cone Cells

        From the prev graph:

        $S = \int r_S(\lambda)\, s(\lambda)\, d\lambda, \quad M = \int r_M(\lambda)\, s(\lambda)\, d\lambda, \quad L = \int r_L(\lambda)\, s(\lambda)\, d\lambda$
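        These integrals are straightforward to evaluate numerically. The real cone sensitivity curves are measured data, so the Gaussian stand-in below is purely a hypothetical placeholder for illustration:

```python
import math

def cone_response(spd, sensitivity, lam_min=380.0, lam_max=780.0, steps=400):
    """Numerically integrate S = integral of r(lambda) * s(lambda) d(lambda)
    over the visible range using the midpoint rule.  spd and sensitivity
    are callables of wavelength in nm."""
    d_lam = (lam_max - lam_min) / steps
    return sum(sensitivity(lam_min + (k + 0.5) * d_lam) *
               spd(lam_min + (k + 0.5) * d_lam)
               for k in range(steps)) * d_lam

# Hypothetical L-cone sensitivity peaking near 560 nm, flat SPD of 1:
r_L = lambda lam: math.exp(-((lam - 560.0) / 50.0) ** 2)
L = cone_response(lambda lam: 1.0, r_L)
```

        For a flat SPD, the result is just the area under the sensitivity curve, which for this Gaussian is $50\sqrt{\pi}$.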

        The Human Visual System

        The human eye does not measure, and the brain does not receive, information about each individual wavelength of light

        => The eye “sees” only 3 response values (S, M, L), and these are the only info available to the brain

        image-20220113115849174

        Metamerism

        Metamerism

        Metamers

        Metamers are 2 different spectra (∞-dimensional) that project to the same (S, M, L) (3-dimensional) response

        Result: different spectra have the same color to a human

        Critical to color reproduction: don’t have to reproduce the full spectrum of a real world scene

        Metamerism

        The theory behind color matching

        image-20220113130425592

        Color Reproduction / Matching

        Additive Color

        Addition: when R, G, and B are all 255, we get white

        Additive color matching experiment: Use primary colors to obtain a required color => may result in negative values

        CIE RGB Color Matching

        Same as before but primaries are monochromatic light (single wavelength) and the test light is also a monochromatic light

        image-20220113171545597

         

        Function: Graph plots how much of each CIE RGB primary light must be combined to match a monochromatic light of wavelength given on x-axis

        image-20220113171627886

        $R_{\text{CIE RGB}} = \int_\lambda s(\lambda)\, \bar{r}(\lambda)\, d\lambda, \quad G_{\text{CIE RGB}} = \int_\lambda s(\lambda)\, \bar{g}(\lambda)\, d\lambda, \quad B_{\text{CIE RGB}} = \int_\lambda s(\lambda)\, \bar{b}(\lambda)\, d\lambda$

        For any given color, it can be represented by these functions

        Color Space

        Standard Color Space

        Standardized RGB (sRGB)

        A Universal Color Space: CIE XYZ

        Imaginary set of standard color primaries X, Y, Z (artificially defined so that the matching functions have no negative values)

        image-20220113172534210

        Designed such that:

        Separating Luminance, Chromaticity

        image-20220113172845672

        CIE Chromaticity Diagram

        The diagram represents the Gamut

        image-20220113173115648

        The curved boundary:

        Any color inside is less pure (mixed, white is the least pure color)

        Gamut

        Gamut is the set of chromaticities generated by a set of color primaries. Different color spaces represent different ranges of colors, so they have different gamuts (cover different regions on the chromaticity diagram).

        image-20220113173319061

        Perceptually Organized Color Spaces

        HSV Color Space (Hue-Saturation-Value)

        Axes correspond to artistic characteristics of color

        Widely used in a “color picker” (e.g., in Photoshop)

        In color adjustment: HSL (Hue-Saturation-Lightness)

        image-20220113173527131

        CIELAB Space (AKA L*a*b)

        A commonly used color space that strives for perceptual uniformity

        image-20220113173937415

        Opponent Color Theory

        There’s a good neurological basis for the color space dimensions in CIELAB

        And colors / lightness are relative

        CMYK: A Subtractive Color Space

        CMYK: Cyan, Magenta, Yellow, and Key (Black) (although black can be obtained by mixing C, M, and Y, a separate black ink lowers printing costs)

        The more we mix, the darker it will be.

        image-20220113174815438

         

        Animation (Lec. 21-22)

        -> See more in GAMES103 - Physics-Based Animation