SIGGRAPH 2015: Advances in Real-Time Rendering course
Stochastic Screen-Space Reflections
Tomasz Stachowiak, Electronic Arts / Frostbite
Yasin Uludag, Electronic Arts / DICE
Agenda
 Motivation and requirements
 Previous work
 Core algorithm
 Secret sauce
Motivation and requirements
Our requirements
 Sharp and blurry reflections
 Contact hardening
 Specular elongation
 Per-pixel roughness and normal
Previous work
Previous work
 Basic mirror-only SSR
 Shoot rays from G-Buffer
 Normals just work
 Raymarch depth
 Return color at hit point
 Final lighting of last frame
 Reprojected
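The basic mirror-only march above can be sketched in a few lines. This is a hypothetical 1-D illustration (a real implementation marches a 2-D screen-space ray against the depth buffer, as in [McGuire14]); `linear_march` and its parameters are ours, not Frostbite code:

```python
def linear_march(depth_buffer, start_x, start_depth, dx, dd, max_steps=64):
    """March a ray (dx per step in screen space, dd per step in depth)
    until it falls behind the stored depth; return hit index or None."""
    x, d = float(start_x), float(start_depth)
    for _ in range(max_steps):
        x += dx
        d += dd
        ix = int(x)
        if ix < 0 or ix >= len(depth_buffer):
            return None  # ray left the screen
        if d >= depth_buffer[ix]:
            return ix    # ray is now behind the surface: report a hit
    return None

# A floor at depth 5 with a closer object at depth 2 starting at x = 8.
depth = [5.0] * 8 + [2.0] * 4
hit = linear_march(depth, start_x=0, start_depth=3.0, dx=1.0, dd=-0.1)
```

At the hit point the previous frame's final lighting is fetched (reprojected), which is what makes the technique cheap.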
Previous work
 Killzone Shadow Fall [Valient14]
No contact hardening
Previous work
 Filtered Importance Sampling
Our approach
 BRDF importance sampling
 Hit point reuse across neighbors
 Prefiltered samples
 Weighted by each BRDF
Our approach - screenshots
Algorithm breakdown
Tile classification
Ray allocation
Cheap raytracing / HQ raytracing
Color resolve
Temporal filter
Tile-based classification
 Split screen in tiles
 Shoot tracer rays at 1/8th resolution
 Estimate tile importance
 Hit anything at all?
 Perceptual variance of hit values
 Allocate ray count per tile
 Between user-specified min and max
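The slide does not spell out the importance metric, so this sketch assumes a simple heuristic: tiles whose tracer rays hit nothing get the minimum budget, and otherwise the budget scales with the perceptual variance of the hit colors. `allocate_rays` and both inputs (normalized to [0, 1]) are our illustrative inventions:

```python
def allocate_rays(tile_hit_ratio, tile_variance, min_rays=1, max_rays=4):
    """Allocate a per-tile ray budget from the 1/8-resolution tracer pass.
    tile_hit_ratio: fraction of tracer rays that hit anything.
    tile_variance: perceptual variance of the hit colors."""
    if tile_hit_ratio == 0.0:
        return min_rays          # nothing to reflect here; spend the minimum
    importance = tile_hit_ratio * tile_variance
    return min_rays + round(importance * (max_rays - min_rays))
```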
Ray classification
 Classify each pixel based on roughness
 Expensive rays
 Hierarchical raytracing
 Exact intersection
 Cheap rays
 Simple linear march
 May skip thin objects
Hierarchical tracing
 Stackless ray walk of min-Z pyramid
mip = 0;
while (mip > -1)
    step through current cell;
    if (above Z plane) ++mip;
    if (below Z plane) --mip;
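A runnable 1-D analogue of this walk, under our own simplifications (the real tracer walks a 2-D screen-space ray, cf. [Uludag14]); `build_min_pyramid` and `hiz_trace` are illustrative names:

```python
def build_min_pyramid(depth):
    """Min-Z pyramid: level 0 is the depth buffer, each level halves it."""
    levels = [list(depth)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([min(prev[i], prev[i + 1])
                       for i in range(0, len(prev) - 1, 2)])
    return levels

def hiz_trace(depth, d0, slope, max_iter=64):
    """March the ray d(x) = d0 + slope*x: skip whole cells while the ray
    stays in front of their min depth, refine when it would cross, and
    report a crossing at the finest level as the hit."""
    pyr = build_min_pyramid(depth)
    x, level = 0, 0
    for _ in range(max_iter):
        if x >= len(depth):
            return None                      # ray left the screen
        cell = x >> level
        cell_end = (cell + 1) << level       # first pixel after this cell
        d_exit = d0 + slope * cell_end       # ray depth at the cell boundary
        if d_exit < pyr[level][cell]:
            x = cell_end                     # above the Z plane: skip the cell
            if level + 1 < len(pyr):
                level += 1                   # try coarser cells next
        else:
            if level == 0:
                return x                     # below the Z plane at level 0: hit
            level -= 1                       # refine
    return None
```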
Importance sampling refresher
 Variance reduction for Monte Carlo
 Given a target function to integrate…
 Find a probability distribution function we can sample
 As close as possible to the target
 Generate samples according to the PDF
 Calculate the mean of function / PDF
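The steps above in a toy example of our own: integrating f(x) = 2x on [0, 1] (exact value 1). When the PDF is proportional to the target, the ratio f/p is constant and the estimator's variance drops to zero, which is exactly why a BRDF-shaped PDF is desirable:

```python
import math
import random

def importance_sample_integral(f, pdf, sample, n=20000, seed=1):
    """Monte Carlo estimate of an integral of f: draw x ~ pdf and
    average f(x) / pdf(x) over n samples."""
    rng = random.Random(seed)
    return sum(f(x) / pdf(x) for x in (sample(rng) for _ in range(n))) / n

# Use p(x) = 2x as the PDF for f(x) = 2x: a perfect match.
est = importance_sample_integral(
    f=lambda x: 2.0 * x,
    pdf=lambda x: 2.0 * x,
    sample=lambda rng: math.sqrt(rng.random()),  # inverse CDF of p(x) = 2x
)
```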
BRDF importance sampling
 Any BRDF
 We use GGX
 Every ray is sacred, every ray is great…
 Make them count!
 Low discrepancy Halton sequence
 Some rays go under the surface
 We re-generate them
 Could try the Distribution of Visible Normals [Heitz14]
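For reference, the standard GGX half-vector sampling used in this style of renderer (the inverse-CDF mapping popularized in [Karis13], with α = roughness²); this is the textbook formulation, not necessarily Frostbite's exact code, and low-discrepancy inputs like Halton points would be fed in as u1, u2:

```python
import math

def sample_ggx_half_vector(u1, u2, roughness):
    """Map uniform (u1, u2) to a GGX-distributed half-vector around the
    normal (tangent space, +Z up): cosθ = sqrt((1-u1)/(1+(α²-1)·u1))."""
    a = roughness * roughness
    cos_theta = math.sqrt((1.0 - u1) / (1.0 + (a * a - 1.0) * u1))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Rays whose reflected direction ends up under the surface would then be re-generated, as the slide notes.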
Ray reuse
 Neighboring pixels shoot useful rays
 Visibility might be different
 We assume it’s the same
 Reuse intersection results
Ray reuse
 Weigh by local BRDF
 Divide by original PDF
 Returned by ray-trace along with hit-point
 Neighbors can have vastly different properties
 Spikes in BRDF/PDF ratio
 Worse results than without reuse :(
1 ray, 1 resolve sample
half-resolution
1 ray, 4 resolve samples
half-resolution
Variance reduction
 This is what we’re solving: the specular part of the rendering equation
 Where
 fs is the BRDF
 Li is the incident radiance
 We integrate over the hemisphere
 We skip emitted light (Le) in the formulation
Multiply and divide by the same factor
… pre-integrate one of them
SIGGRAPH 2015: Advances in Real-Time Rendering course
… and do the rest with Monte Carlo.
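The equations on these three slides were images in the original deck; this is our reconstruction from the captions and the symbols defined above (fs, Li):

```latex
% What we're solving: the specular part of the rendering equation
L_o(\mathbf{v}) = \int_{\Omega} f_s(\mathbf{l}, \mathbf{v}) \,
                  L_i(\mathbf{l}) \, \cos\theta_l \, \mathrm{d}\mathbf{l}

% Multiply and divide by the same factor:
L_o(\mathbf{v}) =
  \frac{\int_{\Omega} f_s \, L_i \, \cos\theta_l \, \mathrm{d}\mathbf{l}}
       {\int_{\Omega} f_s \, \cos\theta_l \, \mathrm{d}\mathbf{l}}
  \cdot \int_{\Omega} f_s \, \cos\theta_l \, \mathrm{d}\mathbf{l}

% Pre-integrate the second factor (the environment-BRDF / FG term),
% and estimate the ratio with Monte Carlo; dividing by the sum of
% weights is the normalization in the resolve pseudocode:
\frac{\sum_{k} \frac{f_s(\mathbf{l}_k)\cos\theta_{l_k}}{p(\mathbf{l}_k)}
      \, L_i(\mathbf{l}_k)}
     {\sum_{k} \frac{f_s(\mathbf{l}_k)\cos\theta_{l_k}}{p(\mathbf{l}_k)}}
```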
Same thing in Simple English
 BRDF-weighted image contributions
 Normalization of BRDF weights
 Pre-integrated BRDF
… and pseudocode
result = 0.0
weightSum = 0.0
for pixel in neighborhood:
    weight = localBrdf(pixel.hit) / pixel.hitPdf
    result += color(pixel.hit) * weight
    weightSum += weight
result /= weightSum
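The same resolve as runnable code, reduced to scalar colors for clarity (the neighborhood tuples and `local_brdf` callable are our illustrative stand-ins):

```python
def resolve_pixel(neighborhood, local_brdf):
    """Ratio-estimator resolve: weight each neighbor's hit color by the
    local BRDF over the PDF that generated the ray, then normalize."""
    result, weight_sum = 0.0, 0.0
    for hit_color, hit_dir, hit_pdf in neighborhood:
        w = local_brdf(hit_dir) / hit_pdf
        result += hit_color * w
        weight_sum += w
    return result / weight_sum

# Two borrowed rays: the local BRDF favors direction "a" 3:1, so the
# bright "a" hit dominates; normalization cancels any common scale.
rays = [(1.0, "a", 0.5), (0.0, "b", 0.5)]
out = resolve_pixel(rays, local_brdf=lambda d: 3.0 if d == "a" else 1.0)
```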
1 ray, 1 resolve sample
no normalization
half-resolution
1 ray, 4 resolve samples
no normalization
half-resolution
1 ray, 4 resolve samples
with normalization
half-resolution
4 rays, 1 resolve sample
with normalization
half-resolution
1 ray, 1 resolve sample
with normalization
half-resolution
1 ray, 4 resolve samples
with normalization
half-resolution
4 rays, 4 resolve samples
with normalization
half-resolution
1 ray, 4 resolve samples
with normalization and temporal filter
half-resolution
Sparse raytracing
 Raytrace at a reduced resolution
 Reuse multiple rays at full-res
 Unique blend of rays for every pixel
 Automatic from BRDF
 Per-pixel normals and roughness preserved
 Weight normalization fixes gaps
1 ray, 4 resolve samples
with normalization and temporal filter
half-res trace; half-res resolve
1 ray, 4 resolve samples
with normalization and temporal filter
half-res trace; full-res resolve
Temporal reprojection
 Reprojection along G-Buffer depth ‘smears’
 Add reflection depth
 Average from local rays
 Proper reflection parallax
Importance sampling bias
 Lots of noise from BRDF tail
 Shift samples toward mirror direction
 Still need accurate PDF values
 Truncated distribution sampling
float2 u = halton(sampleIdx);
u.x = lerp(u.x, 1.0, bias);
importanceSample(u);
 Different normalization constant
 Our variance reduction re-normalizes!
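The `lerp(u.x, 1.0, bias)` trick remaps the sample coordinate from [0, 1] to [bias, 1], truncating the part of the distribution the noisy tail samples came from; the PDF is then wrong by a normalization constant, which the resolve's weight normalization absorbs. A sketch of just the remap:

```python
def biased_sample(u, bias):
    """lerp(u, 1.0, bias): remap u from [0, 1] to [bias, 1], so the
    sampled distribution is truncated toward the mirror direction."""
    return u + (1.0 - u) * bias

# With bias 0.7 the samples only ever cover [0.7, 1.0].
samples = [biased_sample(u / 10.0, 0.7) for u in range(11)]
```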

Filtered importance sampling
 Pre-filter image pyramid
 Estimate footprint of a cone at intersection
 No actual cone tracing
 Mip determined by log function fit
 Roughness
 Distance to hit
 Elongation
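The deck does not give the fitted log function, so this is a hypothetical stand-in showing only the shape of the idea: the lobe is treated as a cone whose screen-space radius grows with roughness and hit distance, and the mip whose texel footprint matches that radius is chosen. All names and the lobe-width proxy are our assumptions:

```python
import math

def reflection_mip(roughness, hit_distance, pixels_per_unit, max_mip=8.0):
    """Pick a prefiltered mip from an assumed cone footprint at the hit."""
    cone_tangent = roughness * roughness          # crude lobe-width proxy
    radius_px = cone_tangent * hit_distance * pixels_per_unit
    # Each mip doubles the texel footprint, hence the log2.
    return min(max_mip, max(0.0, math.log2(max(radius_px, 1.0))))
```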
Filter bias
 Counter sampling bias with filter bias
 Same parameter
 Tuned for similar look across the range
 Improves performance
 Better spatial coherency
 Smaller mips
Bias 0.0
Bias 0.7
Multi-pixel resolve
 Resolve four pixels at a time
 Using the same rays
 Four running color and weight sums
 Four times the VGPRs
 Two-three waves enough in practice
 Same four color buffer UVs
 Different mips :(
 Find min and max mips, interpolate samples
 Two color samples instead of four :)
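The min/max-mip trick can be sketched as follows, with scalar "colors" and a fake mip chain standing in for texture fetches (`two_fetch_gather` is our illustrative name; the real shader lerps between two `SampleLevel` results):

```python
def two_fetch_gather(mips_needed, fetch):
    """Instead of one mip fetch per pixel, fetch only the min and max
    mip once and lerp each pixel's sample between them."""
    lo, hi = min(mips_needed), max(mips_needed)
    c_lo, c_hi = fetch(lo), fetch(hi)
    if hi == lo:
        return [c_lo for _ in mips_needed]
    return [c_lo + (c_hi - c_lo) * (m - lo) / (hi - lo) for m in mips_needed]

# A fake mip chain whose "color" equals its mip level, so the lerp is exact.
samples = two_fetch_gather([1.0, 2.0, 2.5, 3.0], fetch=lambda m: m)
```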
Mip anchor interpolation
(Diagram: 1 mip fetch vs. 4 mip fetches vs. 2 mip fetches + interpolation)
Performance
 PS4 timings (Frostbite testbed)
 1600 x 900; all pixels reflective
 HQ rays for “Disney” roughness < 20%
 Bias 0.7
 All passes use compute, can run async.
Resolve samples | Rays / half-res pixel | Effective samples/pixel | Tile classify | Ray alloc | Linear trace | Hi-Z trace | Resolve | Temporal | Total
4 | 1 | 4 | 0.16ms | 0.24ms | 0.20ms | 0.37ms | 0.81ms | 0.30ms | 2.19ms
4 | 2 | 8 | 0.16ms | 0.34ms | 0.34ms | 0.65ms | 1.46ms | 0.30ms | 3.34ms
1 | 4 | 4 | 0.16ms | 1.06ms | 0.61ms | 0.91ms | 0.91ms | 0.33ms | 4.41ms
Video
Conclusion
 All requirements fulfilled
 Accuracy, per-pixel normal, roughness, etc.
 Tiny bright highlights still result in noise
 Can we detect and sample them more?
 Adaptive Multiple Importance Sampling?
 Track variance and blur where it’s high?
Thanks!
 Questions?
 tomasz.stachowiak@frostbite.com
 @h3r2tic
 By the way, we’re hiring!
References
 [Heitz14] Eric Heitz, Eugene D'Eon, "Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals", https://hal.inria.fr/hal-00996995
 [Hermanns15] Lukas Hermanns, "Screen Space Cone Tracing for Glossy Reflections", http://publica.fraunhofer.de/documents/N-336466.html
 [Karis13] Brian Karis, "Real Shading in Unreal Engine 4", http://blog.selfshadow.com/publications/s2013-shading-course/
 [Karis14] Brian Karis, "High-Quality Temporal Supersampling", http://advances.realtimerendering.com/s2014/
 [McGuire14] Morgan McGuire, Michael Mara, "Efficient GPU Screen-Space Ray Tracing", http://jcgt.org/published/0003/04/04/
 [Pearce15] William Pearce, "Screen Space Glossy Reflections", http://roar11.com/2015/07/screen-space-glossy-reflections/
 [Tokuyoshi15] Yusuke Tokuyoshi, "Specular Lobe-Aware Filtering and Upsampling for Interactive Indirect Illumination", http://www.jp.square-enix.com/info/library/
 [Uludag14] Yasin Uludag, "Hi-Z Screen-Space Cone-Traced Reflections", GPU Pro 5
 [Valient14] Michal Valient, "Reflections and Volumetrics of Killzone Shadow Fall", http://advances.realtimerendering.com/s2014/
Bonus slides
Previous work
 Lobe-aware filtering [Tokuyoshi15]
No contact hardening :(
Previous work
 Hi-Z Cone Tracing [Uludag14]
Can’t trace this
Overblurring at grazing angles
 Cones poorly fit specular lobes at grazing angles
 Lobes become anisotropic in shape
 Cone fit to wider (azimuthal) angle over-blurs
 Fit it to polar angle
 Effectively: shrink the cone at grazing angles
Filter shrinking due to elongation
 Found a close fit in Mathematica
 Also came up with an ad-hoc one
 Ad-hoc close enough in testing
specularConeTangent *=
    lerp(saturate(NdotV * 2.0), 1.0, sqrt(roughness));
Filter shrinking due to elongation
No filter | Naive filter | Adjusted filter
Pre-integrated FG note
 Remember this?
 Multiply by FG after temporal
 We do it when applying SSR to the screen
 Reduces smearing and noise
Multi-pixel resolve jittering
 Ray reuse across 2x2 quads == 2x2 noise
 Makes Temporal AA unhappy!
 2x2 blocks look like features, not aliasing
 Spread out the target pixels
 Jitter temporally to hide artifacts
Variance reduction for ray reuse
 Initial idea: Multiple Importance Sampling
 Treat neighboring pixels as generation strategies
 Accurate, unbiased results
 Expensive
 ALU and VGPR heavy
Neighborhood clamping
 Similar to Temporal AA
 Tuned for some smearing over noise
 Can’t kill all the lag anyway
 Reflection color is from previous frame!
 Expand the color bounding box
 Tiled, loaded from LDS
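A scalar sketch of the clamp (per channel in practice; `clamp_to_neighborhood` and the `expand` parameter are our illustrative names, and the real implementation builds the box from an LDS tile):

```python
def clamp_to_neighborhood(history, neighborhood, expand=0.5):
    """Clamp the reprojected history color to the min/max box of the
    current frame's neighborhood; expanding the box trades a little
    extra lag/smearing for less flicker."""
    lo, hi = min(neighborhood), max(neighborhood)
    margin = (hi - lo) * expand
    return min(max(history, lo - margin), hi + margin)
```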
Common materials
SSR
Common materials
SSR
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...Electronic Arts / DICE
 
Creativity of Rules and Patterns: Designing Procedural Systems
Creativity of Rules and Patterns: Designing Procedural SystemsCreativity of Rules and Patterns: Designing Procedural Systems
Creativity of Rules and Patterns: Designing Procedural SystemsElectronic Arts / DICE
 
Shiny Pixels and Beyond: Real-Time Raytracing at SEED
Shiny Pixels and Beyond: Real-Time Raytracing at SEEDShiny Pixels and Beyond: Real-Time Raytracing at SEED
Shiny Pixels and Beyond: Real-Time Raytracing at SEEDElectronic Arts / DICE
 
Future Directions for Compute-for-Graphics
Future Directions for Compute-for-GraphicsFuture Directions for Compute-for-Graphics
Future Directions for Compute-for-GraphicsElectronic Arts / DICE
 
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
Physically Based Sky, Atmosphere and Cloud Rendering in FrostbitePhysically Based Sky, Atmosphere and Cloud Rendering in Frostbite
Physically Based Sky, Atmosphere and Cloud Rendering in FrostbiteElectronic Arts / DICE
 
High Dynamic Range color grading and display in Frostbite
High Dynamic Range color grading and display in FrostbiteHigh Dynamic Range color grading and display in Frostbite
High Dynamic Range color grading and display in FrostbiteElectronic Arts / DICE
 
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
4K Checkerboard in Battlefield 1 and Mass Effect AndromedaElectronic Arts / DICE
 
FrameGraph: Extensible Rendering Architecture in Frostbite
FrameGraph: Extensible Rendering Architecture in FrostbiteFrameGraph: Extensible Rendering Architecture in Frostbite
FrameGraph: Extensible Rendering Architecture in FrostbiteElectronic Arts / DICE
 
Photogrammetry and Star Wars Battlefront
Photogrammetry and Star Wars BattlefrontPhotogrammetry and Star Wars Battlefront
Photogrammetry and Star Wars BattlefrontElectronic Arts / DICE
 

Plus de Electronic Arts / DICE (20)

GDC2019 - SEED - Towards Deep Generative Models in Game Development
GDC2019 - SEED - Towards Deep Generative Models in Game DevelopmentGDC2019 - SEED - Towards Deep Generative Models in Game Development
GDC2019 - SEED - Towards Deep Generative Models in Game Development
 
SIGGRAPH 2010 - Style and Gameplay in the Mirror's Edge
SIGGRAPH 2010 - Style and Gameplay in the Mirror's EdgeSIGGRAPH 2010 - Style and Gameplay in the Mirror's Edge
SIGGRAPH 2010 - Style and Gameplay in the Mirror's Edge
 
SEED - Halcyon Architecture
SEED - Halcyon ArchitectureSEED - Halcyon Architecture
SEED - Halcyon Architecture
 
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray TracingSyysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing
 
Khronos Munich 2018 - Halcyon and Vulkan
Khronos Munich 2018 - Halcyon and VulkanKhronos Munich 2018 - Halcyon and Vulkan
Khronos Munich 2018 - Halcyon and Vulkan
 
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time RaytracingCEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing
 
CEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism
CEDEC 2018 - Functional Symbiosis of Art Direction and ProceduralismCEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism
CEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism
 
SIGGRAPH 2018 - PICA PICA and NVIDIA Turing
SIGGRAPH 2018 - PICA PICA and NVIDIA TuringSIGGRAPH 2018 - PICA PICA and NVIDIA Turing
SIGGRAPH 2018 - PICA PICA and NVIDIA Turing
 
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time RaytracingSIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
 
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...
 
Creativity of Rules and Patterns: Designing Procedural Systems
Creativity of Rules and Patterns: Designing Procedural SystemsCreativity of Rules and Patterns: Designing Procedural Systems
Creativity of Rules and Patterns: Designing Procedural Systems
 
Shiny Pixels and Beyond: Real-Time Raytracing at SEED
Shiny Pixels and Beyond: Real-Time Raytracing at SEEDShiny Pixels and Beyond: Real-Time Raytracing at SEED
Shiny Pixels and Beyond: Real-Time Raytracing at SEED
 
Future Directions for Compute-for-Graphics
Future Directions for Compute-for-GraphicsFuture Directions for Compute-for-Graphics
Future Directions for Compute-for-Graphics
 
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
Physically Based Sky, Atmosphere and Cloud Rendering in FrostbitePhysically Based Sky, Atmosphere and Cloud Rendering in Frostbite
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
 
High Dynamic Range color grading and display in Frostbite
High Dynamic Range color grading and display in FrostbiteHigh Dynamic Range color grading and display in Frostbite
High Dynamic Range color grading and display in Frostbite
 
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
 
FrameGraph: Extensible Rendering Architecture in Frostbite
FrameGraph: Extensible Rendering Architecture in FrostbiteFrameGraph: Extensible Rendering Architecture in Frostbite
FrameGraph: Extensible Rendering Architecture in Frostbite
 
Photogrammetry and Star Wars Battlefront
Photogrammetry and Star Wars BattlefrontPhotogrammetry and Star Wars Battlefront
Photogrammetry and Star Wars Battlefront
 
Rendering Battlefield 4 with Mantle
Rendering Battlefield 4 with MantleRendering Battlefield 4 with Mantle
Rendering Battlefield 4 with Mantle
 
Mantle for Developers
Mantle for DevelopersMantle for Developers
Mantle for Developers
 

Dernier

Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsSafe Software
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...Bert Jan Schrijver
 
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxReal-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxRTS corp
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Rob Geurden
 
Understanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM ArchitectureUnderstanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM Architecturerahul_net
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringHironori Washizaki
 
What’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesWhat’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesVictoriaMetrics
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptxVinzoCenzo
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfRTS corp
 
Comparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfComparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfDrew Moseley
 
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdfAndrey Devyatkin
 
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxThe Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxRTS corp
 
Salesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZSalesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZABSYZ Inc
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLionel Briand
 
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...OnePlan Solutions
 
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesAmazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesKrzysztofKkol1
 
eSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolseSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolsosttopstonverter
 
Keeping your build tool updated in a multi repository world
Keeping your build tool updated in a multi repository worldKeeping your build tool updated in a multi repository world
Keeping your build tool updated in a multi repository worldRoberto Pérez Alcolea
 
Ronisha Informatics Private Limited Catalogue
Ronisha Informatics Private Limited CatalogueRonisha Informatics Private Limited Catalogue
Ronisha Informatics Private Limited Catalogueitservices996
 

Dernier (20)

Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data Streams
 
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
JavaLand 2024 - Going serverless with Quarkus GraalVM native images and AWS L...
 
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptxReal-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
Real-time Tracking and Monitoring with Cargo Cloud Solutions.pptx
 
Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...Simplifying Microservices & Apps - The art of effortless development - Meetup...
Simplifying Microservices & Apps - The art of effortless development - Meetup...
 
Understanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM ArchitectureUnderstanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM Architecture
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their Engineering
 
What’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesWhat’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 Updates
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptx
 
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdfEnhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
Enhancing Supply Chain Visibility with Cargo Cloud Solutions.pdf
 
Comparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfComparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdf
 
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
 
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptxThe Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
The Role of IoT and Sensor Technology in Cargo Cloud Solutions.pptx
 
Salesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZSalesforce Implementation Services PPT By ABSYZ
Salesforce Implementation Services PPT By ABSYZ
 
Large Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and RepairLarge Language Models for Test Case Evolution and Repair
Large Language Models for Test Case Evolution and Repair
 
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
 
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesAmazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
 
eSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolseSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration tools
 
Keeping your build tool updated in a multi repository world
Keeping your build tool updated in a multi repository worldKeeping your build tool updated in a multi repository world
Keeping your build tool updated in a multi repository world
 
Ronisha Informatics Private Limited Catalogue
Ronisha Informatics Private Limited CatalogueRonisha Informatics Private Limited Catalogue
Ronisha Informatics Private Limited Catalogue
 

Stochastic Screen-Space Reflections

  • 1. SIGGRAPH 2015: Advances in Real-Time Rendering course Stochastic Screen- Space Reflections TOMASZ STACHOWIAK ELECTRONIC ARTS / FROSTBITE YASIN ULUDAG ELECTRONIC ARTS / DICE
  • 2. Agenda  Motivation and requirements  Previous work  Core algorithm  Secret sauce
  • 3. Motivation and requirements
  • 4–6. (full-screen result images)
  • 7–10. Our requirements  Sharp and blurry reflections  Contact hardening  Specular elongation  Per-pixel roughness and normal
  • 11. Previous work
  • 12. Previous work  Basic mirror-only SSR  Shoot rays from G-Buffer  Normals just work  Raymarch depth  Return color at hit point  Final lighting of last frame  Reprojected
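The mirror-only raymarch on this slide can be sketched in a toy 1-D setting. This is an illustration, not the Frostbite code: `depth_buffer` is a hypothetical array indexed by screen x, the ray's second coordinate plays the role of view-space depth, and a real implementation marches in 2-D and reprojects into last frame's lit buffer.

```python
def linear_raymarch(origin, direction, depth_buffer, step=0.05, max_steps=256, thickness=0.1):
    """Toy screen-space march: advance the ray until it dips just behind
    the depth surface, then report the hit point (where the color lookup
    into the reprojected previous frame would happen)."""
    x, y = origin          # x: screen coordinate in [0, 1); y: ray depth
    dx, dy = direction
    for _ in range(max_steps):
        x += step * dx
        y += step * dy
        if not 0.0 <= x < 1.0:
            return None    # ray left the screen: miss
        scene_z = depth_buffer[int(x * len(depth_buffer))]
        if scene_z < y < scene_z + thickness:
            return (x, y)  # ray went just behind the surface: hit
    return None
```

The `thickness` test is what lets the marcher step past thin occluders instead of latching onto every depth discontinuity.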
  • 13–15. Previous work  Killzone Shadow Fall [Valient14] (screenshot annotations: "???", "No hardening")
  • 16. Previous work  Filtered Importance Sampling
  • 17. Our approach
  • 18. Our approach  BRDF importance sampling
  • 19. Our approach  Hit point reuse across neighbors
  • 20. Our approach  Prefiltered samples  Weighted by each BRDF
  • 21. Our approach - screenshots
  • 22–31. (result screenshots)
  • 32. Algorithm breakdown  Tile classification → Ray allocation → Cheap raytracing → HQ raytracing → Color resolve → Temporal filter
  • 33. Tile-based classification  Split the screen into tiles  Shoot tracer rays at 1/8th resolution  Estimate tile importance  Hit anything at all?  Perceptual variance of hit values  Allocate ray count per tile  Between user-specified min and max
  • 34. Ray classification  Classify each pixel based on roughness  Expensive rays  Hierarchical raytracing  Exact intersection  Cheap rays  Simple linear march  May skip thin objects
  • 35–41. Hierarchical tracing  Stackless ray walk of the min-Z pyramid: level = 0; while (level > -1) { step through current cell; if (above Z plane) ++level; if (below Z plane) --level; }
  • 42. Importance sampling refresher  Variance reduction for Monte Carlo  Given a target function to integrate…  Find a probability density function we can sample  As close as possible to the target  Generate samples according to the PDF  Calculate the mean of function / PDF
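The refresher above in a few lines of Python, on a toy integrand rather than a BRDF: integrate f(x) = x² over [0, 1] (exact value 1/3) by sampling a PDF proportional to the integrand and averaging f/pdf.

```python
import random

def importance_sample_integral(n=20000, seed=1):
    """Monte Carlo with importance sampling: draw samples from a PDF
    close to the target function, then average f(x) / pdf(x).

    Target: f(x) = x^2 on [0, 1]   (exact integral: 1/3)
    PDF:    p(x) = 2x, sampled by inverting its CDF: x = sqrt(u)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random() ** 0.5       # sample proportional to p(x) = 2x
        total += (x * x) / (2.0 * x)  # f(x) / p(x)
    return total / n
```

Because p tracks f closely, the estimator's variance is far lower than uniform sampling with the same ray count, which is exactly why the BRDF is used as the sampling distribution on the next slide.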
  • 43. BRDF importance sampling  Any BRDF  We use GGX  Every ray is sacred, every ray is great…  Make them count!  Low-discrepancy Halton sequence  Some rays go under the surface  We re-generate them  Could try Distribution of Visible Normals [Heitz14]
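A common way to pair a Halton sequence with GGX sampling is sketched below. This uses the standard GGX NDF inversion with the alpha = roughness² ("Disney"-style) mapping in tangent space around +Z; it is an assumption-laden sketch, not Frostbite's shader.

```python
import math

def halton(index, base):
    """Low-discrepancy Halton sequence (radical inverse in the given base)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def sample_ggx_half_vector(u1, u2, roughness):
    """Sample a microfacet half-vector from the GGX NDF around +Z.
    If the reflected ray built from it goes under the surface, the
    slides simply re-generate it with the next sample."""
    a = roughness * roughness
    cos_theta = math.sqrt((1.0 - u1) / (1.0 + (a * a - 1.0) * u1))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Sampling visible normals [Heitz14], which the slide mentions as an alternative, would avoid most of the under-surface rejections.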
  • 44. Ray reuse  Neighboring pixels shoot useful rays  Visibility might be different  We assume it’s the same  Reuse intersection results
  • 45. Ray reuse  Weigh by the local BRDF  Divide by the original PDF  Returned by the ray-trace along with the hit point  Neighbors can have vastly different properties  Spikes in the BRDF/PDF ratio  Worse results than without reuse :(
  • 46. (comparison screenshot)
  • 47. 1 ray, 1 resolve sample (half resolution)
  • 48. 1 ray, 4 resolve samples (half resolution)
  • 49. Variance reduction  This is what we’re solving  Where  fs is the BRDF  Li is the incident radiance  We integrate over the hemisphere  We skip emitted light (Le) in the formulation  (annotation: Variance!)
  • 50. Multiply and divide by the same factor…
  • 51. … pre-integrate one of them…
  • 52. … and do the rest with Monte Carlo.
  • 53. The same thing in Simple English  BRDF-weighted image contributions  Normalization of BRDF weights  Pre-integrated BRDF
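The split these slides describe can be written out. This is a reconstruction from the bullet text (the equation itself is an image in the deck): multiply and divide the reflection integral by the BRDF's hemispherical integral, pre-integrate that factor, and estimate the remaining ratio with the same BRDF-weighted samples.

```latex
L_o(v) = \int_\Omega f_s(l, v)\, L_i(l)\, \cos\theta_l \,\mathrm{d}l
       = \underbrace{\int_\Omega f_s \cos\theta_l \,\mathrm{d}l}_{\text{pre-integrated BRDF}}
         \;\cdot\;
         \underbrace{\frac{\int_\Omega f_s\, L_i \cos\theta_l \,\mathrm{d}l}
                          {\int_\Omega f_s \cos\theta_l \,\mathrm{d}l}}_{\text{Monte Carlo:}\;
          \frac{\sum_k w_k\, L_i(l_k)}{\sum_k w_k},\quad
          w_k = \frac{f_s(l_k, v)\,\cos\theta_{l_k}}{p(l_k)}}
```

The ratio estimator is biased but has much lower variance than the plain sum, because spikes in the BRDF/PDF weights appear in both numerator and denominator and largely cancel.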
  • 54. … and pseudocode: result = 0.0; weightSum = 0.0; for pixel in neighborhood: weight = localBrdf(pixel.hit) / pixel.hitPdf; result += color(pixel.hit) * weight; weightSum += weight; result /= weightSum
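The slide's pseudocode, made runnable. `neighborhood` here is a hypothetical list of per-pixel ray records (hit direction, the PDF the generating pixel used, and the color fetched at the hit); `local_brdf` evaluates this pixel's BRDF (with the cosine folded in) for a reused direction.

```python
def resolve_pixel(neighborhood, local_brdf):
    """Weighted resolve with ray reuse: weight each neighbor's hit by
    this pixel's BRDF over the neighbor's generation PDF, then divide
    by the weight sum (the normalization from the previous slides)."""
    result = 0.0
    weight_sum = 0.0
    for hit_direction, hit_pdf, hit_color in neighborhood:
        weight = local_brdf(hit_direction) / hit_pdf
        result += hit_color * weight
        weight_sum += weight
    return result / weight_sum
```

The final division is what tames the BRDF/PDF spikes called out on slide 45 and also fills the gaps left by sparse tracing.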
  • 55. 1 ray, 1 resolve sample; no normalization (half resolution)
  • 56. 1 ray, 4 resolve samples; no normalization (half resolution)
  • 57. 1 ray, 4 resolve samples; with normalization (half resolution)
  • 58. 4 rays, 1 resolve sample; with normalization (half resolution)
  • 59. 1 ray, 1 resolve sample; with normalization (half resolution)
  • 60. 1 ray, 4 resolve samples; with normalization (half resolution)
  • 61. 4 rays, 4 resolve samples; with normalization (half resolution)
  • 62. 1 ray, 4 resolve samples; with normalization and temporal filter (half resolution)
  • 63. Sparse raytracing  Raytrace at a reduced resolution  Reuse multiple rays at full resolution  Unique blend of rays for every pixel  Automatic from the BRDF  Per-pixel normals and roughness preserved  Weight normalization fixes gaps
  • 64. 1 ray, 4 resolve samples; with normalization and temporal filter (half-res trace; half-res resolve)
  • 65. 1 ray, 4 resolve samples; with normalization and temporal filter (half-res trace; full-res resolve)
  • 66. Temporal reprojection  Reprojection along G-Buffer depth ‘smears’  Add reflection depth  Average from local rays  Proper reflection parallax
  • 67. Importance sampling bias  Lots of noise from the BRDF tail  Shift samples toward the mirror direction  Still need accurate PDF values  Truncated distribution sampling: float2 u = halton(sampleIdx); u.x = lerp(u.x, 1.0, bias); importanceSample(u);  Different normalization constant  Our variance reduction re-normalizes!
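The effect of the lerp can be seen directly on the GGX inversion. The slide lerps u.x toward 1.0 under its own convention; with the Karis-style inversion used in this sketch the mirror direction sits at u1 = 0, so the equivalent lerp target is 0. Either way, the random number is squeezed toward the mirror end before inversion, truncating the distribution's tail.

```python
import math

def lerp(a, b, t):
    return a + (b - a) * t

def ggx_cos_theta(u1, roughness):
    """Cosine of the sampled GGX half-vector angle (alpha = roughness^2)."""
    a = roughness * roughness
    return math.sqrt((1.0 - u1) / (1.0 + (a * a - 1.0) * u1))

def biased_cos_theta(u1, roughness, bias):
    """Truncated-distribution sampling: squeeze the random number toward
    the mirror end of the inversion before sampling. bias = 0 leaves the
    distribution untouched; larger bias concentrates samples near the
    mirror direction (cos_theta -> 1)."""
    return ggx_cos_theta(lerp(u1, 0.0, bias), roughness)
```

Because the samples no longer follow the original PDF exactly, the normalization constant changes; the weight-sum division in the resolve re-normalizes, which is why the slides can get away with it.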
  • 68. Filtered importance sampling  Pre-filter the image pyramid  Estimate the footprint of a cone at the intersection  No actual cone tracing  Mip determined by a log-function fit  Roughness  Distance to hit  Elongation
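A hypothetical fit in the spirit of this slide: grow a cone footprint with roughness and ray length, then pick the prefiltered mip with a log2. The actual Frostbite curve was fitted to roughness, hit distance, and lobe elongation; the cone-tangent term below is an illustrative stand-in.

```python
import math

def cone_footprint_mip(roughness, hit_distance, texels_per_unit):
    """Pick a mip of the prefiltered color pyramid from an estimated
    cone footprint at the hit point (no actual cone tracing)."""
    cone_tangent = roughness * roughness          # assumed lobe half-angle tangent
    footprint_texels = max(1.0, cone_tangent * hit_distance * texels_per_unit)
    return math.log2(footprint_texels)            # each mip doubles texel size
```

Mirror-like surfaces and short rays land in mip 0; rough surfaces and distant hits read heavily prefiltered mips, which is what replaces an explicit filter over many rays.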
  • 69. Filter bias  Counter sampling bias with filter bias  Same parameter  Tuned for a similar look across the range  Improves performance  Better spatial coherency  Smaller mips
  • 70. Bias 0.0
  • 71. Bias 0.7
  • 72. Multi-pixel resolve  Resolve four pixels at a time  Using the same rays  Four running color and weight sums  Four times the VGPRs  Two to three waves enough in practice  Same four color-buffer UVs  Different mips :(  Find min and max mips, interpolate samples  Two color samples instead of four :)
  • 73. Mip anchor interpolation (image labels: 1 mip fetch; 2 mip fetches + interpolation; 4 mip fetches)
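The two-anchor trick from slides 72–73, sketched in Python: instead of four different mip fetches for a 2x2 quad sharing the same UV, fetch only the min and max mip and linearly interpolate each pixel's value between those two anchors. `fetch(mip)` stands in for a texture fetch at the shared UV.

```python
def resolve_with_mip_anchors(mips_needed, fetch):
    """Approximate per-pixel mip fetches with two anchor fetches.
    `mips_needed` lists the ideal mip per pixel; the result linearly
    interpolates each pixel between the min-mip and max-mip colors."""
    lo, hi = min(mips_needed), max(mips_needed)
    c_lo, c_hi = fetch(lo), fetch(hi)
    if hi == lo:
        return [c_lo for _ in mips_needed]     # all pixels agree: one fetch suffices
    return [c_lo + (c_hi - c_lo) * (m - lo) / (hi - lo) for m in mips_needed]
```

This halves the color fetches versus four exact trilinear lookups, at the cost of approximating the in-between mips.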
  • 74. Performance  PS4 timings (Frostbite testbed)  1600 x 900; all pixels reflective  HQ rays for “Disney” roughness < 20%  Bias 0.7  All passes use compute, can run async.

    Resolve samples | Rays / half-res pixel | Effective samples/pixel | Tile classify | Ray alloc | Linear trace | Hi-Z trace | Resolve | Temporal | Total
    4               | 1                     | 4                       | 0.16ms        | 0.24ms    | 0.20ms       | 0.37ms     | 0.81ms  | 0.30ms   | 2.19ms
    4               | 2                     | 8                       | 0.16ms        | 0.34ms    | 0.34ms       | 0.65ms     | 1.46ms  | 0.30ms   | 3.34ms
    1               | 4                     | 4                       | 0.16ms        | 1.06ms    | 0.61ms       | 0.91ms     | 0.91ms  | 0.33ms   | 4.41ms
  • 75. Video
  • 76. Conclusion  All requirements fulfilled  Accuracy, per-pixel normal, roughness, etc.  Tiny bright highlights still result in noise  Can we detect and sample them more?  Adaptive Multiple Importance Sampling?  Track variance and blur where it’s high?
  • 77. Thanks!  Questions?  tomasz.stachowiak@frostbite.com  @h3r2tic  By the way, we’re hiring!
  • 78. References
    [Heitz14] Eric Heitz, Eugene D'Eon, "Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals", https://hal.inria.fr/hal-00996995
    [Hermanns15] Lukas Hermanns, "Screen Space Cone Tracing for Glossy Reflections", http://publica.fraunhofer.de/documents/N-336466.html
    [Karis13] Brian Karis, "Real Shading in Unreal Engine 4", http://blog.selfshadow.com/publications/s2013-shading-course/
    [Karis14] Brian Karis, "High-Quality Temporal Supersampling", http://advances.realtimerendering.com/s2014/
    [McGuire14] Morgan McGuire, Michael Mara, "Efficient GPU Screen-Space Ray Tracing", http://jcgt.org/published/0003/04/04/
    [Pearce15] William Pearce, "Screen Space Glossy Reflections", http://roar11.com/2015/07/screen-space-glossy-reflections/
    [Tokuyoshi15] Yusuke Tokuyoshi, "Specular Lobe-Aware Filtering and Upsampling for Interactive Indirect Illumination", http://www.jp.square-enix.com/info/library/
    [Uludag14] Yasin Uludag, "Hi-Z Screen-Space Cone-Traced Reflections", GPU Pro 5
    [Valient14] Michal Valient, "Reflections and Volumetrics of Killzone Shadow Fall", http://advances.realtimerendering.com/s2014/
  • 79. SIGGRAPH 2015: Advances in Real-Time Rendering course Bonus slides
  • 80. SIGGRAPH 2015: Advances in Real-Time Rendering course Previous work  Lobe-aware filtering [Tokuyoshi15] No hardening :(
  • 81. SIGGRAPH 2015: Advances in Real-Time Rendering course Previous work  Hi-Z Cone Tracing [Uludag14]
  • 82. SIGGRAPH 2015: Advances in Real-Time Rendering course Previous work  Hi-Z Cone Tracing [Uludag14] ??? Can’t trace this
  • 83. SIGGRAPH 2015: Advances in Real-Time Rendering course Overblurring at grazing angles  Cones poorly fit specular lobes at grazing angles  Lobes become anisotropic in shape  Cone fit to wider (azimuthal) angle over-blurs  Fit it to polar angle  Effectively: shrink the cone at grazing angles
  • 84. SIGGRAPH 2015: Advances in Real-Time Rendering course Filter shrinking due to elongation  Found a close fit in Mathematica  Also came up with an ad-hoc one  Ad-hoc close enough in testing specularConeTangent *= lerp(saturate(NdotV * 2.0), 1.0, sqrt(roughness));
  • 85. SIGGRAPH 2015: Advances in Real-Time Rendering course Filter shrinking due to elongation No filter Adjusted filterNaive filter
  • 86. SIGGRAPH 2015: Advances in Real-Time Rendering course Pre-integrated FG note  Remember this?  Multiply by FG after temporal  We do it when applying SSR to the screen  Reduces smearing and noise
  • 87. SIGGRAPH 2015: Advances in Real-Time Rendering course Multi-pixel resolve jittering  Ray reuse across 2x2 quads == 2x2 noise  Makes Temporal AA unhappy!  2x2 blocks look like features, not aliasing  Spread out the target pixels  Jitter temporally to hide artifacts
  • 88. SIGGRAPH 2015: Advances in Real-Time Rendering course Variance reduction for ray reuse  Initial idea: Multiple Importance Sampling  Treat neighboring pixels as generation strategies  Accurate, unbiased results  Expensive  ALU and VGPR heavy
  • 89. SIGGRAPH 2015: Advances in Real-Time Rendering course Neighborhood clamping  Similar to Temporal AA  Tuned for some smearing over noise  Can’t kill all the lag anyway  Reflection color is from previous frame!  Expand the color bounding box  Tiled, loaded from LDS
  • 90. SIGGRAPH 2015: Advances in Real-Time Rendering course Common materials SSR
  • 91. SIGGRAPH 2015: Advances in Real-Time Rendering course Common materials SSR

Editor's notes

  1. Here’s a concept art piece from Mirror’s Edge Catalyst. As you can see, pretty much everything here is reflective. It is worth noting that while the surfaces are generally smooth, not all reflections are mirror-like. Notice the stretched reflection on the left, and the blurry reflections in the walls on the right.
  2. Here we have a photograph of one of our conference rooms at Frostbite. Interestingly, when looking at this picture, you don’t perceive reflections per se; instead you see some shadowing of the outdoor light. This highlights an important feature of SSR which is often underplayed: it provides specular occlusion for local and distant reflections. Additionally, in this scene, there are plenty of rougher reflections, as well as variations in the reflectivity, especially on the floor. Notice also how the reflections become sharper as they approach the contact points of the surfaces they reflect in, for example the chair legs.
  3. This photo represents something that you might see quite often in a racing game. But it also highlights a few important features we would like to support. The reflections stretch vertically; there is also high frequency variation in surface roughness and normals.
  4. So now we can formulate a set of requirements that we would like to have for our SSR. First of all, it should support sharp and blurry reflections. But those aren’t realistic blurry reflections…
  5. … we would like them to sharpen as they come in contact with the reflecting surface.
  6. They should also stretch vertically. If you’re using a microfacet BRDF, your analytical lights do this already. SSR should also exhibit this physical phenomenon, as it’s important for realism.
  7. Finally, we would like to support per-pixel normal and roughness variation.
  8. Let’s take a look at a few SSR techniques, and see if and how they fulfill the requirements we want here. We don’t have the time to go through all techniques presented so far, so we will mostly look at the methods which inspired our work.
  9. Let’s start with a quick refresher of the basic SSR algorithm. There were a few publications detailing a possible implementation, so I won’t go into much detail. If you’d like to cover those basics more, check out Morgan McGuire and Michael Mara’s paper, for example [McGuire14]. Anyway, the basic algorithm only works with mirror-like reflections, so ray generation is trivial. We just take the G-Buffer’s depth and normals, and calculate the mirror directions. Then we ray-march the same depth buffer using any algorithm, for example by taking constant-sized steps in screen-space. When we intersect a surface, we must fetch the color at the intersection. There are a few options here, but the most popular one is re-projecting the previous frame’s final lighting, and returning that.
  10. Killzone Shadow Fall had an interesting approach for glossy SSR, similar to Image Space Gathering. They would first generate a sharp reflection image for all the pixels on the screen. Then they would create a blur pyramid of this sharp reflection, and choose the blurriness based on surface roughness. This guarantees that there are no discontinuities, although there could be other artifacts. For one, it isn’t clear what level of blurriness should correspond to a given roughness value.
  11. And then there’s the problem of normals. Meaning, they basically disappear on rougher surfaces, as the blur is not feature-aware.
  12. Plus there’s the lack of contact hardening, as the blurriness is only derived from roughness, not object proximity.
  13. An approach which does satisfy all the requirements is importance sampling, with or without filtering. Instead of blurring a sharp reflection, this approach produces a blurry reflection by stochastically sampling the directions. For a rough surface, the directions will be more dispersed, and for a smooth surface, they will go in a narrow beam. The filtered variant uses a pre-blurred color buffer to sample from. The only problem with this approach is that unless you use a lot of rays, it will be very noisy.
  14. Our approach takes ideas from both filtered importance sampling, and the Killzone approach. We stochastically shoot rays, but also perform filtering in the image space, although with a pretty fancy kernel.
  15. We begin by importance sampling directions from surfaces, but shoot only very few rays, even as low as just one per pixel.
  16. The major difference is that instead of returning the colors immediately, we merely store the intersection points. Then we have a ‘resolve’ pass, which reuses those intersection points across neighboring pixels.
  17. During the reuse, we calculate the desired blur level of each sample, using an approximate cone fit, and taking the reflection distance into account. We carefully weigh the contributions by each pixel’s BRDF, and that way we avoid normal smearing, over-blurring, and we retain sharp contact points.
  18. Here’s a few screenshots of the results. I’ll show a video later as well.
  19. Now that you (hopefully) liked the results, let’s take a look at the break-down of our algorithm. We begin by classifying which areas of the screen need reflections, and estimating how much work we need to spend in each tile. Then we adaptively allocate rays, and importance sample them. Next, we perform the ray-tracing in two flavors. After all the ray-tracing is done, we resolve the colors via a ray-reuse filter. Finally, we reproject the previous frame’s results and blend them in. This temporal filter significantly reduces noise.
  20. Diving deeper, let’s take a closer look at all the steps. The first one is tile-based classification. We subdivide the screen into square tiles, and we estimate for each tile how many rays we need for it. We let the user specify a min and max number of rays that should be shot per pixel, and this pass decides on a value between the min and max. To do so, we actually trace some rays for each tile, and guesstimate the perceptual variance of the reflection. This catches stark contrasts in the image, and allocates work where the reflection is estimated to be more noisy. We can also skip tiles from further calculations entirely if none of the generated rays hit anything.
  21. Once we know which tiles need SSR, we launch an indirect compute pass which decides what kind of ray-tracing is needed for each pixel. We have two ray-tracing algorithms. There’s the super precise hierarchical Z tracer, which gives exact intersection points. We use this one on smooth surfaces, as for mirror-like reflections we would like the intersection points to be as precise as possible. For rough reflections, we cheat and use a low quality linear marcher, as the result will be heavily filtered anyway, and exact intersection points aren’t that important.
  22. Just a quick note about the hierarchical tracing. A description of it is in Yasin’s article in GPU Pro 5 [Uludag14] and a few other publications [Pearce15] [Hermanns15], but it’s probably good to mention it here as well. The idea is quite simple. We have a min-z pyramid, which is basically a quad-tree.
  23. We start at the most refined level of the hierarchy. Regardless of which level we are in at a given moment, we step the ray to the edge of the cell, or until the intersection with the Z plane in the cell. If we don’t hit the Z plane, we move to a coarser level.
  24. This provides efficient empty-space skipping.
  25. Once we find an intersection with a Z plane, we move to a finer level in the hierarchy.
  26. … again.
  27. … and again.
  28. Once we hit the Z plane at the coarsest level, we have found the intersection.
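To make the traversal in notes 22–28 concrete, here is a one-dimensional toy analogue of the Hi-Z stepping logic (not the production tracer: the real version walks a 2D screen-space min-Z pyramid, and the function names, the right-marching restriction, and the depth convention are all assumptions of this sketch):

```python
def build_min_pyramid(depth):
    """Mip 0 is the depth buffer itself; each coarser mip is the min of two texels."""
    pyramid = [list(depth)]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        pyramid.append([min(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return pyramid

def hiz_trace(pyramid, x, z, dx, dz, max_iters=64):
    """March the ray (x, z) + t * (dx, dz) against the min-Z pyramid.
    A hit is where the ray's z reaches the stored depth; returns the hit
    texel index, or None if the ray leaves the buffer."""
    assert dx > 0 and dz > 0  # sketch: ray marches right and away from camera
    level = 0
    for _ in range(max_iters):
        cell = int(x) >> level
        if cell >= len(pyramid[level]):
            return None                               # ray left the screen
        min_z = pyramid[level][cell]
        t_edge = (((cell + 1) << level) - x) / dx     # t at which we exit the cell
        if z + t_edge * dz < min_z:
            # Everything in this cell is behind the ray segment: skip the
            # whole cell and try an even coarser level (empty-space skipping).
            x += t_edge * dx
            z += t_edge * dz
            level = min(level + 1, len(pyramid) - 1)
        else:
            # The ray crosses the cell's min-Z plane: clamp the step to the
            # plane and refine; at the finest level, that is the intersection.
            t_plane = max((min_z - z) / dz, 0.0)
            x += t_plane * dx
            z += t_plane * dz
            if level == 0:
                return cell
            level -= 1
    return None
```

A ray that clears a whole cell climbs to coarser levels and skips large spans; a ray that crosses the min-Z plane descends until level 0 reports the hit.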
  29. I have mentioned importance sampling before, but you may not be intimately familiar with the concept. It is a variance reduction technique for Monte Carlo integration. Suppose that we want to integrate this function; we need a probability distribution from which we can generate samples. It should be as close as possible to the original function that we’re integrating. We generate samples from the PDF, and for each one we calculate the ratio of the function value to the value of the PDF. As the number of samples increases, the average of those ratios approaches the value of our integral.
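A minimal illustration of that estimator (the toy integrand, pdf, and function names below are mine, not from the talk):

```python
import math, random

def importance_sample_integral(f, draw_sample, n, rng):
    """Monte Carlo with importance sampling: draw x ~ pdf, average f(x)/pdf(x)."""
    total = 0.0
    for _ in range(n):
        x, pdf = draw_sample(rng)
        total += f(x) / pdf
    return total / n

# Toy problem: integrate f(x) = 3x^2 over [0, 1] (exact value: 1), drawing
# samples from the reasonably well-matched pdf p(x) = 2x via inversion.
def draw_linear(rng):
    x = math.sqrt(max(rng.random(), 1e-12))  # guard against pdf = 0 at x = 0
    return x, 2.0 * x
```

Because the pdf follows the integrand's general shape, the f/pdf ratios stay nearly constant and the estimate converges quickly, which is exactly why the ray directions are drawn from the BRDF rather than uniformly.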
  30. We use importance sampling of the BRDF in order to generate ray directions. Any BRDF can be used here; we happen to worship GGX. Now, we can’t generate lousy rays. Ray-tracing is expensive, so we spend some extra effort to generate very nice rays. As a bare minimum, we use a Halton sampler, with the values pre-calculated on the CPU side. Now, the classic way for importance sampling BRDFs works by generating half-angle vectors, and reflecting them off the microfacet normal. This results in some rays actually pointing below the surface. We can’t use those, so we re-generate them a few times until we get nice ones. The extra cost is actually negligible here. One could go even further, and importance sample from the Distribution of Visible Normals, as proposed by Eric Heitz and Eugene D’Eon. This will probably considerably reduce the noise, and is certainly future work for us.
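A sketch of that generation step, using the standard GGX NDF inversion from [Karis13] and the retry-on-below-surface loop the note describes (tangent space with the normal at +Z; the function names and the mirror fallback are assumptions of this sketch):

```python
import math, random

def sample_ggx_half_vector(roughness, u1, u2):
    """Inversion-method GGX NDF sampling in tangent space (normal = +Z),
    with the alpha = roughness^2 remapping from [Karis13]."""
    a = roughness * roughness
    cos_theta = math.sqrt((1.0 - u1) / (1.0 + (a * a - 1.0) * u1))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def reflect(v, h):
    """Mirror the view vector about the sampled microfacet normal."""
    d = 2.0 * (v[0] * h[0] + v[1] * h[1] + v[2] * h[2])
    return (d * h[0] - v[0], d * h[1] - v[1], d * h[2] - v[2])

def sample_reflection_dir(view, roughness, rng, max_tries=4):
    """Re-generate the few rays that end up below the surface, as the note
    describes; the mirror-direction fallback is an assumption of this sketch."""
    for _ in range(max_tries):
        h = sample_ggx_half_vector(roughness, rng.random(), rng.random())
        l = reflect(view, h)
        if l[2] > 0.0:                 # above the surface: keep it
            return l
    return reflect(view, (0.0, 0.0, 1.0))  # fallback: mirror direction
```

With roughness 0 the half-vector collapses to the normal and the result is the exact mirror direction; with higher roughness the directions disperse into a lobe.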
  31. Once we’ve performed all the ray-tracing, it’s time to calculate the colors of our picture. The simplest way is to look-up the colors at the intersections of each pixel’s rays. But then we would need plenty of rays per pixel to achieve a noise-free result. Most of the time however, we can steal some rays from neighboring pixels. Usually the surface properties will be similar, and visibility is almost always the same. Therefore we pretend the neighboring rays were shot from the local pixel, and use their intersection points.
  32. Of course we can’t just arbitrarily reuse the intersection points. We still need to be careful to produce the correct result. All the rays’ contributions need to be correctly weighed by the current pixel’s BRDF, and divided by the PDF that they were sampled from. … and then it turns out that we have been overly optimistic. The neighboring probability distributions can be quite different from the BRDF values at the local pixel, causing pretty big spikes in the f over pdf ratio.
  33. Let’s take a look at an example scene, where there’s roughness and normal variation.
  34. Here’s just the reflections, calculated with one ray per pixel at half-res, and without any ray reuse.
  35. And here we use four rays in the resolve pass. A few surfaces became smoother, but we have plenty of high-intensity speckles all over the ground. Overall, this is a worse result than without any ray reuse.
  36. We can solve that, but we must first take a step back and take a look at what exactly we’re trying to solve here. This is a version of the scattering integral, with the emitted light removed for brevity. The reflection’s value is determined by a hemi-spherical integral of the incident radiance weighed by the BRDF. Here, the BRDF and cosine term divided by the PDF value is the source of variance. After summing up all the samples, spikes in those ratios result in pixels with overly bright or dark colors.
  37. We attack the variance problem directly, as will become obvious in a few moments. We take the original scattering equation, and we multiply its numerator and denominator by the same value. As long as it’s not zero, that’s an identity transformation.
  38. We then replace one of the integrals with a constant which we can pre-calculate offline. Chances are that you are already calculating this value in your engine; it happens to be the same thing which we normally use for the split integral formulation for image based lighting. See for example Brian Karis’s presentation in the Physically Based Rendering course from 2013 [Karis13].
  39. And finally, we calculate the remaining bit with Monte Carlo quadrature.
  40. Here’s the same thing again, with some annotations. What this basically boils down to is a weighted sum. The BRDF over PDF values are weights, and divide the result by their sum at the end.
  41. The code ends up being super simple too.
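Stripped of the shader plumbing, the resolve from the previous slides reduces to a normalized weighted sum; a CPU-side sketch (the tuple layout and function name are illustrative, not the production compute shader):

```python
def resolve_pixel(neighbor_samples):
    """Each reused sample is (color_rgb, local_brdf, ray_pdf): weigh the hit
    color by the *local* pixel's BRDF over the PDF the ray was drawn from,
    then divide by the sum of weights so spikes in brdf/pdf cancel out."""
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for color, brdf, pdf in neighbor_samples:
        w = brdf / pdf
        weight_sum += w
        for c in range(3):
            total[c] += w * color[c]
    if weight_sum <= 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(t / weight_sum for t in total)
```

Note that if all samples carry the same color, the output is that color regardless of how wildly the brdf/pdf weights vary; that is the variance reduction at work.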
  42. Here we have the same pictures again. First, without ray reuse.
  43. Now with ray reuse again, without the new variance reduction trick.
  44. And now with the variance reduction. Much saner now.
  45. For a comparison, this is without ray reuse, but with four rays /traced/ per pixel. It looks a bit nicer due to more uniform noise distribution, but otherwise variance is pretty similar.
  46. Just one ray and no reuse again.
  47. And now with the variance reduction. Much saner now.
  48. Another image; here we trace 4 rays per pixel, and use 4 samples to resolve. So 16 color values integrated, with only four rays traced per pixel. It’s starting to look nice, but looks like it will be still too expensive to get a noise-free result.
  49. … which is why we use everyone’s favorite technique these days, which is temporal filtering. We get a similar result to the previous image, except here we still only trace one ray per pixel.
  50. The weight re-normalization allows us to do one more neat thing. What we’re doing is saying “The BRDF weights *must* integrate to one”, even if we have poorly fitting rays. Thing is, we will usually find some rays in the neighbors which we can use. Even if the given pixel didn’t trace any rays for itself. What this enables us to do, is sparse ray-tracing. We don’t need to shoot rays for every pixel, we just need to resolve at full resolution. We will find *some* rays which will suit us anyway. And we will properly weigh them by the full-res BRDF.
  51. Here’s the last image again. This is a half-resolution result.
  52. … and here with a full-res resolve. We’re still ray-tracing at half-res, but we get full-res detail. Noise is lower here as well, as our temporal filter is designed with the full-res resolve in mind.
  53. I mentioned temporal reprojection, so let’s have a quick look at it now. The idea is simple: given a pixel’s 3D coordinate, check where it was in the previous frame, project it, and we have the texcoord to sample. … except when we move the viewport, reflections exhibit parallax. That is, the reflected objects move according to their depth, not the depth of the surface which reflects them. Attempting to reproject reflections using the closer depth results in smearing. So we calculate the average depth of the reflected objects, and reproject using that. This results in much reduced smearing.
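One possible construction of that reprojection, under a flat-mirror assumption (the note only states that the average reflection depth is used; the virtual-point construction, the names, and the row-major matrix convention here are my assumptions):

```python
def mat_vec(m, v):
    """Row-major 4x4 matrix times a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject_reflection(surface_pos, view_dir, avg_hit_dist, prev_view_proj):
    """Extend the view ray past the reflecting surface by the average hit
    distance (flat mirror: the reflected object appears that far *behind*
    the surface), then project the virtual point with last frame's
    view-projection matrix. Returns [0, 1] texture coordinates."""
    p = [surface_pos[i] + view_dir[i] * avg_hit_dist for i in range(3)] + [1.0]
    clip = mat_vec(prev_view_proj, p)
    ndc = [clip[i] / clip[3] for i in range(3)]
    return (0.5 + 0.5 * ndc[0], 0.5 - 0.5 * ndc[1])
```

Using the surface depth instead of the virtual point is what causes the smearing the note describes, because the reflected content moves with its own parallax, not the surface's.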
  54. We are actually quite nasty with our importance sampling. We bias it quite a bit. It is difficult to eliminate all the noise without using a crazy number of samples. A lot of it comes from the tail of the BRDF, especially for distributions like GGX. Which is why we shift the generated samples toward the mirror direction. In a fancy way, because we still need accurate PDF values for importance sampling. It’s actually quite easy to sample a ‘truncated distribution’ when using the inversion method… … we just offset the pseudo-random value used to generate the offset from the mirror direction. This has the effect of redistributing the samples from the ‘truncated’ region proportionally in the ‘untruncated’ region of the function. The PDF values are the same except for a constant scale factor… and we don’t need to calculate it, since our variance reduction scheme re-normalizes all the constant factors anyway.
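The offsetting trick can be sketched with the standard GGX inversion mapping (the alpha parameterization and names are mine; only the idea of squashing the random value toward the mirror direction is from the talk):

```python
import math

def ggx_theta(u, alpha):
    """Inversion-method GGX mapping: uniform u in [0, 1) -> half-vector polar
    angle; u near 0 maps to the mirror direction, u near 1 to the tail."""
    cos_t = math.sqrt((1.0 - u) / (1.0 + (alpha * alpha - 1.0) * u))
    return math.acos(cos_t)

def biased_u(u, bias):
    """Squash the random value toward 0: the tail of the distribution is cut
    off and its samples are redistributed proportionally over the kept
    region, so PDF values change only by a constant factor (which the
    resolve's weight re-normalization cancels anyway)."""
    return u * (1.0 - bias)
```

The maximum sampled angle shrinks with the bias, while the relative likelihood of the remaining angles is preserved.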
  55. Another variance reduction technique we use is filtered importance sampling. I have already covered the ‘importance sampling’ bit, so the filtering part is quite easy to add. Instead of fetching the high frequency color buffer for our reflections, we use appropriately pre-blurred versions of it. We pretend that the rays we trace are cones, and we estimate their footprint at intersection points. From that, we get the mip level of a pre-blurred image pyramid to use.
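A plausible shape of that footprint-to-mip mapping (the talk does not give the exact formula; the names and the log2 mapping below are illustrative):

```python
import math

def cone_mip_level(cone_tangent, hit_distance, pixels_per_unit, max_mip):
    """Cone footprint radius at the hit point, converted to pixels; log2 of
    it picks the pre-blurred pyramid level whose texel size matches."""
    radius_px = cone_tangent * hit_distance * pixels_per_unit
    return min(math.log2(max(radius_px, 1.0)), max_mip)
```

A wider cone or a more distant hit covers more pixels, so a blurrier level of the image pyramid is fetched; nearby hits stay at mip 0, which is the contact-hardening behavior.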
  56. The sampling of pre-blurred image levels can be biased to produce blurrier reflections than necessary. This is useful to counter the bias in importance sampling, which over-sharpens reflections. We wire the two together, such that we end up with a single ‘bias’ parameter, which goes from “pure unbiased Monte Carlo” to tracing in the mirror direction, and taking all reflection blurriness from the filter. Increasing the bias results in improved performance in two ways. The ray-tracing becomes more coherent, and we sample smaller color mips. Pushing it *too* high is not desirable however, as we introduce some discontinuities and leaks, since proximity in screen-space doesn’t necessarily mean proximity in the reflection. Increasing the bias also reduces the desirable elongation effect.
  57. Here’s our scene with a bias of zero.
  58. And again with 0.7. Some of the reflections changed slightly, but that is mostly due to the temporal filter’s neighborhood clamping. Otherwise the look of the image is similar, yet the noise is reduced, and the performance is improved.
  59. Remember the full-res color resolve? We actually run it four pixels at a time, and reuse the same rays for all four pixels. It becomes more VGPR heavy, but lets us save some memory bandwidth, so it’s a win in practice. We only need to fetch the ray data once for all four pixels, and we also calculate the hit point screen-space coordinates once, as they are the same. The thing which doesn’t necessarily have to be the same however is the mip level of the color buffer to fetch for each pixel. The pixels can have four different roughness values, and therefore should use different pre-filtered images. Here we use a similar trick as the DXT format uses for color compression. We find the min and max anchor mip levels for the four pixels, and we interpolate between them. That way we take only two color buffer samples instead of four.
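The two-anchor trick can be sketched as follows, with `sample_color(mip)` standing in for the pre-filtered texture fetch (names and layout are mine):

```python
def resolve_quad_mips(pixel_mips, sample_color):
    """Fetch the pre-filtered color only at the quad's min and max mip
    levels, then blend per pixel by where its own mip sits between the two
    anchors (like DXT endpoint interpolation): two fetches instead of four."""
    lo, hi = min(pixel_mips), max(pixel_mips)
    c_lo, c_hi = sample_color(lo), sample_color(hi)
    out = []
    for m in pixel_mips:
        t = 0.0 if hi == lo else (m - lo) / (hi - lo)
        out.append(tuple(a + (b - a) * t for a, b in zip(c_lo, c_hi)))
    return out
```

When the four roughness values are similar, the anchors are close together and the interpolation error is negligible, which matches the comparison images on the next slide.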
  60. To illustrate that the mip anchor interpolation works, here we have some comparison images. At the bottom, we’re using just one mip level for the four pixels we resolve; this results in a blurry image. Then at the top we have four unique mip levels, which is the reference. In the middle we have the result of interpolation. As you can see, the difference is barely noticeable.
  61. Here’s a quick slide with performance on the PS4. As you can see, the ray reuse scheme allows us to quite efficiently reduce the ray tracing cost.
  62. One can try to filter the importance sampling noise. Similarly to how the Killzone approach blurs mirror-like reflections in screen-space, we can filter the noisy reflections. Only we need a smarter filter, which respects surface details. Yusuke Tokuyoshi has recently published a really cool paper about lobe-aware filtering [Tokuyoshi15], which produces great results. He uses it to de-noise path-tracing and real-time cone tracing. An image is first rendered using importance sampling. Then the filtering process compares the similarity of outgoing specular lobes through the use of fitted spherical gaussians. The similarity acts as a weight in a bilateral filter. This approach fits almost all the bills, except contact hardening. The lobe similarity does not take object proximity into account, therefore we lose sharp contacts between objects.
  63. Yasin Uludag published a screen-space cone tracing algorithm in GPU Pro 5, which ticks many of the boxes on our list of requirements; everything except elongation. // See also [Pearce15] and [Hermanns15] The algorithm starts by tracing a ray, not a cone. Once the ray hits something, the algorithm pretends it was a cone, and samples a pre-filtered color buffer. The contribution is weighed by a calculated coverage of the pre-filtered area. It then marches forward and accumulates more samples, until full coverage is obtained.
  64. The trouble is with the coverage calculation, and with tracing following the first hit. The depth buffer only contains the first visible surface, so we don’t actually know the thickness of objects. We can do some guesswork, or assume a constant thickness. But the algorithm still won’t know whether it just hit a solid wall and should stop, or whether the thing it hit was a thin surface and it should continue. The cone trace after the first hit can’t be performed accurately either, since we can’t trace behind objects. We would need to perform depth peeling, or have other information about the scene structure. Overall, in simple scenes the algorithm produces great results, but it can produce discontinuities and artifacts which are difficult to get rid of.
  65. Just a quick note about the pre-integrated FG term that I’d mentioned before. It is much better to multiply by it after the temporal filter, so that the changes in it do not affect the neighborhood clamping. They also don’t need to be re-projected, since the pre-integrated value is noise free anyway. In our case, we multiply by the FG term in the pass which applies SSR to the screen. Its value is also used for image-based lights, so it effectively costs just one multiply operation.
  66. There’s one more detail to the multi-pixel resolve that I need to mention. We process four pixels at the same time, but those are not the four closest pixels. Doing so would produce half-resolution noise, which doesn’t play well with Temporal AA due to its neighborhood clamping in a 3x3 neighborhood. We actually spread out the pixels such that the distance between them is greater than the TAA window. Every frame we change which four pixels are resolved together, and let TAA resolve that to a smooth high-resolution image.
  67. I actually got the idea for ray reuse while reading Eric Veach’s thesis, particularly the bit about multiple importance sampling. The insight I had is that I could treat the neighboring pixels as ray generation strategies. That way, if we get an unlikely ray with a high BRDF over PDF ratio, we can ask the other neighbors “so what’s the chance that /you/ would give me this ray”, and weigh down the contribution of the unlikely one. The approach worked fine, and produced great results. But it was also too expensive. All the careful weighing was ALU and VGPR hell.
  68. To reduce temporal smearing and lag, we use neighborhood clamping. This is similar to the temporal anti-aliasing algorithm from Brian Karis [Karis14], but it’s tuned a bit differently. The values here are generally much more noisy, so the color bounding box clipping can’t be exact like in TAA. It turns out that we can’t remove all the reprojection lag anyway, since the reflection color buffer we sample in the resolve pass is from the previous frame. With that in mind, if we give some slack to the neighborhood clamping, it will only marginally increase the lag and smearing.