


3ds Max plugin developer wanted

As much as we despise Autodesk and would rather see the entire company go down in a pool of radioactive, fiery plasma (the eyebrow-scorching kind that is), the fact of the matter is that a sizeable chunk of the 3d artists out there remains loyal to 3ds Max for whatever reason. Due to this shocking fact, we're looking for an outstanding 3ds Max plugin developer with the skills to integrate our technology into 3ds Max (this role is in addition to the two roles advertised in the previous post: the graphics developer and full-stack developer).    

What we're looking for:

- 2+ years of experience developing plug-ins for 3ds Max
- Solid understanding of 3d artist workflows
- Experience with rendering (this is a rendering plug-in) 
- Knowledge of real-time data streaming protocols and technologies (WebSocket etc.) desirable
- Keen to keep abreast of the latest cutting-edge technologies in the fields of graphics and rendering

This is a remote contracting role. Send your application to sam.lapere@live.be


Freedom of noise: Nvidia releases OptiX 5.0 with royalty-free AI denoiser

2018 will be bookmarked as a turning point for Monte Carlo rendering, thanks to the wide availability of fast, high quality denoising algorithms. A large part of the credit goes to Nvidia Research: Nvidia just released OptiX 5.0 to developers, which contains a new GPU-accelerated post-processing denoising filter.



The new denoiser was trained with machine learning on a database of thousands of rendered images and runs pretty much in real-time. The OptiX 5.0 SDK contains a sample program of a simple path tracer with the denoiser running on top (as a post-process). The results are nothing short of stunning: noise disappears almost completely, even difficult indirectly lit surfaces like refractive (glass) objects and shadowy areas clear up remarkably fast, and the image progressively gets closer to the ground truth.

The OptiX denoiser works great for glass and dark, indirectly lit areas
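For developers, wiring the denoiser into an existing OptiX pipeline only takes a few lines of host code. Here is a minimal sketch, roughly following the SDK's denoiser sample (buffer creation and the path tracer itself are omitted; treat it as illustrative rather than production code):

```cpp
// Minimal sketch of attaching the OptiX 5.0 DL denoiser as a post-process,
// loosely based on the SDK's denoiser sample.
#include <optixu/optixpp_namespace.h>

void renderWithDenoiser(optix::Context context,
                        optix::Buffer inputBuffer,   // noisy beauty pass (float4)
                        optix::Buffer outputBuffer,  // denoised result (float4)
                        unsigned width, unsigned height)
{
    // The denoiser ships as a built-in post-processing stage called "DLDenoiser".
    optix::PostprocessingStage denoiser =
        context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiser->declareVariable("input_buffer")->set(inputBuffer);
    denoiser->declareVariable("output_buffer")->set(outputBuffer);
    // "blend" mixes the noisy and denoised images (0 = fully denoised).
    denoiser->declareVariable("blend")->setFloat(0.0f);

    // Launch the path tracer (entry point 0), then run the denoiser on top.
    optix::CommandList commandList = context->createCommandList();
    commandList->appendLaunch(0, width, height);
    commandList->appendPostprocessingStage(denoiser, width, height);
    commandList->finalize();
    commandList->execute();
}
```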

While the denoiser generally does a fantastic job, it is not yet optimised for areas that converge quickly, and in some instances it overblurs and fails to preserve texture detail, as shown in the screen grab below (perhaps this can be solved with more training data):

Overblurring of textures
The denoiser is provided free for commercial use (royalty-free), but requires an Nvidia GPU. It works with both CPU and GPU rendering engines and is already implemented in Iray (Nvidia's own GPU renderer), V-Ray (by Chaos Group), Redshift Render and Clarisse (a CPU based renderer for VFX by Isotropix).

Some videos of the denoiser in action in OptiX, Iray, V-Ray, Redshift and Clarisse:

Optix 5.0: youtu.be/l-5NVNgT70U



Iray: youtu.be/yPJaWvxnYrg

This video provides a high-level explanation of the deep learning algorithm behind the OptiX/Iray denoiser, based on the Nvidia research paper "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder".



V-Ray 4.0: youtu.be/nvA4GQAPiTc




Redshift: youtu.be/ofcCQdIZAd8 (and a post from Redshift's Panos explaining the implementation in Redshift)


ClarisseFX: youtu.be/elWx5d7c_DI



Other renderers like Cycles and Corona already have their own built-in denoisers, but will probably benefit from the OptiX denoiser as well (especially Corona, which was acquired by Chaos Group in September 2017).

The OptiX team has indicated that they are researching an optimised version of this filter for use in interactive to real-time photorealistic rendering, which might find its way into game engines. Real-time noise-free photorealistic rendering is tantalisingly close.

Technical 3D artist wanted

The Blue Brain Project is currently looking for an exceptional technical 3D artist to join the scientific visualisation team. Blue Brain is a Swiss initiative based at the Biotech Campus in Geneva (part of the Ecole Polytechnique Fédérale de Lausanne), which aims to digitally reconstruct and simulate the mammalian brain in a supercomputer for the purpose of advancing knowledge in brain science and medicine and applying this knowledge to engineering fields (such as robotics). 

Desired skills and experience:
  • Good knowledge of scripting in Python to automate rendering tasks
  • Experience with Blender, Cycles and node based shaders
  • Knowledge of biomedical sciences (specifically neuroscience) is a plus
  • Passionate about explaining complex scientific ideas in an easy to understand way 
  • Good command of written and spoken English
  • Desire to live in Switzerland near the Alps and be part of an international environment

More requirements can be found at https://emploi.epfl.ch/page-141270-en.html

If you have any questions about this position, you can email me at samuel.lapere@epfl.ch

New GPU path tracer announced

Jacco Bikker just announced a new GPU based path tracer on the ompf forum. There's also a demo version available that you can grab from this post.


Real-time path tracing with OctaneRender 1.5

Just want to share a couple of real-time rendered videos made with the upcoming OctaneRender 1.5. The scene used in the videos is the same one that was used for the Brigade 3 launch videos. The striking thing about Octane is that you can navigate through this scene in real-time while having an instant final quality preview image. It converges in just a few seconds to a noise free image, even with camera motion blur enabled. It's both baffling and extremely fun. 

The scene geometry contains 3.4 million triangles without the Lamborghini model, and 7.4 million triangles with it (the Lamborghini alone has over 4 million triangles). All videos below were rendered in real-time on 4 GTX 680 GPUs. Because of the 1080p video capture, the framerate you see in the videos is less than half the framerate you get in real life; in person it's incredibly smooth.



There are a bunch more real-time rendered videos and screenshots of the upcoming OctaneRender 1.5 in this thread on the Octane forum (e.g. on page 7).

Lamborghini Test Drive (OctaneRender animation)

Recently I was blown away by a video posted by niuq.cam on the Octane forum, called "Lamborghini Test Drive", a tribute celebrating the 50th anniversary of Lamborghini. The realism you can achieve with Octane is just batshit crazy, as evidenced by the video.

Try to spot the 7 differences with reality:


Some specs:

- the scene is 100% 3D, all rendered with Octane
- rendered on 4x GTX Titan
- render resolution 1280 x 538, Panavision format (2.39:1)
- average render time per frame: from 1 minute for the large shots with the cars to 15 minutes for the helmet shots by night
- over 5,000,000 triangles for both cars
- instances for the landscape


Real-time path tracing on a 40 megapixel screen

The Blue Brain Project is a Switzerland based computational neuroscience project which aims to demystify how the brain works by simulating a biologically accurate brain using a state-of-the-art supercomputer. The simulation runs at multiple scales and goes from the whole brain level down to the tiny molecules which transport signals from one cell to another (neurotransmitters). The knowledge gathered from such an ultra-detailed simulation can be applied to simulating drug therapies for neurological diseases (computational medicine) and developing self-thinking machines (computational intelligence).

To visualize these detailed brain simulations, we have been working on a high performance rendering engine, aptly named "Brayns". Brayns uses ray tracing to render massively complex scenes comprised of trillions of molecules interacting in real-time on a supercomputer. The core ray tracing intersection kernels in Brayns are based on Intel's Embree and Ospray high performance ray tracing libraries, which are optimised to render on recent Intel CPUs (such as the Skylake architecture). These CPUs are basically a GPU in CPU disguise (they are based on Intel's defunct Larrabee GPU project), but they can render massive scientific scenes in real-time as they can address over a terabyte of RAM. What makes these CPUs ultrafast at ray tracing is a neat feature called AVX-512, a set of vector extensions which (in combination with ispc) can run several ray tracing calculations in parallel, resulting in blazingly fast CPU ray tracing performance that rivals a GPU and even beats it when the scene becomes very complex.
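To give an idea of what sits underneath, here is a minimal sketch of tracing a single ray through Embree's kernels (Embree 3 API; illustrative only, not Brayns' actual code):

```cpp
// Tracing one ray through an already-built Embree scene.
#include <embree3/rtcore.h>
#include <limits>

bool traceRay(RTCScene scene, const float org[3], const float dir[3])
{
    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);

    RTCRayHit rayhit = {};
    rayhit.ray.org_x = org[0]; rayhit.ray.org_y = org[1]; rayhit.ray.org_z = org[2];
    rayhit.ray.dir_x = dir[0]; rayhit.ray.dir_y = dir[1]; rayhit.ray.dir_z = dir[2];
    rayhit.ray.tnear = 0.0f;
    rayhit.ray.tfar  = std::numeric_limits<float>::infinity();
    rayhit.ray.mask  = 0xFFFFFFFF;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    // Embree vectorises traversal internally; the AVX-512 units really shine
    // with the packet variants (rtcIntersect8/16). The single-ray call is
    // shown here for brevity.
    rtcIntersect1(scene, &ctx, &rayhit);
    return rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID;
}
```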

Besides using Intel's superfast ray tracing kernels, Brayns has lots of custom code optimisations which allow it to render a fully path traced scene in real-time. These are some of the features of Brayns:
  • hand optimised BVH traversal and geometry intersection kernels
  • real-time path traced diffuse global illumination
  • Optix real-time AI accelerated denoising
  • HDR environment map lighting
  • explicit direct lighting (next event estimation)
  • quasi-Monte Carlo sampling
  • volume rendering
  • procedural geometry
  • signed distance fields raymarching 
  • instancing, allowing billions of dynamic molecules to be visualized in real-time
  • stereoscopic omnidirectional 3D rendering
  • efficient loading and rendering of multi-terabyte datasets
  • linear scaling across many nodes
  • optimised for real-time distributed rendering on a cluster with high speed network interconnection
  • ultra-low latency streaming to high resolution display walls and VR caves
  • modular architecture which makes it ideal for experimenting with new rendering techniques
  • optional noise and gluten free rendering
Below is a screenshot of an early real-time path tracing test on a 40 megapixel curved screen powered by seven 4K projectors: 

Real-time path traced scene on an 8 m by 3 m (25 by 10 ft) semi-cylindrical display,
powered by seven 4K projectors (40 megapixels in total)

Seeing this scene projected lifesize in photorealistic detail on a 180 degree stereoscopic 3D screen and interacting with it in real-time is quite a breathtaking experience. Having 3D molecules zooming past the observer will be the next milestone. I haven't felt this thrilled about path tracing in quite some time.



Technical/Medical/Scientific 3D artists wanted 


We are currently looking for technical 3D artists to join our team to produce immersive neuroscientific 3D content. If this sounds interesting to you, get in touch by emailing me at sam.lapere@live.be

Accelerating path tracing by using the BVH as multiresolution geometry

Before continuing the tutorial series, let's have a look at a simple but effective way to speed up path tracing. The idea is quite simple: like an octree, a bounding volume hierarchy (BVH) can double as both a ray tracing acceleration structure and a representation of the scene geometry at multiple levels of detail (a multiresolution geometry representation). Specifically, the axis-aligned bounding boxes of the BVH nodes at different depths in the tree serve as more or less crude approximations of the geometry.

Low detail geometry enables much faster ray intersections and is useful when light effects don't require full geometric accuracy, for example motion blur, glossy (blurry) reflections, soft shadows, ambient occlusion and indirect illumination. Especially when geometry is not directly visible in the view frustum or in specular (mirror-like) reflections, using geometry proxies can provide a significant speedup (depending on the error tolerance) at a nearly imperceptible loss in quality.

Intersecting proxy boxes instead of triangles has two benefits:

- triangle intersection is skipped entirely
- only ray/box intersections remain, which also reduces thread divergence on the GPU

The renderer determines the appropriate level of detail based on the distance from the camera (for primary rays) or from the ray origin (for secondary rays). The following screenshots show the bounding boxes of the BVH nodes from depth 1 (depth 0 being the root node) up to depth 12:
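In code, the core idea could look something like the sketch below (illustrative only; the helper names and the lodScale threshold are made up, not the actual implementation):

```cpp
// Sketch of using interior BVH nodes as proxy geometry. Traversal stops
// early when a node's box is small relative to the distance the ray has
// travelled, and the box itself is intersected instead of the triangles.
// Assumed helpers (not shown): float3, Ray, a binary Node with bounds and
// children, slab-test intersectAABB(), maxExtent() and aabbFaceNormal().
struct Hit { float t; float3 normal; };

bool intersectLOD(const Node* node, const Ray& ray, float lodScale, Hit& hit)
{
    float tBox;
    if (!intersectAABB(node->bounds, ray, tBox))
        return false;

    // Proxy criterion: the box is smaller than the allowed footprint at
    // this distance (the footprint grows the further the ray travels).
    if (node->isLeaf() || maxExtent(node->bounds) < lodScale * tBox) {
        hit.t      = tBox;
        hit.normal = aabbFaceNormal(node->bounds, ray, tBox); // axis-aligned face normal
        return true;
    }

    // Otherwise descend and keep the closest child hit.
    Hit l, r;
    bool hl = intersectLOD(node->child(0), ray, lodScale, l);
    bool hr = intersectLOD(node->child(1), ray, lodScale, r);
    if (!hl && !hr) return false;
    hit = (hl && (!hr || l.t < r.t)) ? l : r;
    return true;
}
```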





The screenshot below shows only the bounding boxes of the leaf nodes:


Normals are computed from the axis-aligned face of the bounding box that the ray intersects.

TODO link to github code, propose fixes to fill holes, present benchmark results (8x speedup), get more timtams 

Nvidia gearing up to unleash real-time ray tracing to the masses

In the last two months, Nvidia roped in several high profile, world class ray tracing experts (mostly with a CPU ray tracing background):

Matt Pharr

One of the authors of the Physically Based Rendering books (www.pbrt.org; some say it's the bible of Monte Carlo ray tracing). Before joining Nvidia, he worked at Google with Paul Debevec on Daydream VR, light fields and Seurat (https://www.blog.google/products/google-ar-vr/experimenting-light-fields/), none of which took off in a big way for some reason.

Before Google, he worked at Intel on Larrabee (Intel's failed attempt at making a GPGPU for real-time ray tracing and rasterisation that could compete with Nvidia GPUs) and on ISPC, a specialised compiler intended to extract maximum parallelism from the new Intel chips with AVX extensions. He described his time at Intel in great detail on his blog: http://pharr.org/matt/blog/2018/04/30/ispc-all.html (sounds like an awful company to work for).

Intel also bought Neoptica, Matt's startup, which was supposed to research new and interesting rendering techniques for hybrid CPU/GPU chip architectures like the PS3's Cell.


Ingo Wald

Pioneering researcher in the field of real-time ray tracing from the Saarbrücken computer graphics group in Germany, who later moved to Intel and the University of Utah to work on very high performance CPU based ray tracing frameworks such as Embree (used in Corona Render and Cycles) and Ospray.

His PhD thesis, "Real-time ray tracing and interactive global illumination" (2004), describes a real-time GI renderer running on a cluster of commodity PCs, as well as hardware accelerated ray tracing (OpenRT) on a custom fixed function ray tracing chip (SaarCOR).

Ingo contributed a lot to the development of high quality ray tracing acceleration structures (built with the surface area heuristic).
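For the uninitiated, the surface area heuristic estimates the expected cost of a candidate BVH split from the children's surface areas. A minimal sketch (the cost constants are illustrative):

```cpp
// Surface area heuristic (SAH): the expected cost of splitting a node P
// into children L and R is proportional to each child's surface area
// (the probability that a random ray hitting P also hits that child)
// times the number of primitives it contains.
float sahCost(float saL, int numL, float saR, int numR, float saP)
{
    const float cTraversal = 1.0f; // cost of one traversal step (illustrative)
    const float cIntersect = 2.0f; // cost of one primitive test (illustrative)
    return cTraversal + cIntersect * (saL / saP * numL + saR / saP * numR);
}
// A builder evaluates this for candidate splits and keeps the cheapest one.
```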


Eric Haines

Main author of the famous Real-Time Rendering blog, who until recently worked for Autodesk. He also used to maintain the Real-time Raytracing Realm and the Ray Tracing News.


What connects these people is that real-time ray tracing runs in their blood, so having them all united under one roof is bound to produce fireworks.

With these recent hires and initiatives such as RTX (Nvidia's ray tracing API), it seems that Nvidia will be pushing real-time ray tracing into the mainstream very soon. I'm excited to finally see it all come together. I'm pretty sure that ray tracing will soon be everywhere, and that its quality and ease of use will displace rasterisation based technologies (it's also the reason why I started this blog exactly ten years ago).



Chaos Group (V-Ray) announces real-time path tracer Lavina

The Chaos Group blog features quite an interesting article about the speed increase that can be expected from Nvidia's recently announced RTX cards:

https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

Excerpt:
"Specialized hardware for ray casting has been attempted in the past, but has been largely unsuccessful — partly because the shading and ray casting calculations are usually closely related and having them run on completely different hardware devices is not efficient. Having both processes running inside the same GPU is what makes the RTX architecture interesting. We expect that in the coming years the RTX series of GPUs will have a large impact on rendering and will firmly establish GPU ray tracing as a technique for producing computer generated images both for off-line and real-time rendering."

The article features a new research project, called Lavina, which is essentially doing real-time ray tracing and path tracing (with reflections, refractions and one GI bounce). The video below gets seriously impressive towards the end: 


Chaos Group have always been a frontrunner in real-time photorealistic ray tracing research on GPUs, even as far back as Siggraph 2009 where they showed off the first version of V-Ray RT GPU rendering on CUDA (see http://raytracey.blogspot.com/2009/08/race-for-real-time-ray-tracing.html or https://www.youtube.com/watch?v=DJLCpS107jg). 

I have to admit that I'm both stoked and a bit jealous when I see what Chaos Group has achieved with project Lavina, as it is exactly what I hoped Brigade would turn into one day (Brigade was an early real-time path tracing engine developed by Jacco Bikker in 2010, which I experimented with and blogged about quite extensively; see e.g. http://raytracey.blogspot.com/2012/09/real-time-path-tracing-racing-game.html).

Then again, thanks to noob-friendly ray tracing APIs like Nvidia's RTX and OptiX, soon everyone's grandmother and their dog will be able to write a real-time path tracer, so all is well in the end.

Looking for C++ ray tracing and frontend developers

The Blue Brain Project is a Swiss research project, based in Geneva, which started in 2005 and aims to faithfully simulate a detailed digital version of the mouse brain (as close to biology as is possible with today's supercomputers).

Visualising this simulated brain and its components is a massive challenge. Our goal is to build state-of-the-art visualisation tools to interactively explore extremely large and detailed scientific datasets (over 3 TB). The real-time visualisation is rendered remotely on a supercomputing cluster and can be interacted with on any client device (laptop, tablet or phone) via the web.

To achieve interactive frame rates and high resolution, we are building our tools on top of the industry's highest performance ray tracing libraries (the Ospray library from Intel, which itself is based on Embree, and the OptiX framework for interactive GPU ray tracing from Nvidia). These libraries take advantage of the embarrassingly parallel nature of ray tracing and scale extremely efficiently across multiple cores, devices and nodes in a cluster.

We are currently looking for software engineers to help accelerate the development of these tools, both in the frontend and backend. Our offices are located at the Campus Biotech in the international district in Geneva, Switzerland.


Frontend/fullstack web developer

Your profile

  • 3+ years experience in full stack/frontend engineering
  • 3+ years designing, developing, and scaling modern web applications
  • 3+ years experience with JavaScript, HTML5, CSS3, and other modern web technologies

Main duties and responsibilities

Your responsibility will be to develop new features for our web based interactive 3D viewer "Brayns" (on the frontend) and maintain existing ones, and to drive the development of our new hub application where scientists can manage their data visualisations.


Required skills and experience


  • TypeScript, JavaScript (ES6)
  • React JavaScript framework
  • REST, WebSockets and Remote Procedure Calls
  • RxJS, NodeJS
  • Deep understanding of asynchronous code and the observable pattern in JavaScript
  • Experience using the browser dev tools for debugging, profiling, performance evaluation, etc.
  • Understanding of both the object oriented and functional programming paradigms
  • Knowledge of code chunking strategies
  • Experience writing unit tests using Jest and component tests using Enzyme (or similar technologies)
  • Experience with source versioning systems (Git, Github, etc.)
  • Knowledge of common UI/UX design patterns and ability to implement/use them accordingly
  • Knowledge of the Material Design spec
  • Fluent English in speech and writing
  • Self-motivated and ability to work independently 
  • Team oriented

Nice to have


  • Interest in science (in particular neuroscience)
  • Experience with ThreeJS, WebGL, WebAssembly
  • Basic understanding of C++, Python and Docker
  • UI graphics design skills

Apply


For more info, email samuel.lapere@epfl.ch



C++ interactive graphics developer 

Main duties and responsibilities

Your responsibility will be to develop and research new features for "Brayns", our interactive raytracer for scientific visualisation, and to maintain existing ones.


Required skills and experience
  • 3+ years of experience in C++/Python software development, testing, release, compilation, debugging, and documentation
  • 2+ years of experience with computer graphics (OpenGL, CUDA)
  • Strong knowledge of object-oriented, parallel, and distributed programming
  • Deep understanding of ray tracing and physically based rendering
  • Experience in software quality control and testing
  • Experience using UNIX/Linux operating systems
  • Experience in Linux-based system administration
  • Experience with Continuous Integration systems such as Jenkins
  • Great team player
  • Fluent English in speech and writing

Nice to have

  • Interest in science (in particular neurobiology)
  • Experience in software development on supercomputers and distributed systems.

Apply


For more info, email samuel.lapere@epfl.ch

Nvidia releases OptiX 6.0 with support for hardware accelerated ray tracing

Nvidia recently released a new version of OptiX, which finally adds support for the much hyped RTX cores on the Turing GPUs (RTX 2080, Quadro RTX 8000 etc.), which provide hardware acceleration for ray-BVH and ray-triangle intersections.
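If I remember correctly from the OptiX 6 SDK samples, the RTX execution strategy has to be switched on explicitly before creating a context; something along these lines (hedged sketch, not verified against every driver version):

```cpp
// Opting in to the RTX execution strategy in OptiX 6, which routes BVH
// traversal and triangle intersection to the RT cores on Turing GPUs
// (older cards fall back to a software path).
#include <optix.h>

bool enableRTX()
{
    int rtx = 1;
    if (rtGlobalSetAttribute(RT_GLOBAL_ATTRIBUTE_ENABLE_RTX,
                             sizeof(rtx), &rtx) != RT_SUCCESS)
        return false; // driver/GPU without RTX support
    return true;
}
// Note: triangle meshes also need to go through the (then new)
// GeometryTriangles API to benefit from hardware triangle intersection.
```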

First results are quite promising. One user reports a speedup between 4x and 5x when using the RTX cores (compared to not using them). Another interesting revelation is that the speedup gets larger with higher scene complexity (geometry-wise, not shading-wise): 


As a consequence, the Turing cards can render up to 10x faster in some scenes than the previous generation of GeForce cards, i.e. Pascal (GTX 1080), which is in fact two generations old if you take the Volta architecture into account (Volta was already a huge step up from Pascal in terms of rendering speed, so for Nvidia's sake it's better to compare Turing with Pascal).

This post will be updated with more OptiX benchmark numbers as they become available.


Looking for fullstack React developers

The Blue Brain Project is a research project based in Geneva, Switzerland with an ambitious goal: to push computational neuroscience to another level by simulating a full digitally reconstructed biological brain.

We are currently looking for experienced fullstack React developers to help build a web application for real-time raytraced neuroscientific data (which is rendered on a remote supercomputer).


The ideal candidate's profile:
  • 2+ years experience in full stack/frontend engineering
  • 2+ years designing, developing, and scaling modern web applications
  • 2+ years experience with React, JavaScript, HTML5, CSS3, and other modern web technologies

Technical requirements:
  • Deep understanding of asynchronous code and the observable pattern in JavaScript
  • Experience using the browser's dev tools for debugging, profiling, performance evaluation, etc.
  • Knowledge of code chunking strategies
  • Experience writing unit tests and component tests
  • Experience with version control systems (Git, Github, etc.)
  • Continuous integration and deployment using Jenkins
  • Knowledge of common UI/UX design patterns and ability to use them accordingly
  • Knowledge of Google's Material Design spec

Required skills:
  • TypeScript, JavaScript (ES6), React, Redux, NodeJS
  • REST, WebSocket API

Nice to have skills:
  • ThreeJS, D3, Python, C++
  • Docker, OpenShift, CI/CD, Webpack, Bash


Unreal Engine now has real-time ray tracing and a path tracer

Epic recently released the stable version of Unreal Engine 4.22, which comes with real-time ray tracing and a fully fledged path tracer for ground truth images.

https://www.unrealengine.com/en-US/blog/real-time-ray-tracing-new-on-set-tools-unreal-engine-4-22

The path tracer is explained in more detail on this page: https://docs.unrealengine.com/en-us/Engine/Rendering/RayTracing/PathTracer

The following video is an incredible example of an architectural visualisation rendered with Unreal's real-time raytraced reflections and refractions:



It's fair to say that real-time photorealism on consumer graphics cards has finally arrived. In the last few years, fast and performant path tracers have become available for free (e.g. Embree, OptiX, RadeonRays, Cycles) or virtually for free (e.g. Arnold, Renderman). Thanks to advances in noise reduction algorithms, rendering speeds have gone from multiple hours to a few seconds per frame. The rate at which game engines, with Unreal at the forefront, are taking over the offline rendering world is staggering. Offline rendering for architecture will most probably disappear in the near future, replaced by game engines with real-time ray tracing features.

LightTracer, the first WebGL path tracer for photorealistic rendering of complex scenes in the browser

A couple of days ago, Denis Bogolepov sent me a link to LightTracer, a browser based path tracer which he and Danila Ulyanov have developed. I'm quite impressed and excited about LightTracer, as it is the first WebGL based path tracer that can render relatively complex scenes (including textures), which is something I've been waiting to see happen for a while (I tried something similar a few years ago, but WebGL still had too many limitations back then).


What makes LightTracer particularly interesting is that it has the potential to bring photoreal interactive 3D to the web, paving the way for online e-commerce stores offering their clients a fully photorealistic preview of an article (be it jewellery,  cars, wristwatches, running shoes or handbags).

Up until now, online shops have tried several ways to offer their clients "photorealistic" previews with the ability to configure the product's materials and colours: precomputed 360 degree videos, interactive 3D using WebGL rasterisation, and even server-side cloud based ray tracing streamed to the browser (e.g. Clara.io and Lagoa Render), which requires expensive servers and is tricky to scale.

LightTracer's WebGL ray tracing offers a number of unique selling points:

- ease of use: it's entirely browser based, so nothing needs to be downloaded or installed
- intuitive: since ray tracing follows the physics of light, lights and materials behave just like in the real world, allowing non-rendering-experts to predictably light their scenes
- photorealistic lighting and materials: Monte Carlo path tracing solves the full rendering equation (shown below) without taking shortcuts, resulting in truly photoreal scenes
- speed: LightTracer's ray tracing is accelerated by the GPU via WebGL, offering very fast previews. This should get even faster once WebGL supports hardware accelerated ray tracing via Nvidia's RTX technology (and whatever AMD has in the works)
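For reference, the rendering equation mentioned in the third point, which a path tracer estimates with Monte Carlo sampling (standard notation: outgoing radiance at a point x in direction omega_o):

```latex
% Outgoing radiance = emitted radiance + hemisphere integral of
% BRDF-weighted incoming radiance.
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
      (\omega_i \cdot n)\, \mathrm{d}\omega_i
```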







LightTracer is still missing a few features, such as an easy-to-use subsurface scattering shader for realistic skin, hair and waxy materials, and there are plenty of optimisations possible (scene loading speed, UI improvements and presets, etc.), but I think this is the start of something big.

LightHouse 2, the new OptiX based real-time GPU path tracing framework, released as open source

Just before Siggraph, Jacco Bikker released Lighthouse 2, his new real-time path tracing framework, as open source on GitHub:


If you haven't heard of Jacco Bikker before, he is the original author of the Brigade engine, which pioneered the use of real-time path tracing in games (way before Nvidia got interested) and was released as open source in 2010 (see https://raytracey.blogspot.com/2010/04/real-time-pathtracing-demo-shows-future.html). Brigade was a real trailblazer and showed a glimpse of how photorealistic games could look in a not so distant future. Brigade 2, its successor (also developed by Jacco Bikker), was fully GPU based, which pushed things to another level. I used to work a lot with Brigade and designed many tech demos with the engine for this blog (e.g. https://raytracey.blogspot.com/2013/03/real-time-path-traced-carmageddon.html and https://raytracey.blogspot.com/2013/10/brigade-3.html), so I was quite thrilled to read that Jacco had released a new path tracing engine which fully exploits OptiX and the new hardware accelerated ray tracing cores on Nvidia's Turing GPUs.

Lighthouse 2 has a few unique features:
  • Lighthouse uses Nvidia's OptiX framework, which provides state-of-the-art methods to build and traverse BVH acceleration structures, including a built-in "top level BVH" which allows for real-time animated scenes with thousands of individual meshes, practically for free. 
  • There are 3 manually optimised OptiX render cores: 
    • OptiX 5 and OptiX Prime for Maxwell and Pascal
    • the new OptiX 7 for Volta and Turing
      • OptiX 7 is much more low level than previous OptiX versions, giving the developer more control, less overhead and a substantial performance boost on Turing GPUs compared to OptiX 5/6 (about 35%)
      • A Turing GPU running Lighthouse 2 with OptiX 7 (with RTX support) is about 6x faster than a Pascal GPU running OptiX 5 for path tracing (you have to try it to believe it :-) )
  • Lighthouse incorporates the new "blue noise" sampling method (https://eheitzresearch.wordpress.com/762-2/), which creates cleaner/less noisy looking images at low sample rates
  • Lighthouse manages a full game scene graph with instances, camera, lights and materials, including the Disney BRDF (principled shader), whose parameters can be edited on-the-fly through a lightweight GUI
More in the Lighthouse 2 wiki: https://github.com/jbikker/lighthouse2/wiki
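To illustrate how bare-bones the OptiX 7 style is compared to its predecessors, here is a minimal sketch of creating a device context; everything else (modules, pipelines, the shader binding table) becomes the application's responsibility:

```cpp
// OptiX 7 has no context objects with named variables like OptiX 5/6;
// it is a thin layer over CUDA that the application drives explicitly.
#include <cuda_runtime.h>
#include <optix.h>
#include <optix_function_table_definition.h> // exactly one translation unit
#include <optix_stubs.h>

OptixDeviceContext createContext()
{
    cudaFree(0);  // initialise the CUDA runtime
    optixInit();  // load the OptiX entry points from the driver

    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    // Passing 0 means "use the current CUDA context".
    optixDeviceContextCreate(0, &options, &context);
    return context;
}
```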

Some screenshots (rendered with Lighthouse's OptiX 7 core on an RTX 2060):

1024 real-time ray traced dragons
2025 lego cars, spinning in real-time
Lighthouse 2 material test scene
A real-time raytraced Shelby Cobra

An old video of Sponza rendered with Lighthouse, showing off the real-time denoiser:



Lighthouse is still a work in progress, but given the fact that it handles real-time animation, offers state-of-the-art performance and is licensed under Apache 2.0, it may soon end up in professional 3D tools like Blender for fast, photorealistic previews of real-time animations. Next-gen game engine developers should also keep an eye on this.

Stay tuned for more™!

P.S. I may release some executable demos for people who can't compile Lighthouse on their machines.

Brand new GPU path tracing research from Nvidia and AMD

A very interesting paper called "Gradient-domain path tracing" was just published by Nvidia researchers, coming from the same incredibly talented Helsinki university research group as Timo Aila, Samuli Laine and Tero Karras, who developed highly optimized open source CUDA ray tracing kernels for Tesla, Fermi and Kepler GPUs. It describes a new technique, derived from the ideas in the earlier paper "Gradient-domain Metropolis Light Transport", which drastically reduces noise without blurring details.
Abstract 
We introduce gradient-domain rendering for Monte Carlo image synthesis. While previous gradient-domain Metropolis Light Transport sought to distribute more samples in areas of high gradients, we show, in contrast, that estimating image gradients is also possible using standard (non-Metropolis) Monte Carlo algorithms, and furthermore, that even without changing the sample distribution, this often leads to significant error reduction. This broadens the applicability of gradient rendering considerably. To gain insight into the conditions under which gradient-domain sampling is beneficial, we present a frequency analysis that compares Monte Carlo sampling of gradients followed by Poisson reconstruction to traditional Monte Carlo sampling. Finally, we describe Gradient-Domain Path Tracing (G-PT), a relatively simple modification of the standard path tracing algorithm that can yield far superior results. 
This picture shows a noise comparison between gradient-domain path tracing (G-PT) and regular path tracing (PT). Computing a sample with the new technique is about 2.5x slower, but path tracing noise clears up much faster, far outweighing the computational overhead:
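As I understand it, the key step is that the renderer samples not only the image but also its finite-difference gradients, and then combines the two in a screened Poisson reconstruction; roughly something like this (sketch; I_p is the sampled primal image, I_g the sampled gradients, alpha a weighting constant):

```latex
% Screened Poisson reconstruction (sketch): find the image whose gradients
% match the sampled gradients I_g while staying close to the sampled
% primal image I_p.
\hat{I} = \arg\min_{I}\;
    \alpha^2 \left\lVert I - I_p \right\rVert^2
    + \left\lVert \nabla I - I_g \right\rVert^2
```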

More images and details of the technique can be found at https://mediatech.aalto.fi/publications/graphics/GPT/kettunen2015siggraph_paper.pdf

Related to the previous post about using real-time ray tracing for augmented reality, a brand new Nvidia paper titled "Filtering Environment Illumination for Interactive Physically-Based Rendering in Mixed Reality" demonstrates the feasibility of real-time Monte Carlo path tracing for augmented or mixed reality: 
Abstract 
Physically correct rendering of environment illumination has been a long-standing challenge in interactive graphics, since Monte-Carlo ray-tracing requires thousands of rays per pixel. We propose accurate filtering of a noisy Monte-Carlo image using Fourier analysis. Our novel analysis extends previous works by showing that the shape of illumination spectra is not always a line or wedge, as in previous approximations, but rather an ellipsoid. Our primary contribution is an axis-aligned filtering scheme that preserves the frequency content of the illumination. We also propose a novel application of our technique to mixed reality scenes, in which virtual objects are inserted into a real video stream so as to become indistinguishable from the real objects. The virtual objects must be shaded with the real lighting conditions, and the mutual illumination between real and virtual objects must also be determined. For this, we demonstrate a novel two-mode path tracing approach that allows ray-tracing a scene with image-based real geometry and mesh-based virtual geometry. Finally, we are able to de-noise a sparsely sampled image and render physically correct mixed reality scenes at over 5 fps on the GPU.

While Nvidia is certainly at the forefront of GPU path tracing research (with CUDA), AMD has recently begun venturing into GPU rendering as well and has previewed its own OpenCL based path tracer at the Siggraph 2014 conference. The path tracer is developed by Takahiro Harada, who is a bit of an OpenCL rendering genius. He recently published an article in GPU Pro 6 about rendering on-the-fly vector displacement mapping with OpenCL based GPU path tracing. Vector displacement mapping differs from regular displacement mapping in that it allows the extrusion of overlapping geometry (e.g. a mushroom), which is not possible with the heightfield-like displacement provided by traditional displacement mapping (the Renderman vector displacement documentation explains this nicely with pictures).

Slides from http://www.slideshare.net/takahiroharada/introduction-to-monte-carlo-ray-tracing-opencl-implementation-cedec-2014:


This video shows off the new technique, rendering in near-realtime on the GPU:

There's more info on Takahiro's personal page, along with some really interesting slideshow presentations about OpenCL based ray tracing. This guy also developed a new technique called "Foveated real-time ray tracing for virtual reality devices" (paper), progressively focusing more samples on the parts in the image where the eyes are looking (determined by eye/pupil tracking), "reducing the number of pixels to shade by 1/20, achieving 75 fps while preserving the same visual quality" (source: http://research.lighttransport.com/foveated-real-time-ray-tracing-for-virtual-reality-headset/asset/abstract.pdf). Foveated rendering takes advantage of the fact that the human retina is most sensitive in its center (the "fovea", which contains densely packed colour sensitive cones) where objects' contours and colours are sharply observed, while the periphery of the retina consists mostly of sparsely distributed, colour insensitive rods, which cause objects in the periphery of the visual field to be represented by the brain as blurry blobs (although we do not consciously perceive it like that, thinking that our entire visual field is sharply defined and has colour).
This graph shows that the resolution of the retina is highest at the fovea and drops off quickly with increasing distance from the center. This is due to the fact that the fovea contains only cones which each send individual inputs over the optic fibre (maximizing resolution), while the inputs from several rods in the periphery of the retina are merged by the retinal nerve cells before reaching the optic nerve (image from www.telescope-optics.net/eye.htm):

Foveated rendering has the potential to make high quality real-time raytraced imagery feasible on VR headsets that support eye tracking, like the recently Kickstarted FOVE VR headset. Using ray tracing for foveated rendering is also much more efficient than using rasterisation: ray tracing allows for sparse loading and sampling of the scene geometry in the periphery of the visual field, while rasterisation needs to load and project all geometry in the viewplane, whether it's sampled or not.
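A toy sketch of the sample allocation idea, mimicking the retina's falling resolution (all names and constants here are made up for illustration, not from Harada's implementation):

```cpp
// Samples per pixel fall off with angular distance from the tracked
// gaze point: full quality in the fovea, down to 1 spp in the periphery.
#include <algorithm>
#include <cmath>

int samplesForPixel(float px, float py,  // pixel position
                    float gx, float gy,  // gaze position (from the eye tracker)
                    float pixelsPerDegree)
{
    const int   maxSpp      = 16;    // full quality in the fovea
    const float foveaRadius = 2.5f;  // degrees of sharp central vision
    const float falloff     = 0.15f; // how fast quality drops outside it

    float distDeg = std::hypot(px - gx, py - gy) / pixelsPerDegree;
    if (distDeg <= foveaRadius) return maxSpp;
    float scale = std::exp(-falloff * (distDeg - foveaRadius));
    return std::max(1, static_cast<int>(maxSpp * scale));
}
```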


Slides from www.slideshare.net/takahiroharada/foveated-ray-tracing-for-vr-on-multiple-gpus

This video shows a working prototype of the FOVE VR headset with a tracking beam to control which parts of the scene are in focus, so this type of real-time ray traced (or path traced) foveated rendering should be possible right now, (which is pretty exciting):


It's good to finally see AMD stepping up its OpenCL game with its own GPU path tracer. Another example of this greater engagement is that AMD recently released a large patch fixing the OpenCL performance of Blender's Cycles renderer on AMD cards. Hopefully it will put some pressure on Nvidia and make GPU rendering as exciting as in 2010, when the release of the Fermi GPU, a GPGPU computing monster, effectively doubled CUDA ray tracing performance compared to the previous generation.

Rendering stuff aside, today is a very important day: for the first time in their 115-year-long existence, the Buffalo's of AA Gent, my hometown's football team, have won the title in the Belgian Premier League, giving them a direct ticket to the Champions League. This calls for a proper celebration!

FireRays, AMD's OpenCL based high performance ray tracing renderer

Pretty big news for GPU rendering: about 6 years after Nvidia released the source code of their high performance GPU ray tracing kernels, and 4 years after Intel released Embree (high performance CPU ray tracing kernels), last week at Siggraph AMD finally released their own GPU rendering framework: FireRays, an OpenCL based ray tracing SDK, first shown in prototype form at Siggraph 2014 by Takahiro Harada (who also conducted research into foveated ray tracing for VR):



The FireRays SDK can be downloaded from the AMD Developer site: http://developer.amd.com/tools-and-sdks/graphics-development/firepro-sdk/

More details can be found at http://developer.amd.com/tools-and-sdks/graphics-development/firepro-sdk/firerays-sdk/. The acceleration structure is a BVH with spatial splits and the option to build the BVH with or without the surface area heuristic (SAH). For instances and motion blur, a two level BVH is used, which enables very efficient object transformations (translation, rotation, scaling) at virtually no cost.
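The two level BVH trick is easy to see in a sketch: instead of transforming the geometry and rebuilding its BVH, the ray is transformed into each instance's object space (types and helper names below are illustrative, not FireRays API):

```cpp
// Why a two-level BVH makes rigid transforms nearly free: the top-level
// BVH stores instances, each with a world-to-object transform over a
// shared bottom-level BVH. Moving, rotating or scaling an instance only
// updates a 4x4 matrix; nothing is rebuilt.
// Assumed helpers (not shown): Mat4, Ray, Hit, BVH, transformPoint(),
// transformVector(), intersectBVH().
struct Instance {
    Mat4       worldToObject; // inverse of the instance transform
    const BVH* blas;          // shared bottom-level BVH over the mesh
};

bool intersectInstance(const Instance& inst, const Ray& worldRay, Hit& hit)
{
    // Transform the ray rather than the geometry.
    Ray objectRay;
    objectRay.origin    = transformPoint(inst.worldToObject, worldRay.origin);
    objectRay.direction = transformVector(inst.worldToObject, worldRay.direction);
    return intersectBVH(inst.blas, objectRay, hit);
}
```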

AMD's own graphs show that their OpenCL renderer is roughly 10x faster running on 2 D700 FirePro GPUs than Embree running on the CPU:


There are already a few OpenCL based path tracers available today, such as Blender's Cycles engine and LuxRays (even V-Ray RT GPU was OpenCL based at some point), but none of them have been able to challenge their CUDA based GPU rendering brethren. AMD's OpenCL dev tools have historically lagged behind Nvidia's CUDA SDK tools, which made compiling large and complex OpenCL kernels a nightmare (splitting the megakernel into smaller parts was the only option). Hopefully the OpenCL developer tools have gotten a makeover as well with the release of this SDK, but at least I'm happy to see AMD taking GPU ray tracing seriously. This move could truly bring superfast GPU rendering to the masses, and with the two big GPU vendors in the ray tracing race, there will hopefully be more ray tracing specific hardware improvements in future GPU architectures.

(thanks heaps to CPFUUU for pointing me to this)

UPDATE: Alex Evans from Media Molecule gave a great talk at Siggraph 2015 about his research into raymarching signed distance fields for Dreams. Alex Evans is currently probably the biggest innovator in real-time game rendering since John Carmack (especially since Carmack spends all his time on VR now, which is a real shame). Alex's presentation can be downloaded from http://www.mediamolecule.com/blog/article/siggraph_2015 and is well worth reading. It sums up a bunch of approaches to rendering voxels, signed distance fields and global illumination in real-time that ultimately were not as successful as hoped, but they came very close to real-time on the PS4 (and research is still ongoing).

For people interested in the real-world physics of light bouncing, there was also this very impressive video from Karoly Zsolnai about ultra high speed femto-photography cameras able to shoot images at the speed of light, demonstrating how light propagates and is transported as an electromagnetic wave through a scene, illuminating objects a fraction of a nanosecond before their mirror image becomes visible:



