Quasi-random, more or less unbiased blog about real-time photorealistic GPU rendering



    This looks incredible, raymarching with GI, glossy road, glossy car: https://www.shadertoy.com/view/ldsGWB

    The future of real-time graphics is noisy!
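    The shader itself is GLSL on Shadertoy, but the core trick, sphere tracing a signed distance field, is easy to show in any language. Here is a minimal C++ sketch of the idea (the sceneSDF below is a made-up placeholder, not the shader's actual scene):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Hypothetical signed distance function: returns the distance from
// point p to the nearest surface (here, a unit sphere at the origin).
static float sceneSDF(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: march along the ray by the distance to the nearest
// surface, which can never overshoot; stop when close enough (hit)
// or too far along the ray (miss).
static bool raymarch(Vec3 origin, Vec3 dir, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-4f) { tHit = t; return true; } // close enough: hit
        t += d;                                   // safe step size
        if (t > 100.0f) break;                    // past the far plane
    }
    return false;
}
```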


    As much as we despise Autodesk and would rather see the entire company go down in a pool of radioactive, fiery plasma (the eyebrow-scorching kind, that is), the fact of the matter is that a sizeable chunk of the 3D artists out there remains loyal to 3ds Max for whatever reason. Due to this shocking fact, we're looking for an outstanding 3ds Max plugin developer with the skills to integrate our technology into 3ds Max (this role is in addition to the two roles advertised in the previous post: the graphics developer and the full-stack developer).

    What we're looking for:

    - 2+ years of experience developing plug-ins for 3ds Max
    - Solid understanding of 3D artist workflows
    - Experience with rendering (this is a rendering plug-in) 
    - Knowledge of real-time data streaming protocols and technologies (WebSocket etc.) desirable
    - Keen to keep abreast of the latest cutting-edge technologies in the fields of graphics and rendering

    This is a remote contracting role. Send your application to sam.lapere@live.be




    2018 will be bookmarked as a turning point for Monte Carlo rendering, thanks to the wide availability of fast, high quality denoising algorithms, which can be attributed in large part to Nvidia Research: Nvidia just released OptiX 5.0 to developers, which contains a new GPU accelerated post-processing denoising filter.



    The new denoiser was trained with machine learning on a database of thousands of rendered images and runs pretty much in real-time. The OptiX 5.0 SDK contains a sample program of a simple path tracer with the denoiser running on top (as a post-process). The results are nothing short of stunning: noise disappears completely, even difficult indirectly lit surfaces like refractive (glass) objects and shadowy areas clear up remarkably fast, and the image progressively gets closer to the ground truth.

    The OptiX denoiser works great for glass and dark, indirectly lit areas
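    Integrating the denoiser into an existing OptiX application only takes a few calls. Below is a minimal C++ sketch following the SDK's denoiser sample; buffer creation and the path tracer itself are omitted, and the exact names should be checked against the OptiX 5.0 headers:

```cpp
#include <optixu/optixpp_namespace.h>

// Minimal sketch of attaching the OptiX 5.0 "DLDenoiser" built-in
// post-processing stage, following the SDK's denoiser sample.
// inputBuffer and outputBuffer are assumed to be RGBA32F buffers
// created elsewhere; entry point 0 is assumed to be the path tracer.
void renderAndDenoise(optix::Context context,
                      optix::Buffer inputBuffer, optix::Buffer outputBuffer,
                      unsigned width, unsigned height)
{
    // Create the built-in deep learning denoiser stage and wire it up.
    optix::PostprocessingStage denoiser =
        context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiser->declareVariable("input_buffer")->setBuffer(inputBuffer);
    denoiser->declareVariable("output_buffer")->setBuffer(outputBuffer);

    // Launch the path tracer, then run the denoiser as a post-process.
    optix::CommandList commands = context->createCommandList();
    commands->appendLaunch(0, width, height);
    commands->appendPostprocessingStage(denoiser, width, height);
    commands->finalize();
    commands->execute();
}
```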

    While the denoiser generally does a fantastic job, it is not yet optimised to deal with areas that converge quickly, and in some instances it overblurs and fails to preserve texture detail, as shown in the screen grab below (perhaps this can be addressed with more training data):

    Overblurring of textures

    The denoiser is provided free for commercial use (royalty-free), but requires an Nvidia GPU. It works with both CPU and GPU rendering engines and is already implemented in Iray (Nvidia's own GPU renderer), V-Ray (by Chaos Group), Redshift Render and Clarisse (a CPU based renderer for VFX by Isotropix).

    Some videos of the denoiser in action in OptiX, V-Ray, Redshift and Clarisse:

    OptiX 5.0: youtu.be/l-5NVNgT70U



    Iray: youtu.be/yPJaWvxnYrg

    This video provides a high level explanation of the deep learning algorithm behind the OptiX/Iray denoiser, based on the Nvidia research paper "Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder".



    V-Ray 4.0: youtu.be/nvA4GQAPiTc




    Redshift: youtu.be/ofcCQdIZAd8 (and a post from Redshift's Panos explaining the implementation in Redshift)


    ClarisseFX: youtu.be/elWx5d7c_DI



    Other renderers like Cycles and Corona already have their own built-in denoisers, but will probably benefit from the OptiX denoiser as well (especially Corona, which was acquired by Chaos Group in September 2017).

    The OptiX team has indicated that they are researching an optimised version of this filter for use in interactive to real-time photorealistic rendering, which might find its way into game engines. Real-time noise-free photorealistic rendering is tantalisingly close.

    Technical 3D artist wanted

    The Blue Brain Project is currently looking for an exceptional technical 3D artist to join the scientific visualisation team. Blue Brain is a Swiss initiative based at the Biotech Campus in Geneva (part of the Ecole Polytechnique Fédérale de Lausanne), which aims to digitally reconstruct and simulate the mammalian brain in a supercomputer for the purpose of advancing knowledge in brain science and medicine and applying this knowledge to engineering fields (such as robotics).

    Desired skills and experience:
    • Good knowledge of scripting in Python to automate rendering tasks
    • Experience with Blender, Cycles and node based shaders
    • Knowledge of biomedical sciences (specifically neuroscience) is a plus
    • Passionate about explaining complex scientific ideas in an easy to understand way 
    • Good command of written and spoken English
    • Desire to live in Switzerland near the Alps and be part of an international environment

    More requirements can be found at https://emploi.epfl.ch/page-141270-en.html

    If you have any questions about this position, you can email me at samuel.lapere@epfl.ch


    Jacco Bikker just announced a new GPU based path tracer on the ompf forum. There's also a demo version available that you can grab from this post.



    Just want to share a couple of real-time rendered videos made with the upcoming OctaneRender 1.5. The scene used in the videos is the same one that was used for the Brigade 3 launch videos. The striking thing about Octane is that you can navigate through this scene in real-time while having an instant final quality preview image. It converges in just a few seconds to a noise free image, even with camera motion blur enabled. It's both baffling and extremely fun. 

    The scene geometry contains 3.4 million triangles without the Lamborghini model and 7.4 million triangles with it (the Lamborghini alone has over 4 million triangles). All videos below were rendered in real-time on 4 GTX 680 GPUs. Because of the 1080p video capture, the framerate you see in the videos is less than half the framerate you get in real life; in person it's incredibly smooth.
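    The instant final quality preview relies on progressive accumulation: each frame adds one more path traced sample per pixel, the display shows the running average, and the accumulator resets whenever the camera moves. A generic C++ sketch of the idea (not Octane code):

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// Progressive accumulation: average a new 1 sample-per-pixel pass into
// a running sum. After N frames each pixel holds the mean of N samples,
// so the noise shrinks roughly as 1/sqrt(N).
class Accumulator {
public:
    Accumulator(int w, int h)
        : frame(0), sum(static_cast<size_t>(w) * h, Color{0, 0, 0}) {}

    // Call whenever the camera moves: restart convergence from scratch.
    void reset() {
        std::fill(sum.begin(), sum.end(), Color{0, 0, 0});
        frame = 0;
    }

    // newSample holds this frame's noisy 1 spp render; display receives
    // the running average that converges to the final image.
    void accumulate(const std::vector<Color>& newSample,
                    std::vector<Color>& display) {
        ++frame;
        const float inv = 1.0f / static_cast<float>(frame);
        for (size_t i = 0; i < sum.size(); ++i) {
            sum[i].r += newSample[i].r;
            sum[i].g += newSample[i].g;
            sum[i].b += newSample[i].b;
            display[i] = Color{sum[i].r * inv, sum[i].g * inv, sum[i].b * inv};
        }
    }

private:
    int frame;
    std::vector<Color> sum;
};
```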



    There are a bunch more real-time rendered videos and screenshots of the upcoming OctaneRender 1.5 in this thread on the Octane forum (e.g. on page 7).


    Recently I was blown away by a video posted by niuq.cam on the Octane forum, called "Lamborghini Test Drive", a tribute to celebrate the 50th anniversary of Lamborghini. The realism you can achieve with Octane is just batshit crazy, as evidenced by the video.

    Try to spot the 7 differences with reality:


    Some specs:

    - the scene is 100% 3D, all rendered with Octane
    - rendered on 4x GTX Titan
    - render resolution: 1280 x 538, Panavision format (2.39:1)
    - average render time per frame: from 1 minute for the large shots with the cars to 15 minutes for the helmet shots at night
    - over 5,000,000 triangles for both cars
    - instancing for the landscape



    The Blue Brain Project is a Switzerland based computational neuroscience project which aims to demystify how the brain works by simulating a biologically accurate brain using a state-of-the-art supercomputer. The simulation runs at multiple scales and goes from the whole brain level down to the tiny molecules which transport signals from one cell to another (neurotransmitters). The knowledge gathered from such an ultra-detailed simulation can be applied to simulating drug therapies for neurological diseases (computational medicine) and developing self-thinking machines (computational intelligence).

    To visualize these detailed brain simulations, we have been working on a high performance rendering engine, aptly named "Brayns". Brayns uses ray tracing to render massively complex scenes comprised of trillions of molecules interacting in real-time on a supercomputer. The core ray tracing intersection kernels in Brayns are based on Intel's Embree and Ospray high performance ray tracing libraries, which are optimised to render on recent Intel CPUs (such as the Skylake architecture). These CPUs are basically a GPU in CPU disguise (their wide vector units descend from Intel's defunct Larrabee GPU project), but they can render massive scientific scenes in real-time as they can address over a terabyte of RAM. What makes these CPUs ultrafast at ray tracing is a neat feature called the AVX-512 extensions, which can run several ray tracing calculations in parallel (in combination with ispc), resulting in blazingly fast CPU ray tracing performance that rivals a GPU and even beats it when the scene becomes very complex.
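    Brayns' own kernels aren't shown in this post, but the Embree layer they build on is refreshingly simple to use. A self-contained C++ sketch with the Embree 3 API, tracing a single ray against a single triangle (Embree vectorises the BVH build and traversal internally):

```cpp
#include <embree3/rtcore.h>
#include <cstdio>

// Minimal Embree 3 usage: build a scene with one triangle, trace one ray.
// Not Brayns code; just the library layer it builds on.
int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* verts = (float*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    verts[0] = 0; verts[1] = 0; verts[2] = -1;   // v0
    verts[3] = 1; verts[4] = 0; verts[5] = -1;   // v1
    verts[6] = 0; verts[7] = 1; verts[8] = -1;   // v2
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);  // builds the BVH

    RTCRayHit rh = {};
    rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = 0.0f;
    rh.ray.dir_z = -1.0f;                 // shoot straight down -z
    rh.ray.tnear = 0.0f; rh.ray.tfar = 1e30f;
    rh.ray.mask = 0xFFFFFFFFu;            // don't cull by ray mask
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    rtcIntersect1(scene, &ctx, &rh);
    printf("hit: %s, t = %f\n",
           rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```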

    Besides using Intel's superfast ray tracing kernels, Brayns has lots of custom code optimisations which allow it to render a fully path traced scene in real-time. These are some of the features of Brayns:
    • hand optimised BVH traversal and geometry intersection kernels
    • real-time path traced diffuse global illumination
    • OptiX real-time AI accelerated denoising
    • HDR environment map lighting
    • explicit direct lighting (next event estimation; sketched in code after this list)
    • quasi-Monte Carlo sampling
    • volume rendering
    • procedural geometry
    • signed distance fields raymarching 
    • instancing, making it possible to visualize billions of dynamic molecules in real-time
    • stereoscopic omnidirectional 3D rendering
    • efficient loading and rendering of multi-terabyte datasets
    • linear scaling across many nodes
    • optimised for real-time distributed rendering on a cluster with high speed network interconnection
    • ultra-low latency streaming to high resolution display walls and VR caves
    • modular architecture which makes it ideal for experimenting with new rendering techniques
    • optional noise and gluten free rendering
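    As an illustration of one item above, next event estimation samples a light source explicitly at every bounce instead of waiting for a random path to stumble into it. A self-contained C++ sketch for a single point light and a Lambertian surface (placeholder code, not Brayns internals):

```cpp
#include <algorithm>
#include <cmath>

// Self-contained sketch of next event estimation (explicit direct
// lighting) for one point light and a Lambertian BRDF.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline float length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Would trace a shadow ray through the scene's BVH; stubbed here.
static bool occluded(Vec3 /*from*/, Vec3 /*to*/) { return false; }

// Direct light at a shading point: sample the light explicitly instead
// of hoping a random bounce hits it. For a point light the pdf is a
// delta, so the estimator reduces to intensity * falloff * BRDF * cos.
Vec3 directLighting(Vec3 point, Vec3 normal, Vec3 albedo,
                    Vec3 lightPos, Vec3 lightIntensity)
{
    if (occluded(point, lightPos)) return {0, 0, 0};
    Vec3 toLight = lightPos - point;
    float dist = length(toLight);
    Vec3 wi = toLight * (1.0f / dist);
    float cosTheta = std::max(0.0f, dot(normal, wi));
    const float invPi = 0.31830988f;           // Lambertian BRDF = albedo/pi
    float falloff = 1.0f / (dist * dist);      // inverse-square falloff
    float k = invPi * cosTheta * falloff;
    return {albedo.x * lightIntensity.x * k,
            albedo.y * lightIntensity.y * k,
            albedo.z * lightIntensity.z * k};
}
```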
    Below is a screenshot of an early real-time path tracing test on a 40 megapixel curved screen powered by seven 4K projectors: 

    Real-time path traced scene on an 8 m by 3 m (25 by 10 ft) semi-cylindrical display,
    powered by seven 4K projectors (40 megapixels in total)

    Seeing this scene projected lifesize in photorealistic detail on a 180 degree stereoscopic 3D screen and interacting with it in real-time is quite a breathtaking experience. Having 3D molecules zooming past the observer will be the next milestone. I haven't felt this thrilled about path tracing in quite some time.



    Technical/Medical/Scientific 3D artists wanted 


    We are currently looking for technical 3D artists to join our team to produce immersive neuroscientific 3D content. If this sounds interesting to you, get in touch by emailing me at sam.lapere@live.be


    Before continuing the tutorial series, let's have a look at a simple but effective way to speed up path tracing. The idea is straightforward: like an octree, a bounding volume hierarchy (BVH) can double as both a ray tracing acceleration structure and a way to represent the scene geometry at multiple levels of detail (a multiresolution geometry representation). Specifically, the axis-aligned bounding boxes of the BVH nodes at different depths in the tree serve as a more or less crude approximation of the geometry.

    Low detail geometry enables much faster ray intersections and can be useful when lighting effects don't require full geometric accuracy, for example motion blur, glossy (blurry) reflections, soft shadows, ambient occlusion and indirect illumination. Especially when geometry is not directly visible in the view frustum or in specular (mirror-like) reflections, using geometry proxies can provide a significant speedup (depending on the error tolerance) at a negligible, almost imperceptible loss in quality.

    Using the boxes as proxy geometry has two performance benefits:

    - triangle intersection is skipped entirely
    - rays only need ray/box intersections, which also reduces thread divergence on the GPU

    The renderer determines the appropriate level of detail based on the distance from the camera (for primary rays) or the distance from the ray origin (for secondary rays). The following screenshots show the bounding boxes of the BVH nodes from depth 1 (depth 0 being the root node) up to depth 12:
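    In code, the level-of-detail cutoff amounts to one extra test during BVH traversal: if a node's bounding box is small enough relative to the ray distance, return the box intersection as a proxy hit instead of descending further. A hypothetical C++ sketch (node layout and thresholds are illustrative, not the actual implementation):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of BVH traversal with a level-of-detail cutoff, as described
// above. Node layout and helpers are illustrative placeholders.
struct AABB { float min[3], max[3]; };
struct Node { AABB box; int left, right, triCount; };  // leaf if triCount > 0

// Standard slab test; returns the entry distance in tNear on a hit.
static bool hitAABB(const AABB& b, const float o[3], const float invD[3],
                    float& tNear) {
    float t0 = 0.0f, t1 = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float lo = (b.min[a] - o[a]) * invD[a];
        float hi = (b.max[a] - o[a]) * invD[a];
        if (lo > hi) std::swap(lo, hi);
        t0 = std::max(t0, lo); t1 = std::min(t1, hi);
        if (t0 > t1) return false;
    }
    tNear = t0;
    return true;
}

static float extent(const AABB& b) {  // diagonal length of the box
    float d0 = b.max[0] - b.min[0], d1 = b.max[1] - b.min[1],
          d2 = b.max[2] - b.min[2];
    return std::sqrt(d0 * d0 + d1 * d1 + d2 * d2);
}

// If a node's apparent size (extent / distance) drops below lodEps,
// treat its bounding box as the geometry and stop descending. Secondary
// rays can pass a larger lodEps than primary rays.
bool traverse(const Node* nodes, int idx, const float o[3],
              const float invD[3], float lodEps, float& tHit)
{
    const Node& n = nodes[idx];
    float tNear;
    if (!hitAABB(n.box, o, invD, tNear)) return false;
    if (extent(n.box) < lodEps * tNear) {  // small enough: proxy hit
        tHit = tNear;
        return true;
    }
    if (n.triCount > 0) { /* intersect the leaf's triangles here */
        tHit = tNear;
        return true;
    }
    float tL = 1e30f, tR = 1e30f;
    bool hL = traverse(nodes, n.left, o, invD, lodEps, tL);
    bool hR = traverse(nodes, n.right, o, invD, lodEps, tR);
    tHit = std::min(tL, tR);
    return hL || hR;
}
```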





    The screenshot below shows only the bounding boxes of the leaf nodes:


    Normals are computed according to 

    TODO link to github code, propose fixes to fill holes, present benchmark results (8x speedup), get more timtams 



    In the last two months, Nvidia roped in several high profile, world class ray tracing experts (most of them with a CPU ray tracing background):

    Matt Pharr

    One of the authors of the Physically Based Rendering books (www.pbrt.org; some say it's the bible of Monte Carlo ray tracing). Before joining Nvidia, he worked at Google with Paul Debevec on Daydream VR, light fields and Seurat (https://www.blog.google/products/google-ar-vr/experimenting-light-fields/), none of which took off in a big way for some reason.

    Before Google, he worked at Intel on Larrabee (Intel's failed attempt at making a GPGPU for real-time ray tracing and rasterisation that could compete with Nvidia's GPUs) and on ISPC, a specialised compiler intended to extract maximum parallelism from the new Intel chips with AVX extensions. He described his time at Intel in great detail on his blog: http://pharr.org/matt/blog/2018/04/30/ispc-all.html (sounds like an awful company to work for).

    Intel also bought Neoptica, Matt's startup, which was supposed to research new and interesting rendering techniques for hybrid CPU/GPU chip architectures like the PS3's Cell.


    Ingo Wald

    Pioneering researcher in the field of real-time ray tracing from the Saarbrücken computer graphics group in Germany, who later moved to Intel and the University of Utah to work on very high performance CPU based ray tracing frameworks such as Embree (used in Corona Render and Cycles) and Ospray.

    His PhD thesis from 2004, "Real-time ray tracing and interactive global illumination", describes a real-time GI renderer running on a cluster of commodity PCs, as well as hardware accelerated ray tracing (OpenRT) on a custom fixed function ray tracing chip (SaarCOR).

    Ingo contributed a lot to the development of high quality ray tracing acceleration structures (built with the surface area heuristic).


    Eric Haines

    Main author of the famous Real-Time Rendering blog, who worked until recently for Autodesk. He also used to maintain the Real-Time Raytracing Realm and the Ray Tracing News.


    What connects these people is that they all have a passion for real-time ray tracing running in their blood, so having them all united under one roof is bound to produce fireworks.

    With these recent hires and initiatives such as RTX (Nvidia's ray tracing API), it seems that Nvidia will be pushing real-time ray tracing into the mainstream very soon. I'm excited to finally see it all come together: I'm pretty sure that ray tracing will soon be everywhere, and that its quality and ease of use will displace rasterisation based technologies (it's also the reason why I started this blog exactly ten years ago).





    The Chaos Group blog features quite an interesting article about the speed increase that can be expected from Nvidia's recently announced RTX cards:

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

    Excerpt:
    "Specialized hardware for ray casting has been attempted in the past, but has been largely unsuccessful — partly because the shading and ray casting calculations are usually closely related and having them run on completely different hardware devices is not efficient. Having both processes running inside the same GPU is what makes the RTX architecture interesting. We expect that in the coming years the RTX series of GPUs will have a large impact on rendering and will firmly establish GPU ray tracing as a technique for producing computer generated images both for off-line and real-time rendering."

    The article features a new research project, called Lavina, which essentially does real-time ray tracing and path tracing (with reflections, refractions and one GI bounce). The video below gets seriously impressive towards the end:


    Chaos Group have always been a frontrunner in real-time photorealistic ray tracing research on GPUs, even as far back as Siggraph 2009 where they showed off the first version of V-Ray RT GPU rendering on CUDA (see http://raytracey.blogspot.com/2009/08/race-for-real-time-ray-tracing.html or https://www.youtube.com/watch?v=DJLCpS107jg). 

    I have to admit that I'm both stoked and a bit jealous when I see what Chaos Group has achieved with project Lavina, as it is exactly what I hoped Brigade would turn into one day (Brigade was an early real-time path tracing engine developed by Jacco Bikker in 2010, which I experimented with and blogged about quite extensively; see e.g. http://raytracey.blogspot.com/2012/09/real-time-path-tracing-racing-game.html).

    Then again, thanks to noob-friendly ray tracing APIs like Nvidia's RTX and OptiX, soon everyone's grandmother and their dog will be able to write a real-time path tracer, so all is well in the end.


    The Blue Brain Project is a Swiss research project, based in Geneva, which started in 2005 and aims to faithfully simulate a detailed digital version of the mouse brain (as close to biology as is possible with today's supercomputers).

    Visualising this simulated brain and its components is a massive challenge. Our goal is to build state-of-the-art visualisation tools to interactively explore extremely large and detailed scientific datasets (over 3 TB). The real-time visualisation is rendered remotely on a supercomputing cluster and can be interacted with on any client device (laptop, tablet or phone) via the web.

    To achieve interactive frame rates and high resolution, we are building our tools on top of the industry's highest performance ray tracing libraries (the Ospray library from Intel, which itself is based on Embree, and the OptiX framework for interactive GPU ray tracing from Nvidia). These libraries take advantage of the embarrassingly parallel nature of ray tracing and scale extremely efficiently across multiple cores, devices and nodes in a cluster.
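    "Embarrassingly parallel" simply means every pixel and tile can be rendered independently, with no communication between threads until the frame is assembled. A minimal C++ sketch of tile-based work distribution across hardware threads (renderTile is a hypothetical stand-in for the actual tracing kernels):

```cpp
#include <algorithm>
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical stand-in for the actual tracing kernels: renders the
// pixels in [x0,x1) x [y0,y1).
static void renderTile(int /*x0*/, int /*y0*/, int /*x1*/, int /*y1*/) {}

// Each tile is independent, so threads just grab the next tile index
// from an atomic counter; this is why ray tracing scales near-linearly
// with core count (and, with a work scheduler, across cluster nodes).
void renderFrame(int width, int height, int tile = 64) {
    const int tilesX = (width + tile - 1) / tile;
    const int tilesY = (height + tile - 1) / tile;
    const int total = tilesX * tilesY;
    std::atomic<int> next(0);

    auto worker = [&]() {
        for (int t = next++; t < total; t = next++) {
            const int x = (t % tilesX) * tile;
            const int y = (t / tilesX) * tile;
            renderTile(x, y, std::min(x + tile, width),
                             std::min(y + tile, height));
        }
    };

    std::vector<std::thread> pool;
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
}
```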

    We are currently looking for software engineers to help accelerate the development of these tools, both in the frontend and backend. Our offices are located at the Campus Biotech in the international district in Geneva, Switzerland.


    Frontend/fullstack web developer

    Your profile

    • 3+ years experience in full stack/frontend engineering
    • 3+ years designing, developing, and scaling modern web applications
    • 3+ years experience with JavaScript, HTML5, CSS3, and other modern web technologies

    Main duties and responsibilities

    Your responsibility will be to develop new features for the frontend of our web based interactive 3D viewer "Brayns" and maintain existing ones, and to drive the development of our new hub application where scientists can manage their data visualisations.


    Required skills and experience


    • TypeScript, JavaScript (ES6)
    • React JavaScript framework
    • REST, WebSockets and Remote Procedure Calls
    • RxJS, NodeJS
    • Deep understanding of asynchronous code and the observable pattern in JavaScript
    • Experience using the browser dev tools for debugging, profiling, performance evaluation, etc.
    • Understanding of both the object oriented and functional programming paradigms
    • Knowledge of code chunking strategies
    • Experience writing unit tests using Jest and component tests using Enzyme (or similar technologies)
    • Experience with source versioning systems (Git, GitHub, etc.)
    • Knowledge of common UI/UX design patterns and ability to implement/use them accordingly
    • Knowledge of the Material Design spec
    • Fluent English in speech and writing
    • Self-motivated and able to work independently
    • Team oriented

    Nice to have


    • Interest in science (in particular neuroscience)
    • Experience with ThreeJS, WebGL, WebAssembly
    • Basic understanding of C++, Python and Docker
    • UI graphics design skills

    Apply


    For more info, email samuel.lapere@epfl.ch



    C++ interactive graphics developer 

    Main duties and responsibilities

    Your responsibility will be to develop and research new features for "Brayns", our interactive ray tracer for scientific visualisation, and to maintain existing ones.


    Required skills and experience
    • 3+ years of experience in C++/Python software development, testing, release, compilation, debugging, and documentation
    • 2+ years of experience with computer graphics (OpenGL, CUDA)
    • Strong knowledge of object-oriented, parallel, and distributed programming
    • Deep understanding of ray tracing and physically based rendering
    • Experience in software quality control and testing
    • Experience using UNIX/Linux operating systems
    • Experience in Linux-based system administration
    • Experience with Continuous Integration systems such as Jenkins
    • Great team player
    • Fluent English in speech and writing

    Nice to have

    • Interest in science (in particular neurobiology)
    • Experience in software development on supercomputers and distributed systems.

    Apply


    For more info, email samuel.lapere@epfl.ch

