The Flame Learning Channel

Summary: The official learning channel for the Autodesk® Flame® software products, the most comprehensive VFX, real-time color grading, and editorial finishing post-production solutions. The Autodesk® Flame® Learning Channel provides tutorials of all levels to help you learn Autodesk® Flame® Products.

  • Artist: Autodesk
  • Copyright: © This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Permission is granted to translate these videos into other languages. Autodesk, Inc. some rights reserved.

Podcasts:

 Machine Learning - Part 6 - Human Face Extraction - Flame 2021 | File Type: video/x-m4v | Duration: 902

With the release of Flame 2021, the machine learning models have been enhanced yet again to give you more choice and flexibility when working on your productions. Previously, the machine learning models were able to recognise and isolate the human body and human head for various finishing tasks. With this new release of Flame, you can now segment various parts of the human face for any compositing, grading and beauty work. So you can isolate the skin, eyes, mouth, nose, ears and much more. This is all achieved using the familiar Semantic Keyer workflow, and Human Face Segmentation is available in the timeline as well as Batch and BatchFX. We’ll run through some examples to cover the features. But as a usual reminder with machine learning, this is not a perfect solution. As in all the previous videos, the machine learning models are only as good as their training, and depending on certain conditions they may or may not be successful. As long as you try these tools on multiple shots, you’ll eventually understand which situations may be successful.

 Depth of Field Blurring with Physical Defocus - Part 3 - Flame 2021 | File Type: video/x-m4v | Duration: 437

In parts 1 and 2 of the Physical Defocus series, you learnt how to create depth of field blurring in an Action 3D composite as well as with CG render passes for final look development. In both cases, the 3D depth was already provided, since all the examples originated in a virtual 3D environment and the camera depth was supplied in the form of a Z-depth map. In part 3, the situation is somewhat different when it comes to live action footage. Unless the shooting camera is able to capture the depth of the scene, normally no depth information is provided. So in order to use Physical Defocus with live action material in Flame, you can use machine learning to analyse the shot and produce the required Z-depth information.

 Depth of Field Blurring with Physical Defocus - Part 2 - Flame 2021 | File Type: video/x-m4v | Duration: 1038

In part 1 of the Physical Defocus series, you learnt how to create depth of field blurring in an Action 3D composite that consisted of 3D objects. So if you were building your compositions within Flame’s 3D environment, the controls are really easy and quick to use. In part 2 of this series, you’ll examine another scenario where you could have been provided with CG render passes from a 3D application and you need to add depth of field blurring. In this situation, we’ll discuss two potential workflows with Batch and Action.

 Depth of Field Blurring with Physical Defocus - Part 1 - Flame 2021 | File Type: video/x-m4v | Duration: 611

In this video, you’ll learn about a new look development tool which allows you to create depth of field blurring in a variety of compositing scenarios. This tool is known as Physical Blur and it is a continuation of the convolution shaders such as Convolve, Physical Bokeh and Physical Glare. So whether you’re working on a 3D composite in Action, performing post operations on CG render passes or even look development with Flame’s finishing tools, Physical Blur allows you to create realistic depth of field blurring with lots of flexibility. We’ll be covering some of the most common scenarios over the coming videos.

 Mastering in Dolby Vision™ - Part 1 - Flame 2021 | File Type: video/x-m4v | Duration: 874

In this series, we are going to cover how to master Dolby Vision™ deliverables, which have been introduced in the Flame 2021 products. There is support for Dolby Vision™ 2.9 and Dolby Vision™ 4.0, and this is available in the timeline as well as Batch, enabling you to have HDR workflows ranging from simple mastering tasks all the way up to complex conform workflows. Dolby Laboratories developed this HDR format to make the most of High Dynamic Range and wide colour gamut technology. So its main purpose is to provide the content producer with some control over how HDR masters will play back on displays that have less dynamic range or colour gamut. Dolby Vision™ has become one of the deliverable standards and it has been implemented by various content streaming providers such as Netflix, Amazon, etc. And in order to view the content, Dolby Vision™ has also been added across a range of displays, including the latest UHD viewing hardware. So in the Flame products, you can master Dolby Vision™ as well as edit the Dolby Vision metadata with a sophisticated, yet flexible, suite of tools. This video will assume that you have a general understanding of Dolby Vision™. If you need to familiarise yourself with Dolby Vision™ as well as their best practices and hardware display recommendations, please visit the Dolby website - http://www.dolby.com. Please refer to the Flame on-line documentation for a detailed explanation of the various supported Flame, Flare and Flame Assist configurations.

 Animating in Flame - P8 - Extrapolating & "Baking" the curve - Flame 2020.3 | File Type: video/x-m4v | Duration: 380

In part 8 of the animation series, you’ll learn how to manage your animation beyond the keyframes and the curve. So before and after your keyframes, you can control the extrapolation of your curve to determine whether keyframe values remain constant, or the animation loops, ping-pongs or continues in a linear fashion. This makes easy work of repetitive animation for graphics, etc. As an added bonus, you'll also learn to convert the extrapolation into editable keyframes to further tweak an animation. Also very handy in lots of situations.

 Animating in Flame - P7 - Inserting Keyframes into a curve - Flame 2020.3 | File Type: video/x-m4v | Duration: 347

In part 7 of the animation series, you’ll look at the various uses of the INSERT KEY function. This is a very handy but underused tool that enables you to insert keyframes into an animation curve, as well as insert extra timing into a curve where needed. This will hopefully become a lot clearer as you progress through the video. Lots of interesting possible uses.

 Animating in Flame - P6 - Copy & Paste Animation - Flame 2020.3 | File Type: video/x-m4v | Duration: 357

In part 6 of the animation series, you are going to cover the concepts of copying and pasting animation. The reason for making a video on this universal subject is that you can copy and paste curves, copy and paste keyframes, as well as copy and paste with an offset. Understanding these methods will help you achieve the expected results when moving animation between channels.

 3D Interoperability with 'Send To' - PART 3 - Flame 2020.3 | File Type: video/x-m4v | Duration: 690

In part 2 of the 3D Interoperability series, you went through the workflow of exchanging 3D data between Flame and Mudbox. This allows you to take 3D geometry in Flame and push it over to Mudbox for sculpting and painting. Once you are happy with your model, it gets sent back to Flame with updated 3D geometry and a new diffuse texture map that was processed in Mudbox. In part 3, you’ll learn to troubleshoot some mesh issues when sending 3D geometry to Mudbox. This is actually quite easy to manage, since Mudbox points out the issues as well as offers a solution to fix them. This is due to the technical complexity of sculpting on a 3D model, and there are just a couple of guidelines that need to be followed in order to sculpt successfully in Mudbox. Please check out the Mudbox documentation for detailed descriptions when troubleshooting mesh issues.

 3D Interoperability with 'Send To' - PART 2 - Flame 2020.3 | File Type: video/x-m4v | Duration: 857

In the first part of this series, you learnt the fundamental basics of how to instantly exchange 3D data between the Flame 2020.3 Update, Maya 2020 and Mudbox 2020. In part 2 of the 3D Interoperability series, you'll recap some of the topics from part 1 as well as start looking at the interoperability between Flame and Mudbox. Mudbox is a 3D sculpting and 3D painting application, compared to Maya which is a more general 3D application. So their toolsets are different and, because of that, their 3D dataset requirements are slightly different. In fact, Mudbox is really smart and it will tell you when something is wrong and how to correct it. We’ll discuss this in detail as you progress through the workflow.

To learn more about Mudbox, you can watch the following videos:

Dynamic Tessellation Autodesk Basic:
  • https://www.youtube.com/watch?v=lTubn6dTGMk

Dynamic Tessellation Impressive Usage:
  • https://www.youtube.com/watch?v=idF5C8mu4Ds
  • https://www.youtube.com/watch?v=IqyCYjz9AHk
  • https://www.youtube.com/watch?v=QXD8_7wS5Ew

Vector Displacement Map (Very Cool for Flame users):
  • https://www.youtube.com/watch?v=fJJQaQDRkG8
  • https://www.youtube.com/watch?v=yHWbOVYNoKw
  • https://www.youtube.com/watch?v=QuGDdhSqIhg

Modelling and Painting with Mudbox:
  • https://www.youtube.com/watch?v=EIP9L7N5JTE
  • https://www.youtube.com/watch?v=LrzbNaCFtEw
  • https://www.youtube.com/watch?v=VvcS3_De9yg

Retopology:
  • https://www.youtube.com/watch?v=ZPQfB333TBc
  • https://www.youtube.com/watch?v=eXSkVfsZZic

Meet the Experts:
  • https://www.youtube.com/watch?v=ElVbyLyD_ts
  • https://www.youtube.com/watch?v=fziRd30iECQ
  • https://www.youtube.com/watch?v=rrolEp1uQY0
  • https://www.youtube.com/watch?v=ylgPKhgWzzA
  • https://www.youtube.com/watch?v=yWldS1Jzq0U

All aspects of Mudbox (third party):
  • https://www.youtube.com/channel/UCPnvnhNXATxhKeNeKClD2yw/videos

 3D Interoperability with 'Send To' - PART 1 - Flame 2020.3 | File Type: video/x-m4v | Duration: 802

With the release of Flame 2020.3, Maya 2020 and Mudbox 2020, it is now possible to instantly exchange 3D scenes between Flame and Mudbox or Maya running on the same workstation (Mac or Linux). This is known as the 'Send To' workflow, which offers 3D data exchange without the need for manual importing and exporting. In part 1, you'll go through the fundamentals of setting up the workflow, as well as the potential for tons of creative and technical capabilities for any Flame artists working with 3D assets. This video will cover a basic example of interoperability between Flame and Maya. Please note that this is a single-system workflow and all the applications need to be on the same workstation. Please check the on-line documentation for the setup configuration. To learn more about Maya, please watch the Maya Learning Channel - http://www.youtube.com/mayahowtos

 Machine Learning - Part 5 - Human Body & Human Head Extraction - Flame 2020.2 | File Type: video/x-m4v | Duration: 778

With the Flame 2020.2 update, two new machine learning models have been introduced into the Flame products. So part 5 in the Machine Learning Series introduces Human Body Extraction and Human Head Extraction. Both machine learning models allow Flame to look at an image and identify parts of the human body. So Human Body Extraction attempts to identify the entire human structure whereas Human Head Extraction concentrates purely on the human head. In both cases, you can then use the extraction as part of a selective or composite. For example, you may need to roto someone in a shot or perhaps blur some faces out really quickly.

 The Image Toolset - Part 11 - 3D Selective - Using Primitives with 3D AOVs - Flame 2020.2 | File Type: video/x-m4v | Duration: 1119

In parts 5, 6 and 8 of the Image Toolset series, we looked at the 3D AOV capability, where you could produce a selective matte for your image based on supplied 3D information. So you could create isolation mattes based on the z-depth of the image, the normals of a 3D object, as well as the movement of an image based on motion vectors. In part 11 of the series, you’ll learn about a new tool to refine your 3D AOVs known as “Primitives”. This allows you to constrain the effect of a 3D AOV by placing virtual 3D objects in 3D space. If you are new to 3D Selectives, please watch parts 5, 6 and 8 of the Image Toolset series, which explain the basics and fundamentals of 3D AOVs.

 Comparing Images In Grading & VFX - Flame 2020.2 | File Type: video/x-m4v | Duration: 576

In this video, you’ll learn how to compare images when performing any grading or VFX work in the Flame products. This is quite an essential function if you’re comparing the progression of an effect, one look to another look or simply comparing the current frame to a still reference. This video will cover comparing in the Effects Environment and Batch. It will also go into detail regarding the new Grabbed References Library.
