Categories
Advanced & Experimental Advanced Nuke

Week 10: Term 3 Group Project Brief, & Homework Q&A

In this lecture, we were introduced to the term 3 group project and reviewed the devil comp homework.

Term 3 group project brief

The deadline for the term 3 group project is 29 June 2023 (an 8-week duration), and alongside this group project we will also have a personal project.

We will have to create a futuristic spaceship room based on any of these 3 themes: steampunk, cyberpunk, or lo-fi sci-fi. This project will simulate the work pipeline of a professional VFX studio.

We will be given live footage of a corridor/room and will have to comp it with CG elements to make it look futuristic. We will have to choose our groups, and each team member will pick a role. We will need to organise the project with our team members, taking care of:

  • Researching ideas and building up a moodboard.
  • Planning the assets that are going to be needed.
  • Planning each team member’s tasks weekly.

Every week, this project will be reviewed, and notes will be shared that need to be addressed before the next review. We will work with ftrack to see notes, the calendar, objectives, etc. Every time we present assets, we will need to present them in detail, showing the topology, texturing, and lighting. Also, Dom Maidlow (CG Generalist) will be teaching us how to track plates and how to import them into Nuke.

The final outcome required is a 10-second animation, and it will have to follow the requirements stated in the brief provided.

Devil comp homework

In this comp, we were asked to add textures and elements to live footage of a devil man. I researched skin textures showing scarring or open wounds and found a dragon-like texture that, graded and colour corrected, could look like what I had in mind. I also wanted to add a wound with surgical stitches over the closed eye, and a satanic tattoo on his forehead. Additionally, I added some fire to the horns, and for the environment, I found a video with smoke, fire, and sparks.

For the face texture, I used ‘Roto paint’ on part of the face to get rid of the hair on one side. I then added the dragon-like texture for a more interesting look. I also covered one of the eyes and added a stitches texture on top. Then I added a ‘Vector Distort’ so the alphas created with the textures are warped following the movement of the face.

For the forehead tattoo and the fire on the horns, I 3D tracked the man’s face and created 3 cards, taking as reference points the middle of the forehead and the top tips of both horns. I then projected the tattoo texture and the fire footage onto those cards and colour corrected them. Since the fire footage was flat and the man’s head was moving side to side, I had to adjust the rotation of the cards, adding keyframes where the head rotated.

Lastly, I added some sparks footage in the foreground and colour corrected the whole comp.

Final comp

References

Algol, M. W. F. This 666 Devil Satan Pentagram Black (online). Available at: https://www.nicepng.com/ourpic/u2q8t4q8e6o0e6q8_666-devil-satan-pentagram-black-freetoedit-michael-w/ [Accessed on 19 March 2023]

Ektoplazm. Over 50 Skin Textures Free Download (online). Available at: http://www.psd-dude.com/tutorials/resources/over-50-skin-textures-free-download.aspx [Accessed on 19 March 2023]

Ezstudio. Orange cloud smoke Fire sparks rising up Free Video (online). Available at: https://www.vecteezy.com/video/5160760-orange-cloud-smoke-fire-sparks-rising-up [Accessed on 19 March 2023]

Videezy. Fire Stock Video Footage (online). Available at: https://www.videezy.com/free-video/fire?page=5&from=mainsite&in_se=true [Accessed on 19 March 2023]


Week 9: Nuke Homework Q&A Session

We dedicated this lecture to asking all the questions we had about what we have seen this term and about our weekly homework and projects.

Hero shot green screen removal homework correction

In this comp, I had an issue getting the finer details of the girl’s hair when removing the green screen and comping her over the forest background. Also, the snowflakes I keyed to add to the foreground of the scene were barely visible. To recover the hair detail, I had to take an ‘IBK colour’ node and pick the darks and lights of G (the green channel) so it selects as much hair detail as possible. I can also use ‘Filter erode’ to remove noise and then add ‘patch black’ (around 20) to remove the black part. Then I can add an ‘IBK gizmo’ set to green, linking ‘fg’ to the green screen plate and ‘bg’ to the background plate (so it takes the background’s features). I can then tick ‘use bkg luminance’ in the ‘IBK gizmo’ node so it takes the background luminance, and tick ‘use bkg chroma’ so it takes the background colour too. Finally, I can ‘Merge (over)’ with ‘A’ linked to the ‘IBK gizmo’ and ‘B’ linked to the background. This takes all the hair detail from the green screen and blends it with the luminance of the new background.

Regarding the snow problem, I was taking the luminance with a ‘Keyer’ node from the original plate, and I had to connect it to the ‘Transform’ node instead so it takes the correct aspect ratio. Also, every time I ‘Premult’, I need to follow with a ‘Merge (over)’, so I changed to that node. I can also increase or decrease the effect with a ‘Multiply’ node.
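
The Premult/Merge (over) pairing follows from the over operator’s math. As a toy sketch (plain Python arithmetic with made-up pixel values, not Nuke’s actual implementation), a premultiplied element A with alpha a composited over a background B is A + B × (1 − a) per channel:

```python
# Toy sketch of the premultiplied 'over' operation: for a premultiplied
# element A with alpha a, Merge (over) computes A + B * (1 - a) per channel.
# Plain Python arithmetic for illustration; the pixel values are made up.

def premult(rgb, alpha):
    """Premult: multiply each colour channel by the alpha."""
    return [c * alpha for c in rgb]

def merge_over(a_premult, a_alpha, b_rgb):
    """Merge (over) with a premultiplied A input."""
    return [a + bg * (1.0 - a_alpha) for a, bg in zip(a_premult, b_rgb)]

snow = premult([1.0, 1.0, 1.0], 0.25)     # faint white snowflake, 25% alpha
background = [0.1, 0.2, 0.3]
out = merge_over(snow, 0.25, background)  # snowflake over the background
```

Merging an unpremultiplied element with ‘over’ would double-count colour in semi-transparent areas, which is why the ‘Premult’ always pairs with ‘Merge (over)’ here.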

Final green screen removal scene
Final green screen removal scene – alpha

Markers clean-up homework correction

I asked the professor about the distortion I was getting from the smart vector, and he confirmed that the problem was that the node was affecting the whole image. To correct it, I had to add a ‘Premult’ to the ‘Roto paint’ and add a ‘Framehold’ again (before the ‘ST map’), so the distortion only affects the alpha created with the ‘Roto’. I also need to improve the ‘Roto paint’ using the techniques for controlling light changes.

Final result

Garage comp homework correction

In this comp, I had an issue with the shadow being cast on the wall hole. To remove it, I needed to take the previous roto made for that wall and ‘Merge (stencil)’ it into the shadow part (between the ‘Blur’ and ‘Grade’ nodes). Before this ‘Merge (stencil)’ node, we add an ‘Invert’ node so the roto alpha only covers the hole instead of the wall. To correct some bits outside this previous roto that now show because of the inversion, we make a quick ‘Roto’ selecting the area we want to keep (the hole in this case), adjust its position across several frames, and ‘Merge (mask)’ it to the ‘Invert’ node (‘B’ connection). Lastly, we soften the edge of the roto by adding an ‘Edge blur’.

I also fixed the back objects, as they looked too dark and the smoke effect was not affecting them, which did not look realistic. I desaturated the colours by adjusting their ‘Grade’ nodes and then added a ‘Merge (over)’ from the smoke card block to these objects.

Final garage comp

Week 8: Markers Clean-up Techniques & Homework in Nuke, & Final Garage Homework Review

In this lecture, we learnt how to remove markers from a character’s face in a live footage scene, and how to add textures and corrections that follow the character’s movement.

Degrain/Regrain techniques

Before starting with marker removal on a live footage shot, it is important to degrain the footage so Nuke can read and detect the pixel information better when adding nodes for clean-up or tracking. If we do this, we will need to regrain the plate once we have finished all our changes, so all added elements share the same grain texture and the result looks like it was filmed in one shot with the same camera and lighting conditions.

  • Simple degrain. We can isolate the grain with a ‘Merge (minus)‘ between the original and denoised plates, and later restore it with a ‘Merge (plus)‘.
  • ‘F_ReGrain’ node. This is an alternative way to regrain, and it is only available in NukeX. It is more precise than a simple regrain, since it shows less of the patches added when cleaning up plates.
  • ‘DasGrain’ gizmo. This gizmo can be downloaded from Nukepedia, where there is also a tutorial on how to use it. We plug ‘DasGrain‘ into the original plate and the denoised plate. Then we plug a ‘Common key‘ gizmo into the ‘comp‘ and ‘mask‘ inputs of ‘DasGrain’. In the ‘DasGrain’ node settings, we can set ‘output‘ to the desired one (it has different outputs for QC). In the ‘replace‘ tab, we can select the area we want to scan (usually the darkest area), then click ‘activate‘ and then ‘analyse‘. This gizmo is becoming widely used across VFX companies due to its efficiency and reliability.
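
The simple degrain/regrain round trip in the first bullet can be sketched as per-pixel arithmetic (a toy single-channel sketch with made-up values, not Nuke code; Nuke’s Merge nodes apply the same math per channel):

```python
# Toy single-channel sketch of the simple degrain/regrain round trip:
# Merge (minus) isolates the grain, Merge (plus) restores it after the comp.
# The pixel values below are made up for illustration.

def merge_minus(a, b):
    """Subtract images per pixel (plate minus denoised isolates the grain)."""
    return [x - y for x, y in zip(a, b)]

def merge_plus(a, b):
    """Add images per pixel (put the extracted grain back over the comp)."""
    return [x + y for x, y in zip(a, b)]

plate    = [0.52, 0.47, 0.51]   # noisy original pixels
denoised = [0.50, 0.50, 0.50]   # denoised version used for the comp work
comp     = [0.60, 0.60, 0.60]   # finished comp, still grain-free

grain  = merge_minus(plate, denoised)   # isolated grain, +/- around zero
result = merge_plus(comp, grain)        # comp with the original grain back
```

Because every patch and CG element now carries the plate’s own grain, the composite reads as one continuous shot.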

Patch changing light techniques

When adding patches to clean up markers in our plate, we need to take care of light changes, as the patch could become too obvious:

  • First, we can try to correct the lighting manually by using an ‘Unpremult‘ node, then ‘Grade‘ by hand on the needed keyframes, and then ‘Premult‘ back. This technique is not recommended, as it is time consuming.
  • Divide/multiply technique. ‘Blur‘ the image (add a lot of blur), then clone the ‘Blur‘ node and add a ‘Merge (divide)‘ to divide one by the other. Lastly, ‘Merge (multiply)‘ with the background.
  • Image frequency separation technique. We use the ‘Slice Tool‘ gizmo to analyse a specific area of the plate (a face with markers, for example), and all frames too (a separate gizmo). Then we ‘Blur‘ to get the low frequencies of the image and use a ‘Merge (from)‘ node to get the high frequencies. With this, when cloning an area with ‘Roto paint‘ to clean markers, we paint only on the low or high frequencies so the light is not affected (only gamma). This technique is used so light changes do not affect the patched area. We could get the same result with a ‘Laplacian‘ node: we first link a ‘Merge (plus)‘ node to bring back the light and colours from the original plate, then ‘Rotopaint’ the part we want, followed by a ‘Blur‘ to add or remove the required amount of light. Alternatively, we could ‘Blur‘ and ‘Merge (divide)‘ to see and correct the different values, and then ‘Merge (multiply)‘ to merge back (as mentioned before).
  • Interaction patch technique. Add a patch with ‘Roto paint’ using ‘match move‘, then scan the original plate with ‘Transform‘, ‘Copy (alpha -> alpha)‘, and ‘Premult‘. Then ‘Merge (multiply)‘ with the plate, ‘Regrain‘, and ‘Merge (over)‘ with the main plate.
  • ‘Curve tool’ node. This is used to add/remove information from the plate (for example, to correct flickering of the image). We start by capturing the info we want: add a ‘Curve tool‘ node, select an area, set ‘curve type‘ to ‘max luma pixel‘, and click ‘go‘ so it starts to analyse the area. Then, in ‘max or min luma data‘, we click the icon at the end and then right click + ‘copy’ + ‘copy links‘. Then we go to the ‘Grade‘ node and ‘paste + paste absolute’ onto ‘lift‘ (shadows, or min luma data) and ‘gain‘ (luminance, or max luma data).
  • ‘Roto’ and ‘Transform’ technique. We start with a ‘Transform‘ node, followed by a ‘Roto‘ of the part we want and a ‘Track‘ of the roto. Then we ‘Blur‘ the roto as an alpha, ‘Premult‘, and ‘Merge (over)‘ with the main plate.
  • Clone patch technique. First we denoise the plate so we can ‘Track‘ the markers properly (one tracker per marker). Then we copy ‘translate x’ and ‘centre x’ to the ‘Rotopaint‘ node. We do the patch with the clone tool and add a ‘Roto‘ over the cloned area. Finally, we ‘Filter erode‘, ‘Blur‘, ‘Regrain‘, and ‘Merge (over)‘ onto the main plate.
  • ‘Premult’ and ‘Unpremult’ paint technique. First, ‘Denoise‘ the plate and ‘Track‘ the marker. Then copy a ‘Roto‘ over the marker, ‘Invert‘ the roto/mask (like a hole), and ‘Merge (mask)‘ to a ‘Shuffle‘. Then ‘Blur‘ slightly and link it as a mask to the ‘Edge blur‘ node that was previously linked to the ‘Merge (mask)’ node. Then we ‘Unpremult‘ and ‘Copy (alpha -> alpha)‘ from the ‘Blur‘ to the ‘Premult‘. Lastly, we ‘Regrain‘ (linked to the original plate), ‘Premult‘, and ‘Merge (over)‘ onto the main plate.
  • ‘Inpaint’ technique. It is nearly the same as the previous technique but, instead of inverting the roto and blurring it, this time we use the ‘Inpaint‘ node, which can be tweaked to make the patch blend in.
  • ‘UV map’ technique. In an ‘Expression‘ node, the R and G channels (X and Y coordinates) have identical values, and only the B value is 1, which has no effect on what ST/UV images do. With the ‘Expression‘ node, we can ‘Roto paint‘ specific details such as motion blur or the warp of an image, and then we connect an ‘ST map‘ node to the plate. We could also use a ‘Grid warp‘ node, but since it is a really heavy tool, it is recommended to avoid it if not needed.
  • Vectors technique. As usual, first we ‘Denoise‘ the plate, then use a ‘Smart vector‘ node. This node can work fine with the default settings; however, it is better to increase ‘detail‘ to achieve a better result and have fewer problems with image warp later on. We can then export this with a ‘Write‘ node, since smart vectors are really heavy and can slow down the preview. Separately, we remove the markers with ‘Roto paint‘, ‘Filter erode‘, and ‘Blur‘, and we also add a ‘Frame hold‘ node on the reference frame where we are doing the clean-up. Then we add a ‘Vector distort‘ node that tracks the movement of the markers (set ‘output‘ to ‘warped src‘ in this case) following the smart vector map created previously, and then we add a ‘Copy (motion -> motion)‘. Separately, we add a ‘Vector to motion‘ node to add motion blur to the movement of the markers, and then we link it to the ‘Copy’ node we added before. Then we add a ‘Vector blur‘ node (with ‘output‘ as ‘result‘), ‘Regrain‘, ‘Premult‘, and ‘Merge (over)‘ onto the main plate. We could also use an ‘ST map’ after the ‘Vector distort’, setting ‘output’ to ‘ST map’ instead; this is better than ‘warped src’, since the ‘ST map’ is lighter. Smart vectors can also be used to add texture.
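
The frequency separation idea above can be sketched on a 1D strip of pixel values (a toy illustration: a tiny box blur stands in for Nuke’s Blur node, and plain subtraction/addition stand in for Merge (from)/(plus)):

```python
# Toy 1D sketch of frequency separation. Low frequencies carry the
# lighting, high frequencies carry the detail; painting on one band
# leaves the other untouched, and Merge (plus) rebuilds the image.

def box_blur(pixels, radius=1):
    """Tiny box blur used to extract the low-frequency band."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def split_frequencies(pixels):
    """Low = blurred image; high = image minus low (Merge (from))."""
    low = box_blur(pixels)
    high = [p - l for p, l in zip(pixels, low)]
    return low, high

def recombine(low, high):
    """Merge (plus) puts the two bands back together losslessly."""
    return [l + h for l, h in zip(low, high)]

strip = [0.2, 0.8, 0.3, 0.9, 0.4]
low, high = split_frequencies(strip)
rebuilt = recombine(low, high)   # matches the original strip
```

Painting a patch onto `high` only would keep the original low-frequency lighting under it, which is exactly why the patch survives light changes.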

Homework – Face markers clean-up

This week’s homework was to remove the markers from a live footage shot of a girl moving her face. I first tried tracking the markers with a regular ‘Tracker’ node and then linking it to the patches made on each marker. This technique is quite straightforward but time consuming, since the ‘Tracker’ was also failing to track properly, so I had to move the tracker point manually to the correct spot in most of the frames. Also, some of the patches are visible when the girl looks to the sides.

I also tried a different technique, using a ‘Smart vector’ node this time. This technique is really quick if it works well; however, I am struggling with the distortion of the face when the girl moves her head.

I think I may be doing something wrong, as it is distorting the whole image and not just the patches added. I will have to ask Gonzalo in the next class (final result added to the Advanced Nuke – Week 9 post).

Final Garage Comp

Since I could not go to class in person this week as I was ill, I did not have the chance to ask the questions I had regarding the shadows in my garage comp. Therefore, I emailed Gonzalo a version of my comp with my question about the shadows being too harsh, and he sent me back a solution. It turns out I had to add another ‘Shuffle’ + ‘Blur’ and mask-link it to another ‘Grade’ node connected to the main plate, as shown below:

Garage Comp

However, I still have the issue of the shadow casting on the wall hole. I tried to add a ‘Merge (stencil)’ node using the previous wall roto I had; however, it was not working, as it was cropping the whole wall and not just the hole. I will ask the professor about this next week (final result added to the Advanced Nuke – Week 9 post).


Week 7: Despill Corrections Tips, Creating Gizmos in Nuke, & Garage Homework WIP

In this lesson, we learnt how to make despill corrections when removing a green/blue screen, saw how to create our own personalised gizmos in Nuke, and lastly asked the questions we had about our garage homework WIP.

Despill correction tips

  1. When keying to remove a green screen and then removing saturation, we can roto some parts of the shot and link this roto to an ‘Invert’ node so the despill does not affect that specific part.
  2. We can also correct edges with an ‘IBK Colour’ node set to the ‘blue’ colour only, then add a ‘Grade (alpha)’ so it only affects the alpha, then correct the edge with ‘Filter Erode’ and ‘Blur’, and lastly ‘Merge (screen)’. We could also add an ‘Edge Blur’ to soften sharp edges and a ‘Clamp’ to keep the merged alpha values in range.
  3. With the ‘Add mix’ node, we can merge alphas and set how much alpha we want to see.
  4. Additive key: after a ‘Merge (minus)’, we desaturate and grade, and then we ‘Merge (plus)’ with a ‘Constant’ node that uses the green colour as reference.
  5. Divide/mult key: we replace the spill with a ‘Merge (divide)’ of the chroma plate and the chroma reference plate, and then ‘Merge (multiply)’ with the background plate.
  6. When the green/blue screen has varying luminance along the shot, we correct it by taking a ‘Constant’ node with the colour of the darkest part of the green/blue screen connected to a ‘Merge (average)’ node, creating a ‘Constant’ of matching luminance. Then we ‘Merge (minus)’ with a ‘Keylight’ for the despill.
  7. We can add a ‘Light wrap’ node to add a glow of light around specific areas. In this case, we ‘Merge (plus)’ to the background.
  8. An inverted matte can be used to remove light from an outside edge. We just ‘Invert’ the matte, ‘Roto’ the required parts, and ‘Merge (mask)’ to the matte. We could also add a ‘Grade’ node mask-linked to this ‘Merge (mask)’ node to colour correct that specific edge.
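
As an illustration of what a despill does to pixel values, here is a toy sketch of one common green-limit formula (clamping green to the average of red and blue). This is not the exact math of Keylight or IBK, just the general idea:

```python
# Toy sketch of a simple green-limit despill (an illustration only, not
# the math of Nuke's Keylight or IBK): clamp the green channel to the
# average of red and blue, suppressing green spill while leaving neutral
# pixels untouched.

def despill_green(rgb):
    r, g, b = rgb
    limit = (r + b) / 2.0          # spill limit from the other channels
    return [r, min(g, limit), b]

spill_pixel   = [0.4, 0.9, 0.3]    # green spill on a hair edge
neutral_pixel = [0.5, 0.5, 0.5]    # grey: despill leaves it unchanged

despilled = despill_green(spill_pixel)   # green clamped down toward 0.35
```

The difference between the original and the despilled plate is itself a useful matte, which is what the additive-key trick in point 4 exploits.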

How to create gizmos

First, we select the nodes we want in the gizmo and group them (Ctrl + G). Then, in the group node options, we click the ‘edit’ button and drag and drop the features we want (the controllers). We can label these controllers by clicking on the little circle next to them. We then link each controller with the corresponding node control (hold Ctrl + drag and drop from the main node to the grouped node).

Green screen and despill homework

The homework for this week was to remove the green screen from a hero shot of a girl and comp her onto a snowing forest background, as well as to add some of the background snow to the foreground.

First, I used a ‘Keylight’ node set to detect just the green colour and a ‘Merge (minus)’ to the main plate to only see the greens of the shot. Then I added a ‘Roto’ around the girl’s eyes and linked it, inverted, as a mask to the saturation node to preserve the small amount of green in her eyes. Then I linked a ‘Merge (multiply)’ node from the background plate to the foreground plate to bring some of the background’s luminance onto the girl. This was also ‘Merge (plus)’-ed to the foreground to add that luminance to the scene.

Separately, in another block of nodes, I used an ‘IBK Colour’ node to key the green screen of the foreground. Then I desaturated it and added another ‘Grade’ with ‘Filter erode’ and ‘Blur’, ‘Merge (screen)’-ed to the previous ‘Grade’, so I get more detail and luminance from the girl’s hair. Then I ‘Copy’-ed this alpha to the main foreground alpha, and ‘Add mix’-ed these alphas to the background plate.

In the background plate, I used a luminance ‘Keyer (alpha)’ node to select only the colour of the falling snowflakes. Then I ‘Copy’-ed this alpha to the background plate and ‘Premult’-ed to create the alpha that would be added to the foreground with a ‘Merge (over)’ node.

Finally, I colour corrected and graded the overall result and rendered the alpha and the final comp.

Final hero shot
Alpha version

I am not totally sure about the amount of hair detail visible in this version, so I will ask the professor in the next class (corrected version added to the Advanced Nuke – Week 9 post).

Garage homework WIP

Lastly, I asked Gonzalo about my issue with the shadows not showing in my garage comp. He found that the ‘Grade’ node after the ‘Shuffle’ node that creates the shadow alpha had the ‘black clamp’ option ticked, so I had to deselect it and select ‘white clamp’ instead so the blacks of the shadows started to show. However, although the shadows finally showed, I felt they were too harsh and saturated, and I could not figure out how to soften them. I tried grading and desaturating them, but they still looked too black and unnatural to me. Also, the hole in the wall is receiving the shadow from the chain hanging on the wall, and it looks like there is a plane receiving this shadow. This issue is due to the card added on that wall to receive the cast shadow, so I tried adding a ‘Merge (stencil)’ from the roto I have of that wall, but it did not work for some reason. I will have to ask Gonzalo in the next class.


Week 6: HSV Correction Process & Chroma Key in Nuke, & Garage Homework WIP

In this lecture, we learnt how to correct Hue, Saturation, and Value (Luminance), and how to use chroma key in Nuke. We also reviewed our garage comp WIP.

The HSV breakdown means the following:

  • H – Hue. Applies to colour (R, red value)
  • S – Saturation. Applies to the intensity of the hue/colour (G, green value)
  • V – Value. Applies to luminance or brightness (B, blue value)
HSV illustration (Cini, 2023)

This is important to understand, as the quality of an HSV correction in a comp depends on our understanding of these elements.
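
The breakdown can be checked numerically with Python’s stdlib colorsys module, which converts RGB into exactly these three components:

```python
# Quick numeric check of the HSV breakdown using Python's stdlib colorsys.
import colorsys

# A pure, bright red: hue 0, fully saturated, full value (brightness).
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# Mixing red toward grey lowers only S (saturation); V (value) stays at 1.
h2, s2, v2 = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)
```

This mirrors the HSV check described later with the ‘Colour space’ node: after conversion, hue lands in the red channel, saturation in green, and value in blue.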

When making hue corrections, we can use the ‘Hue correct‘ node to mute (colour channel at ‘0’), suppress (suppressed colour channel at ‘0’), or desaturate (saturation channel at ‘0’) a specific colour. This is useful, for example, to remove a green screen with the green colour channel suppressed.

With the ‘Keyer‘ node set to the ‘luminance key‘ operation, we can determine the amount of luminance we want to remove from the alpha of an image (black parts are not affected, only white parts). We could also set the operation to ‘red keyer’, ‘blue keyer’, etc. Then we could ‘Shuffle’ to green only, for example, and the colour correction would only affect saturation. We can also use this node to add, subtract, and multiply elements:

  • We can remove a colour channel with a ‘Merge (minus)‘ node linked to the colour we want as background and the colour we want to remove.
  • With the ‘Add (Math)‘ node, we can add colour when it is linked to a ‘Merge (minus)’ node.
  • We could also use a ‘Roto‘ to add or remove colour in a specific area.

Texturing can also be done with a ‘Keyer (luminance)‘, using the alpha of the texture to adjust the luminance. This would then be blurred, graded, and merged (minus). Moreover, we could also use the keyer to add noise or denoise certain areas.
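
A luminance key can be sketched as a simple ramp on pixel luma (a toy illustration in the spirit of the ‘Keyer’ node’s ‘luminance key’ operation; the Rec. 709 luma weights are an assumption for the sketch, since Nuke exposes its own controls for this):

```python
# Toy sketch of a luminance key: pixels below 'low' get alpha 0, pixels
# above 'high' get alpha 1, with a linear ramp in between. The Rec. 709
# weights below are an assumption for illustration.

def luma(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights

def luminance_key(rgb, low, high):
    t = (luma(rgb) - low) / (high - low)
    return min(1.0, max(0.0, t))                  # clamp alpha to [0, 1]

black = luminance_key([0.0, 0.0, 0.0], 0.2, 0.8)  # blacks drop out
white = luminance_key([1.0, 1.0, 1.0], 0.2, 0.8)  # whites are kept
```

Narrowing the low/high range hardens the matte; widening it gives softer falloff on semi-bright detail such as snow or highlights.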

Some extra nodes and techniques can be used to create some effects that will give more credibility to our image:

  • ‘Volume rays‘ node. Used to create ray or motion effects.
  • ‘Edge detect‘ node. To select the edges of an image’s alpha and colour correct those specific edges.
  • ‘Ramp‘ node. To balance an image with a gradient (used with a ‘Merge (minus)’ node).
  • Adding a new channel in ‘Roto’. We create a new ‘output’ (name it, click ‘rgba’ and ‘ok’), so when adding different features like blur or grade, we can link that change in the node and it will only affect the new channel created. We could also use an ‘Add (channel)‘ node instead, select the channel as ‘Matte’, and choose that it will only affect a certain colour. We could also add a ‘Rotopaint’ to this and add shapes linked to different channels.

We can use keying nodes and techniques for chroma key such as:

  • ‘IBK‘ (Image Based Keyer). We can subtract or difference with this node. It is considered the best option for getting detail out of small areas like hair or severely motion-blurred edges:
    • ‘IBK colour‘ node. Rebuilds the background frame by frame, taking the blue or green colour.
    • ‘IBK gizmo‘ node. Can select a specific colour.
  • ‘Chroma key‘ node. First, we can unselect ‘use GPU if available’ if our computer starts lagging. This node works better with evenly lit screens and more saturated colours. We could use it for despill, but it is better not to, as what we want from it is to extract the alpha.
  • ‘Keylight‘ node. This is used for colour spill.
  • ‘Primatte‘ node. This is a 3D keyer that puts colours into a 3D colour space and creates a 3D geometric shape to select colours from it. We first select ‘Smart select background colour’ and pick a colour while holding ctrl+shift, then change to ‘Clean background noise’ and, holding ctrl+shift, pick the colour parts that are still showing in the alpha (and need to be removed). We could also click on ‘Auto compute’ to create an automatic alpha and then retouch areas back into the alpha with ‘Clean foreground noise’.
  • ‘Ultimatte‘ node. This is used for fine detail, and to pull shadows and transparency from the same image.
  • Green/blue despill technique. We create an alpha with a ‘Keylight’ node and ‘Merge (difference)’ to the plate. Then we desaturate the background with a ‘Colour correct’ node and ‘Merge (plus)’ with the ‘Keylight’ node. Then we ‘Shuffle’ and set ‘alpha’ to black. Additionally, we could reduce the light in the background (saturation and grade) with ‘IBK Gizmo/Colour’. Some companies have created their own despill gizmos with all the required presets.
  • ‘Edge Extend‘ node. This is used to extend edges so we can correct the darkened bits (smoother, less pixelated edges).

A standard chroma key process would have the following steps:

  1. Denoise plate
  2. White balance
  3. Alpha pipe
    1. Core matte
    2. Base matte
    3. Hair matte
    4. Gamma matte
    5. Edges alpha
  4. Despill pipe
    1. Edges despill (specific parts)
    2. Core despill (overall)
  5. QC
  6. Light wrap
  7. Regrain alpha and background
  8. ‘Addmix’ to link alpha and background
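
As a toy illustration of step 3 (the alpha pipe), a minimal green-difference matte can be written as 1 − clamp(g − max(r, b)). Production keyers such as IBK, Primatte, or Keylight are far more sophisticated, but the core idea is the same:

```python
# Toy sketch of a minimal green-difference matte (an illustration of the
# alpha-pipe idea only; real keyers are far more sophisticated). A pixel
# dominated by green goes transparent, everything else stays opaque.

def green_difference_alpha(rgb):
    r, g, b = rgb
    spill = g - max(r, b)                      # how green-dominant is it?
    return 1.0 - min(1.0, max(0.0, spill))     # clamp alpha into [0, 1]

screen_pixel = [0.1, 0.9, 0.1]   # green screen: goes mostly transparent
skin_pixel   = [0.8, 0.6, 0.5]   # foreground skin: stays fully opaque
```

In practice, several such mattes (core, base, hair, gamma) at different hardness settings are combined, which is why the alpha pipe is split into the sub-steps listed above.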

Green screen homework

The homework for this week is to improve the garage comp and to do a chroma key on a sequence provided by the professor, so we can put into practice all the techniques learnt in class.

Final green screen replacement
Alpha version

Garage comp WIP

Regarding my garage comp work in progress, the professor sent us a Nuke comp with examples of how to set up lighting and shadows with projections or geometry. I tried to follow the geometry example, as I only had 3D objects in my comp; however, I had problems with the shadows, as they were not showing at all in the final result. I could see them in the alpha created with the ‘Shuffle’ node, but since I could not see them in the final output, I guess something is wrong with the ‘Merge’ node or with the concatenation of the comp. I will ask Gonzalo about this in the next lecture. I also added a texture to the right wall so it looks like it was previously painted but the paint has degraded and is peeling off the wall. I rotoed the part of the texture I was interested in showing and then projected the texture onto a card in a 3D scene.

References

Cini, A (2023). Color Theory HSV or Hue, Saturation, Value Brightness Illustration Chart Vector (online). Available at: https://www.dreamstime.com/color-theory-hsv-hue-saturation-value-brightness-illustration-chart-color-theory-hsv-hue-saturation-value-brightness-image237125365 [Accessed 19 February 2023]


Week 5: 3D Compositing Process in Nuke & Garage Homework WIP

This week, we learnt the 3D compositing process: putting together plates and CG, doing clean-ups, regraining, and doing a beauty rebuild with a multipass comp.

The 3D compositing general process looks like the following:

  • Main plate clean-up and roto work. After finishing the clean-up, it is recommended to render the cleaned plate so we can use this pre-rendered version for the rest of the comp. This way, Nuke has fewer nodes to calculate each time, and previewing the work goes quicker.
  • CG compositing. In this part, we can move on with the beauty rebuild, adjusting the AOVs or passes with subtle grades and/or colour corrections:
    • Basic grade match. With a ‘grade’ node, we first white balance the CG we are going to integrate into the plate, measuring the ‘whitepoint’ (whitest part of the CG) and the ‘blackpoint’ (darkest part of the CG) while holding ‘ctrl+shift+alt’. Subsequently, we go to our background plate and measure ‘gain’ (for the whites) and ‘lift’ (for the darks) while holding ‘ctrl+shift’. This balances the whites and darks of both the plate and the CG and integrates them together.
    • Multipass comp. In this technique, we first need to ‘unpremult (all)’ our CG so we can start splitting the AOVs or passes. This split is made using a ‘shuffle’ node set to the desired pass we want to correct. Before editing the passes, we need to structure the nodes from the CG plate to a ‘copy’ node with all passes merged together, and double check that the CG plate looks exactly the same from the initial point (original plate) to the ‘copy’ node. Sometimes it may look different, as some of the passes could have been exported wrong. Once we have split our passes, we can ‘grade’ them individually: we ‘merge (plus)’ the light passes and ‘merge (multiply)’ the shadows. We can also select an ID to create a colour map with a ‘Keylight’ node; with it, we can select a specific area of the model we want to adjust, as its features are separated into different saturated colour mattes. This way, we could then re-texture a part of the model using an ‘ST map’ node connected to the texture source. We can then relight with the ‘position pass’ and ‘normal pass’, followed by a ‘grade’ of the master CG plate. We finish our beauty rebuild with a ‘copy (alpha-alpha)’ to copy the original alpha over the one created, and then we ‘premult’.
  • Motion blur. Motion blur adds realism and dynamism to the movement of the added CG, as in 3D everything looks sharp and in focus, which is not as realistic. We can add motion blur following two methods:
    • Method 1: adding a ‘vector blur (rgba)’ node, then link it to ‘camera’, and adjust ‘motion amount’ in the ‘vector blur’ node as desired.
    • Method 2: ‘remove (keep)’ node linked to ‘motion blur 3d’ nodes, and adjust this last one’s ‘motion amount’ as desired.
  • Chroma aberration and defocus. We can add an ‘aberration’ node to match the lens aberration of the original live footage camera, making the scene more credible. Also, with a ‘defocus’ node we can add depth to the scene, differentiating between sharp and out-of-focus areas (depth of field). After adjusting these, we need to add a ‘remove (keep)’ node connected to an ‘ST map’ node to put the original distortion back into the scene.
  • Regrain. We can also add grain to the scene with a ‘grain’ node. Then, with a ‘key mix (all)’ node linked to the previous changes and the ‘grain’, we can mix channels and add a mask to the previous changes made in the comp.
  • Effect card. We can add effects like smoke with a ‘card’ node. We need to connect a ‘shuffle (rgba to rgba with R to alpha)’ node to the ‘card’ and ‘grade’ it. Then we ‘copy (alpha to alpha)’ and ‘premult’ to create the alpha of the effect, and then we ‘defocus’. This is projected on a ‘card’ (connected to ‘scene’, ‘scanline render’, and ‘camera’). Finally, we add the ‘ST map’ to unfreeze the frame and ‘multiply’ to show the alpha created.
  • Lightwrap. We use this to add light to the edges, which can be adjusted with ‘diffuse’ and ‘intensity’. We then ‘merge (plus)’, as this is a light feature.
  • QC. Using the ‘merge (difference)’ node, we can assess the changes made and check whether there are any errors. The ‘colour space’ node with ‘output’ set to ‘HSV’ can be used to check the quality of the colours’ hue (R), saturation (G), and luminance (B).
  • Final colour correction.
  • Export. The preferred format to export our comp is EXR. Some companies will also want JPEG stills, ‘AppleProRes’, or even ‘Avid DNxHD’, but that depends on the pipeline of each company.
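The QC step above relies on simple per-pixel maths. The sketch below (hypothetical helper functions, not Nuke's actual implementation) shows the operations behind ‘merge (plus)’, ‘merge (multiply)’, and ‘merge (difference)’ for a single RGB pixel:

```python
def merge_plus(a, b):
    """Merge (plus): additive, used for combining light contributions."""
    return tuple(x + y for x, y in zip(a, b))

def merge_multiply(a, b):
    """Merge (multiply): used for shadow and AO passes."""
    return tuple(x * y for x, y in zip(a, b))

def merge_difference(a, b):
    """Merge (difference): |A - B|. A pure black result means A and B match."""
    return tuple(abs(x - y) for x, y in zip(a, b))

# QC example: compare a pixel before and after a comp change.
before = (0.40, 0.30, 0.20)
after = (0.40, 0.30, 0.20)
print(merge_difference(before, after))  # pure black means no unintended change
```

This is why ‘merge (difference)’ works as a check: any region that is not pure black marks a pixel the comp has altered.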

The homework for this week was to start putting together the elements that will form part of our garage comp, and also to include the machine provided by the professor, following all the steps we learnt today.

Following the reference pictures we got with the brief, I started to research 3D objects I could include, such as tools, tyres, a table, etc.

I also decided to re-watch this week’s lecture recording to make sure I followed the compositing process step by step. This way, I started to understand the functionality of each node and technique, and to become more confident about creating a whole comp by myself without having to look at references in other comps. The first thing I added was the machine in the back room. I did a beauty rebuild with the separation of the passes and added a smoke effect with a 3D card projection. I feel this part went really well, as I did not have any issues along the process and the final look is pretty realistic.

Garage comp WIP with machine

After my back machine was fully set, I continued adding the 3D geometry to the comp with its textures. One problem I had with the objects is that they were really heavy and really jumpy when following the movement of the scene, so they were hard to work with.

My work in progress comp looks like the following:

Garage comp WIP with 3D objects
Categories
Advanced & Experimental Advanced Nuke

Week 4: CG Compositing in Nuke

This week, we studied how to do a CG beauty rebuild, using channels or passes of our CG to see its layers to then adjust them separately, relight them, and put them back together.

To start with the CG beauty rebuild, we first need our CG layers (usually the CG has already been exported like this). We can see all these layers separated in the ‘layer contact sheet’, which contains a view of the passes in the EXR (e.g. diffuse, specular, reflection, etc.). The separation of the EXR into layers or passes (channels) is used for adjusting each pass separately to match the lighting and colour conditions of the background. In order to adjust each pass, we first need a ‘shuffle’ node set to the specific pass (input layer) we need, then a ‘merge (plus)’ (+) for the lights (diffuse, indirect, specular, and reflections) and a ‘merge (multiply)’ (*) for the shadows (AO or ambient occlusion, and shadow). Every pass must be graded separately, and then we can add a final ‘grade’ and/or ‘colour correct’ to the entire asset if needed.
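As a minimal sketch of the rebuild maths, assuming one float value per pass for simplicity (real passes are full RGB images, and the pass names are illustrative):

```python
# Illustrative per-pixel values for each pass.
diffuse, indirect, specular, reflection = 0.5, 0.1, 0.2, 0.1
ao, shadow = 0.9, 0.8

# Light passes are combined additively, like 'merge (plus)'...
lights = diffuse + indirect + specular + reflection

# ...and shadow passes multiplicatively, like 'merge (multiply)'.
beauty = lights * ao * shadow
print(beauty)
```

Grading any single pass before the sum or product (e.g. doubling `specular`) changes only that component of the final beauty, which is the whole point of the rebuild.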

There are several types of ‘render passes’ or ‘AOVs’ (Arbitrary Output Variable):

  1. Beauty Rebuild Passes:
    • Material AOVs. To adjust material attributes (shader).
    • Light Groups. To adjust individual lights of a scene.
  2. Data Passes:
    • Utilities. Combined with tools to get various effects (e.g. motion blur, defocus, etc.).
    • IDs. To create alphas or mattes for different areas of the render.

There are some elements that can be used to double check or improve our CG beauty rebuild quality:

  • Cryptomatte. To see the different parts of the scene as colours and extract mattes from them.
  • KeyID. To create a mask of the ID pass.
  • AO pass. It creates a fake contact shadow, produced by the proximity of geometry to other geometry or to the background.
  • Motion pass. It lets us see the blur of the motion clearly.

The process to extract a pass and edit it is the following:

  1. Unpremult (all)
  2. Link to ‘shuffle’ node (set with pass needed)
  3. ‘Grade’ and make adjustments needed
  4. Add back with ‘merge (plus)’ or ‘merge (multiply)’
  5. ‘Remove (keep)’ node
  6. ‘Premult’

Once we have made our colour correction and grading, we can relight the scene with the ‘position pass’, which is the 3D scene represented as colour values (red=X, green=Y, blue=Z). In order to have a reference of the 3D space, we can use a ‘position to points’ node with ‘surface point’ set to ‘position’ and ‘surface normal’ set to ‘normal’. We then adjust the point size as desired and we will see a 3D representation of the colour values. Once the representation is made, we can start to add lights with ‘point’ nodes linked to a ‘scene’ node to put them together. This scene is then connected to a ‘relight’ node, which puts the light, colour, material, and camera together (use alpha, and link ‘normal vector’ to ‘normal’ and ‘point positions’ to ‘point’). To merge over the original background, we then ‘shuffle’ and ‘merge’.
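The relight idea can be sketched in a few lines: the position pass gives each pixel a world-space XYZ, the normal pass gives its surface direction, and a Lambert (diffuse) term computes the new light's contribution. The values and the simple Lambert model here are illustrative assumptions, not Nuke's exact maths.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def relight(position, normal, light_pos, intensity):
    """Diffuse contribution of a point light at one pixel."""
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, position)))
    lambert = max(0.0, sum(n * d for n, d in zip(normalize(normal), to_light)))
    return intensity * lambert

pixel_position = (1.0, 0.0, 0.0)   # from the position pass (R=X, G=Y, B=Z)
pixel_normal = (0.0, 1.0, 0.0)     # from the normal pass
print(relight(pixel_position, pixel_normal, light_pos=(1.0, 2.0, 0.0), intensity=1.0))
```

Surfaces facing the new light get the full contribution; surfaces facing away get zero, which is why the normal pass is required alongside the position pass.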

As the homework of the week, we need to composite a 3D modelled car into a background of our choice:

Final car compositing

I feel this practice was simpler than last week’s homework; however, I still encountered some challenges that I would like to research and study, such as adding ‘fake’ lights to the car’s lights so they look turned on, and getting rid of a glow in a specific area, like the one on the right door of the car, where it does not really make sense.

Categories
Advanced & Experimental Advanced Nuke

Week 3: Types of 3D Projections in Nuke

In this lesson, we saw the different techniques that can be used for 3D projection, such as patch projection, coverage projection, or nested projection, and we also analysed how to add texture and lighting onto a 3D object, as well as the general problems we can encounter with this.

In 3D tracking, we need to try to avoid including the sky, as it would give us problems later on, in the same way that we avoid moving objects or reflections in roto.

When adding a ‘rotopaint’ to a card in 3D space, we first need to freeze the frame with a ‘frame hold’ node at the best position in the sequence for visibility and for tracking a specific point. Then we add the ‘rotopaint’ or the patch we need, and add another ‘frame hold’ to ‘unfreeze’ the frame. Then we premultiply it to create an alpha and use a ‘project 3D’ node to project it onto our card (the ‘project 3D’ node must be connected to the projection camera, which has another ‘frame hold’ node). Lastly, we connect our card to the ‘scanline render’ node, which will be merged with the main plate.

In order to add texture to a ‘card’ in 3D space, we use the same method as before, but this time we take the texture or picture we want to add, which we can ‘colour correct’ and ‘grade’ if needed. We then ‘roto’ the part we want from it, premultiply it, and with a ‘corner pin 2D’ we place it in the perspective we desire. Then we ‘transform’ it to the dimensions we want and ‘merge’ it to the main plate after adding a ‘frame hold’. Lastly, we need to ‘copy’ the roto and premultiply it so we can project the alpha onto our ‘card’.

If we want to roto something in the scene to change its features (colour correct, grade, etc.), we can do the same as we did with the ‘rotopaint’, but in this case we adjust the roto every 10 or 20 frames. We do not need to adjust the roto every frame, as it will follow the match move we did previously, so just a few adjustments should be sufficient.

When we have several 3D projections that we want to put together, we can use the ‘merge mat’ node; if we use a regular ‘merge’ node instead, the quality of the image can decrease and look different.

After seeing these 3D projection techniques, we were asked to practise them using footage of a street provided by the lecturer. For example, we could add something on a wall or the floor, change the texture of the windows, colour correct a specific element of the scene, etc. This is the result of my practice:

When 3D projecting on top of a 3D object or artefact, the types of projections we can use are:

  • Patch projection
  • Coverage projection
  • Nested projection (projection inside another projection)

We can find some issues when doing artefact projections that can be solved with the following techniques:

  • Stretching problem: the texture is stretched and not showing in the correct place. This issue can be fixed by adding a second camera projector on top.
  • Doubling problem: the texture is doubled. We can fix it by doing two separate projections.
  • Resolution problem: the texture looks pixelated. We can use a ‘sharpen’ node to solve it; however, a more efficient solution is to add a ‘reformat’ node with its ‘type’ set to ‘scale’, then link it to the ‘scanline render’, which is then connected to a second ‘reformat’ node with the resolution of the original plate.
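The ‘reformat (scale)’ fix works like supersampling: render at a higher resolution, then filter back down to the plate size, which averages away the pixelation. A minimal sketch of that downscale step, assuming a square greyscale image and a simple box filter:

```python
def downsample_2x(img):
    """Box-filter a 2N x 2N greyscale image down to N x N by averaging 2x2 blocks."""
    n = len(img) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(n)] for y in range(n)]

# A 4x4 render scaled back down to the 2x2 'plate' resolution.
hi_res = [[1.0, 1.0, 0.0, 0.0],
          [1.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 1.0, 1.0],
          [0.0, 0.0, 1.0, 1.0]]
print(downsample_2x(hi_res))
```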

Lastly, we also saw how to build a 3D model taking a 2D image as reference. Using the ‘model builder’ node, we can create and adjust cards following the perspective of the 2D image, and then ‘bake’ this geometry into the 3D space. We can add ‘point light’ nodes to set the illumination with different intensities, colours, and cast shadows. Another illumination node is the ‘direct light’, which is used as a fill light aimed at a specific point or direction.

Once we finished reviewing this week’s theory, we were also asked to make the roto of the hole in the scene of the Garage project and to remove the markers with patch projections. I made the roto pretty quickly and had no issues with it, but I struggled with the clean-up of two specific markers: for the two markers positioned by the hole in the wall, when I added the roto, the patch made with rotopaint was showing outside the roto boundaries (right on top of this roto), so it was showing the wrong patch.

After asking the professor for some help, he figured out that I had missed the lens distortion nodes at both the beginning and the end of the clean-up setup (to undistort the scene and then redistort it back).

Another issue I noticed is that the patches added on the floor marks were showing through the roto of the wall. I asked the professor again and found out that this part needs to be merged differently, as it is outside the roto. So I added a ‘merge (stencil)’ just to this part of the clean-up, then a ‘shuffle (alpha-alpha)’, and connected it to the roto’s ‘scanline render’ node. This creates a stencil of the patches taking the roto as reference, so they do not show through the wall.

Final clean-up + roto

I had a lot of trouble with this homework and spent a lot of time trying to figure out why it was not working, but I feel that this struggle was useful to familiarise myself a bit more with, and feel more confident about, the node system used in Nuke.

Categories
Advanced & Experimental Advanced Nuke

Week 2: 3D Clean-up and 3D Projections

In this class, we learnt how to use the 3D projection in Nuke to clean up scenes or add elements with textured cards, rotopaint, rotoscoping, and UVs.

In Nuke, we can use a ‘3D project’ node to project anything onto a 3D object through a camera. We can use this node with different techniques:

  • 3D Patch with a textured card. We can use a ‘text’ node, image, or texture projected onto a ‘card’ node, which is linked to the ‘scene’ and ‘premult’ nodes and merged with the main plate.
  • 3D Patch with a projection on mm geo. First, we need to find a reference frame and add a ‘Framehold’ node to freeze this frame. Then, we clone the area using a ‘Rotopaint’ node followed by ‘Roto’ and ‘Blur’ nodes, which are premultiplied. Then we add another ‘Framehold’ (so it shows across the whole timeline) or, alternatively, we can set ‘Lifetime’ to ‘all frames’ in the ‘Rotopaint’ node; however, the second ‘Framehold’ is recommended. Afterwards, we add the ‘Project3D’ node linked to a ‘Camera’ that acts as the projection camera, and we add another ‘Framehold’ node to this camera. Finally, we add a ‘card’ node where we are going to project the ‘Rotopaint’ work, and then link this ‘card’ to the ‘scene’ that will be merged with the main plate.
  • 3D Patch with a projected roto. This time, we start with a ‘Project3D’ node as input to the ‘card’ (linked to the camera projector with a ‘Framehold’, and connected to a ‘Scanline render’ node). Afterwards, we add and do the ‘roto’ on one or two frames only (and tick ‘replace’). Then, we add another ‘Project3D’ node as input to a second ‘card’ (it must be the same ‘card’ as the first one), which is linked to a second ‘Scanline render’. We can then add a ‘Grade’ node, connected from the main plate to the second ‘Scanline render’, to grade the roto we have previously created.
  • 3D Patch with a projected UV. The starting point is a ‘Project3D’ node (linked to the ‘camera’ and the last ‘Scanline render’) connected to a ‘card’. This ‘card’ is first input into the first ‘Scanline render’, which is at the same time connected to a ‘constant’ node with a 1:1 aspect (this fixes the frame for us). Then we can ‘Rotopaint’ the part we need to patch and ‘Premult’. We ‘Reformat’ to go back to our video’s original resolution. Then we project this onto a ‘card’ that is connected to the second ‘Scanline render’. We ‘Reformat’ the second ‘Scanline render’ again and merge it with the main plate.

To review our final shot after adding these 3D patches, we use a ‘Merge’ node connected to the final output and the main plate, set to ‘difference’.

In order to see the point cloud generated by the 3D camera tracker in the 3D space, we can use the ‘Point cloud generator‘ node. We just need to connect it to a ‘Camera’ and the main plate (source), then ‘analyse sequence’ in the ‘Point cloud generator’ node, and link it to a ‘Poisson mesh‘ node. Alternatively, in the ‘Point cloud generator’ node, we can select all the vertices of the cloud in the 3D space, create a group, and select the ‘Bake selected groups to mesh’ option. We can also use the ‘Model builder’ node to create a model taking our point cloud as reference. To do this, we connect the ‘Model builder’ to a ‘Camera’ and the main plate or source, then we enter the node and create a ‘Card’ from there. We can place it and drag its corners wherever we wish, and then readjust it over other frames (only 1 or 2 frames of adjustment are usually needed).

This week’s homework consisted of practising all the techniques we saw today: 3D tracking a provided plate, placing the floor and back wall grids, adding cones on the markers, and placing two 3D geometries (all these elements need to be match-moved with the scene’s camera movement).

The following images and videos show the process I followed and the final outcome of my practice.

Final 3D projections practice
Final 3D tracking and matchmove practice

This 3D tracking has been a bit hard to put together, and it was difficult to understand what I was doing and why, as I needed to think in both the 2D and the 3D space. Once I had the nodes figured out, the rest could be set up really easily. I guess practice and experience are the key to getting the hang of this.

Categories
Advanced & Experimental Advanced Nuke

Week 1: 3D Tracking in Nuke

In this first class, we started to dig into the 3D space in Nuke for the first time. We learnt how to correct the camera lens distortion of a scene and how to use 3D tracking to add geometry or texture to a scene.

In order to change the distortion of an image depending on the type of lens effect desired, we can use a ‘Lens distortion‘ node. One option is the automatic one, where the programme analyses the scene, detects its horizontal and vertical lines, and corrects the distortion accordingly. Alternatively, we can set the horizontal and vertical lines of the scene manually, and then ask the programme to solve the scene distortion following those lines. Another way to change the distortion of a scene is to use an ‘STMap‘ node instead. This node is based on a two-colour (UV) map of the scene, created by adding a ‘shuffle’ node set to shuffle the forward distortion into the red and green channels. After we shuffle, we can add the ‘STMap’ node, set its UV channels to the ‘RGB’/‘RGBA’ channels, and so apply the distortion to the scene. We can also remove the distortion using the same ‘shuffle’ node, but set to shuffle the backward distortion instead.
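The idea behind an ST map can be sketched in pure Python: each pixel of the map stores, in its red and green channels, the normalised (u, v) coordinate of the source pixel to sample. This is a toy nearest-neighbour version, not Nuke's filtered implementation:

```python
def apply_stmap(source, stmap):
    """source: 2D list of pixel values; stmap: 2D list of (u, v) pairs in 0..1."""
    h, w = len(source), len(source[0])
    out = []
    for row in stmap:
        out_row = []
        for u, v in row:
            x = min(w - 1, int(u * w))   # nearest-neighbour sample
            y = min(h - 1, int(v * h))
            out_row.append(source[y][x])
        out.append(out_row)
    return out

source = [[1, 2], [3, 4]]
# An identity map returns the source unchanged; warping the UVs distorts it.
identity = [[(0.0, 0.0), (0.5, 0.0)], [(0.0, 0.5), (0.5, 0.5)]]
print(apply_stmap(source, identity))
```

Baking a lens's forward distortion into such a map, and its inverse into another, is what lets us undistort a plate, work on it, and redistort it back.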

After this, we saw how to create geometry in the 3D space, such as spheres, cubes, cards, etc. In order to import or export geometry, we can use the ‘ReadGeo’ (import) and ‘WriteGeo’ (export) nodes. We can also transform this geometry using the ‘TransformGeo’ node, or change the texture/surface features, like specular or transparency, with the ‘Basic Material’ node. Once the geometry is set, we can also add illumination to the scene with a ‘Light’ node, adjusting the intensity, direct or indirect light, and colour of the light. The ‘Sharpen’ node can also be used to improve the image details so Nuke can read them better (for tracking purposes).

Since all these settings make our project heavier and longer to render, we can ‘Precomp’ a part of our script that is already finished, so Nuke does not have to recalculate all those features every time we render.

Following on, we also studied the way to jump from a 2D scene to a 3D space using the ‘Scanline Render‘ node. Pressing ‘tab’ on the keyboard, we can jump between 2D and 3D in Nuke. We can also add a ‘Camera‘ node to decide the camera movement and the framing of the scene we want.

Lastly, we saw how to 3D track a live action shot so we can add objects or texture in the 3D space:

  1. Using a ‘Camera Tracker‘ node, we set up the type of camera lens used to film the shot and fill in the rest of the features of the scene (such as range, camera motion, lens distortion, focal length, etc.). We could also leave this information out, so the programme just tracks automatically.
  2. Once everything is set, we track our scene so the programme detects and creates several tracking points along the scene (we can choose how many tracking points we want the programme to create).
  3. Once the programme has finished creating the tracking marks, we can see the solve error of the track. If it is over 1, it is recommended to redo the tracking, as it will give problems later on. If this number is below 1, we can then delete the unsolved or rejected tracking marks.
  4. Next, we proceed to select a specific point in the centre of the scene and we set it as origin point of the shot.
  5. Then we select the track marks that form the ground of the scene and tell the programme that this is our ground plane.
  6. After our scene is tracked and properly set, we can export this ‘scene map‘, keeping the output linked to our 3D tracker node so every change we make is reflected in the created scene map. We could also export the ‘camera‘ only, but with the output unlinked, so the changes we make in the 3D tracker node are not reflected in this ‘camera’ export.
  7. Finally, we can now add geometry, cards, etc., to our scene and place it, following the ‘camera cloud‘ created in the scene exported. These elements added to the scene will now follow the camera movement and 3D space of the scene.
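The reason the added geometry "sticks" to the plate is that each 3D element is re-projected through the solved camera on every frame. A minimal pinhole-camera sketch (assumed convention: the point is given in camera space, with the camera at the origin looking down -Z; this is not Nuke's exact model):

```python
def project(point, focal):
    """Project a camera-space 3D point to 2D using a simple pinhole model."""
    x, y, z = point
    if z >= 0:
        raise ValueError("point must be in front of the camera (z < 0)")
    return (focal * x / -z, focal * y / -z)

# A cone placed on a tracked marker, 10 units in front of the camera.
cone = (1.0, 0.5, -10.0)
print(project(cone, focal=50.0))

# As the solved camera moves closer (the point is now 8 units away in
# camera space), the cone's 2D position updates automatically each frame:
print(project((1.0, 0.5, -8.0), focal=50.0))
```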

As our assignment of the week, we were asked to play around with what we learnt today and to try to add geometry and card planes to the scene shot provided, using the ‘camera tracker’ node.

3D tracked scene with planes and geometry included

I was a bit intimidated by 3D spaces and Nuke’s node system; however, in the end I found it quite straightforward and easy to set up and control.