Categories
Advanced & Experimental Advanced Nuke

Week 4: CG Compositing in Nuke

This week, we studied how to do a CG beauty rebuild: using the channels or passes of our CG to separate its layers, adjust them individually, relight them, and put them back together.

To start the CG beauty rebuild, we first need our CG layers (usually the CG has already been exported this way). We can see all these layers separated in the ‘layer contact sheet’, which shows the passes contained in the EXR (e.g. diffuse, specular, reflection, etc.). The separation of the EXR into layers or passes (channels) lets us adjust each pass separately to match the lighting and colour conditions of the background. To adjust a pass, we first need a ‘shuffle’ node set to the specific pass (input layer), then a ‘merge (plus)’ (+) for the light passes (diffuse, indirect, specular, and reflections) and a ‘merge (multiply)’ (*) for the shadow passes (AO or ambient occlusion, and shadow). Every pass must be graded separately, and we can then add a final ‘grade’ and/or ‘colour correct’ to the entire asset if needed.
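The arithmetic behind this rebuild can be sketched in plain Python on a single per-pixel value per pass (the pass values below are hypothetical, not from a real render; real comps do this on full image channels):

```python
# Sketch of a beauty rebuild on one per-pixel value per pass.

def rebuild_beauty(diffuse, indirect, specular, reflection, ao, shadow):
    """Sum the light passes, then multiply the occlusion passes on top."""
    lights = diffuse + indirect + specular + reflection   # merge (plus)
    return lights * ao * shadow                           # merge (multiply)

# Hypothetical pass values for a single pixel:
beauty = rebuild_beauty(diffuse=0.4, indirect=0.1, specular=0.2,
                        reflection=0.05, ao=0.9, shadow=0.8)
print(round(beauty, 4))  # 0.54
```

Grading any one pass before this combination changes only that component of the final beauty, which is the whole point of rebuilding it from passes.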

There are several types of ‘render passes’ or ‘AOVs’ (Arbitrary Output Variables):

  1. Beauty Rebuild Passes:
    • Material AOVs. To adjust material attributes (shader).
    • Light Groups. To adjust individual lights of a scene.
  2. Data Passes:
    • Utilities. Combined with tools to get various effects (e.g. motion blur, defocus, etc.).
    • IDs. To create alphas or mattes for different areas of the render.

There are some elements that can be used to double check or improve our CG beauty rebuild quality:

  • Cryptomatte. To isolate different parts of the scene as coloured mattes.
  • KeyID. To create a mask from the ID pass.
  • AO pass. Creates a fake contact shadow, produced by the proximity of geometry to other geometry or the background.
  • Motion pass. Lets us see the motion blur clearly.

The process to extract a pass, edit it, and add it back is the following:

  1. Unpremult (all)
  2. Link to ‘shuffle’ node (set with pass needed)
  3. ‘Grade’ and make adjustments needed
  4. Add back with ‘merge (plus)’ or ‘merge (multiply)’
  5. ‘Remove (keep)’ node
  6. ‘Premult’
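The reason for the unpremult/premult steps above can be sketched in plain Python on one hypothetical semi-transparent pixel: grading premultiplied values would bake the alpha into the correction, so we divide the alpha out first and multiply it back at the end:

```python
# Sketch of the unpremult -> grade -> premult round trip on one
# hypothetical semi-transparent pixel (premultiplied RGB, alpha 0.5).

def unpremult(rgb, a):
    return tuple(c / a for c in rgb) if a > 0 else rgb

def premult(rgb, a):
    return tuple(c * a for c in rgb)

def grade(rgb, gain=1.0, offset=0.0):
    # Very simplified stand-in for a Grade node's gain/offset knobs.
    return tuple(c * gain + offset for c in rgb)

rgb, a = (0.2, 0.3, 0.4), 0.5
graded = premult(grade(unpremult(rgb, a), gain=2.0), a)
print(tuple(round(c, 4) for c in graded))  # (0.4, 0.6, 0.8)
```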

Once our colour correction and grading are done, we can relight the scene with the ‘position pass’, which represents the 3D scene as colour values (red = X, green = Y, blue = Z). To get a reference of the 3D space, we can use a ‘position to points’ node with ‘surface point’ set to ‘position’ and ‘surface normal’ set to ‘normal’. We then adjust the point size as needed and see a 3D representation of the colour values. Once the representation is made, we can start adding lights with ‘point’ nodes linked to a ‘scene’ node that brings them together. This scene is then connected to a ‘relight’ node, which combines light, colour, material, and camera (use alpha, and link ‘normal vector’ to ‘normal’ and ‘point positions’ to ‘point’). To merge over the original background, we then ‘shuffle’ and ‘merge’.
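A minimal sketch, in plain Python, of the kind of shading a relight setup computes from the position and normal passes: a simple Lambert point light evaluated on one hypothetical pixel (a toy model, not Nuke's actual implementation):

```python
import math

# Toy Lambert shading from a position pass (RGB = XYZ) and a normal pass.

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def lambert(position, normal, light_pos, intensity=1.0):
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, position)))
    n_dot_l = sum(n * l for n, l in zip(normalize(normal), to_light))
    return intensity * max(0.0, n_dot_l)  # surfaces facing away stay dark

# Hypothetical pixel: surface at the origin facing +Y, light straight above.
print(lambert(position=(0, 0, 0), normal=(0, 1, 0), light_pos=(0, 5, 0)))  # 1.0
```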

As this week’s homework, we needed to composite a 3D modelled car into a background of our choice:

Final car compositing

I feel like this practice was simpler than last week’s homework; however, I still encountered some challenges that I would like to research and study, such as adding ‘fake’ lights to the car’s lights so they look turned on, and getting rid of a glow in a specific area, like the one on the right door of the car, which does not really make sense there.


Week 3: Types of 3D Projections in Nuke

In this lesson, we saw the different techniques that can be used for 3D projection, such as patch projection, coverage projection, or nested projection, and we also analysed how to add texture and lighting to a 3D object, as well as the general problems we can encounter with this.

In 3D tracking, we need to avoid including the sky, as it would give us problems later on, in the same way that we avoid moving objects or reflections in roto.

When adding a ‘rotopaint’ to a card in a 3D space, we need to first freeze the frame with a ‘frame hold’ node at the best position in the sequence for visibility and tracking a specific point. Then we add the ‘rotopaint’ or the patch we need, and add another ‘frame hold’ to ‘unfreeze’ the frame. Then we premultiply it to create an alpha and use a ‘project 3D’ node to project it in our card (the ‘project 3D’ node must be connected to the projection camera and another ‘frame hold’ node). Lastly, we connect our card to the ‘scanline render’ node which will be merged with the main plate.

In order to add texture to a ‘card’ in 3D space, we will use the same method as before, but this time we will take the texture or picture that we want to add which we can ‘colour correct’ and ‘grade’ if needed, to then ‘roto’ the part we want to add from it, premultiply it, and with ‘corner pin 2D’ we will place it in the perspective we desire. Then we will ‘transform’ it to the dimensions we want and ‘merge’ it to the main plate after adding a ‘frame hold’. Lastly, we need to ‘copy’ the roto and premultiply it so we can project the alpha to our ‘card’.

If we want to roto something in the scene to change its features (colour correct, grade, etc), we can do the same as we did with the ‘rotopaint’ but in this case we adjust the roto every 10 or 20 frames. We do not need to adjust the roto every frame as it will follow our match move previously done so just a few adjustments should be sufficient.

When we have several 3D projections that we want to put together, we can use the ‘MergeMat’ node; if we use a regular ‘merge’ node instead, the quality of the image can decrease and look different.

After seeing these 3D projection techniques, we were asked to practise them using footage of a street provided by the lecturer. For example, we could add something to the wall or floor, change the windows’ texture, colour correct a specific element of the scene, etc. This is the result of my practice:

When 3D projecting on top of a 3D object or artefact, the types of projections we can use are:

  • Patch projection
  • Coverage projection
  • Nested projection (projection inside another projection)

We can find some issues when doing artefact projections that can be solved with the following techniques:

  • Stretching problem: the texture is stretched and does not show in the correct place. This can be fixed by adding a second camera projector on top.
  • Doubling problem: the texture is doubled. We can fix it by doing two separate projections.
  • Resolution problem: the texture looks pixelated. We can use a ‘sharpen’ node to solve it; however, a more efficient solution is adding a ‘reformat’ node with its ‘type’ set to ‘scale’, then linking it to the ‘scanline render’, which is then connected to a second ‘reformat’ node set to the resolution of the original plate.

Lastly, we also saw how to build a 3D model using a 2D image as reference. With the ‘model builder’ node, we can create and adjust cards following the perspective of the 2D image, and then ‘bake’ this geometry into the 3D space. We can add ‘point light’ nodes to set illumination with different intensities and colours, and to cast shadows. Another illumination node is the ‘direct light’, which is used as a fill light aimed at a specific point or direction.

Once we finished reviewing this week’s theory, we were also asked to roto the hole in the scene of the Garage project and to remove the markers with patch projections. I made the roto pretty quickly and had no issues with it, but I struggled with the clean-up of two specific markers: for the two markers positioned by the hole in the wall, when I added the roto, the patch made with rotopaint was showing outside the roto boundaries (right on top of this roto), so it was showing the wrong patch.

After asking the professor for some help, he figured out that I had missed the lens distortion nodes at both the beginning and the end of the clean-up setup (to undistort the scene and then redistort it back).

Another issue I noticed is that the patches added on the floor marks were showing through the roto of the wall. I asked the professor again and found out that this part needs to be merged differently, as it is outside the roto. So I added a ‘merge (stencil)’ just for this part of the clean-up, then a ‘shuffle (alpha-alpha)’ connected to the roto’s ‘scanline render’ node. This creates a stencil of the patches using the roto as reference, so they do not show through the wall.

Final clean-up + roto

I had a lot of trouble with this homework and spent a lot of time trying to figure out why it was not working, but I feel that this struggle was useful to become a bit more familiar and confident with the node system used in Nuke.


Week 2: 3D Clean-up and 3D Projections

In this class, we learnt how to use the 3D projection in Nuke to clean up scenes or add elements with textured cards, rotopaint, rotoscoping, and UVs.

In Nuke, we can use a ‘Project3D’ node to project anything onto a 3D object through a camera. We can use this node with different techniques:

  • 3D Patch with a textured card. We can use a ‘text’ node, or image, or texture projected on a ‘card’ node which would be linked to the ‘scene’ and ‘premult’ nodes, merged to the main plate.
  • 3D Patch with project on mm geo. First, we need to find a reference frame and add a ‘Framehold’ node to freeze it. Then we clone the area using a ‘Rotopaint’ node followed by ‘Roto’ and ‘Blur’ nodes, which are then premultiplied. Then we add another ‘Framehold’ (so it shows across the whole timeline) or, alternatively, set ‘Lifetime’ to ‘all frames’ in the ‘Rotopaint’ node; however, the second ‘Framehold’ is the recommended option. Afterwards, we add the ‘Project3D’ node linked to a ‘Camera’ that acts as the projection camera, with another ‘Framehold’ node on this camera. Finally, we add a ‘card’ node onto which we project the ‘Rotopaint’ work, and link this ‘card’ to the ‘scene’ that will be merged with the main plate.
  • 3D Patch with project roto. This time, we start with a ‘Project3D’ node as input to the ‘card’ (linked to the camera projector with a ‘Framehold’, connected to a ‘Scanline render’ node). Afterwards, we add and do the ‘roto’ in one or two frames only (and tick ‘replace’). Then we add another ‘Project3D’ node as input to a second ‘card’ (which must be the same ‘card’ as the first one) linked to a second ‘Scanline render’. Then we can add a ‘Grade’ node connected from the main plate to the second ‘Scanline render’ to grade the roto that we have previously created.
  • 3D Patch with project UV. The starting point is a ‘Project3D’ node (linked to the ‘camera’ and the last ‘Scanline render’) connected to a ‘card’. This ‘card’ is first input to the first ‘Scanline render’, which is at the same time connected to a ‘constant’ node with a 1:1 aspect (this fixes the frame for us). Then we can ‘Rotopaint’ the part we need to patch and ‘Premult’. We ‘Reformat’ again to go back to our video’s original resolution. Then we project this onto a ‘card’ connected to the second ‘Scanline render’. We ‘Reformat’ the second ‘Scanline render’ again and merge it with the main plate.

To review our final shot after adding these 3D patches, we use a ‘Merge’ node connected to the final output and the main plate, set to ‘difference’.
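What this ‘difference’ check shows can be sketched in plain Python: a per-channel absolute difference between the patched output and the original plate, so any untouched area comes out pure black (pixel values here are hypothetical):

```python
# Sketch of a 'Merge (difference)' check: abs(A - B) per channel.

def difference(a, b):
    return [abs(x - y) for x, y in zip(a, b)]

original = [0.2, 0.5, 0.7]   # hypothetical pixel from the main plate
patched  = [0.2, 0.6, 0.7]   # the same pixel after patching
print(difference(original, patched))  # only the edited channel is non-zero
```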

In order to see the point cloud generated by the 3D camera tracker in the 3D space, we can use the ‘Point cloud generator’ node. We just need to connect it to a ‘Camera’ and the main plate (source), then ‘analyse sequence’ in the ‘Point cloud generator’ node, and link it to a ‘Poisson mesh’ node. Alternatively, in the ‘Point cloud generator’ node, we could select all the vertices of the cloud in the 3D space, create a group, and select the ‘Bake selected groups to mesh’ option. Another option is the ‘Model builder’ node, which creates a model using our point cloud as reference. To do this, we connect the ‘Model builder’ to a ‘Camera’ and the main plate or source, then enter the node and create a ‘Card’ from there. We can place it and drag its corners wherever we wish, then readjust it through other frames (only one or two frames of adjustment are usually needed).

This week’s homework consisted of practising all the techniques we saw today: 3D tracking a provided plate, placing grids on the floor and back wall, adding cones on the markers, and placing two 3D geometries (all these elements need to be match-moved with the scene’s camera movement).

The following images and videos show the process I followed and the final outcome of my practice.

Final 3D projections practice
Final 3D tracking and matchmove practice

This 3D tracking was a bit hard to put together, and it was hard to understand what I was doing and why, as I needed to think in both the 2D and the 3D space. Once I had the nodes figured out, the rest was really easy to set up. I guess practice and experience are the key to getting the hang of this.


Week 1: 3D Tracking in Nuke

In this first class, we started to dig into the 3D space in Nuke for the first time. We learnt how to correct the camera lens distortion of a scene and how to use 3D tracking to add geometry or texture to it.

In order to change the distortion of an image depending on the type of lens effect desired, we can use a ‘Lens distortion’ node. One option is the automatic one, where the programme analyses the scene, detects its horizontal and vertical lines, and corrects the distortion accordingly. Alternatively, we can set the horizontals and verticals manually, then ask the programme to solve the distortion following the lines we have created. Another way to change the distortion of a scene is using an ‘STMap’ node instead. This node is based on a two-colour map of the scene, created by adding a ‘shuffle’ node set to shuffle ‘forward’ into red and green. After we shuffle, we can add the ‘STMap’ node and set its UV channels from ‘RGB’ to ‘RGBA’; this applies the distortion to the scene. We can also remove the distortion using the same ‘shuffle’ node set to shuffle backwards instead.
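The core idea of an STMap can be sketched in plain Python: every output pixel samples the source image at the (u, v) coordinates stored in the map’s red and green channels. This is a toy nearest-neighbour version on a tiny hypothetical grid; the real node filters its samples:

```python
# Toy STMap: each output pixel looks up the source at the (u, v)
# coordinates stored in the map (values in 0..1 across the frame).

def apply_stmap(src, stmap):
    h, w = len(src), len(src[0])
    out = []
    for row in stmap:
        out_row = []
        for u, v in row:
            # Nearest-neighbour sampling for simplicity.
            x = min(w - 1, int(u * (w - 1) + 0.5))
            y = min(h - 1, int(v * (h - 1) + 0.5))
            out_row.append(src[y][x])
        out.append(out_row)
    return out

src = [[1, 2], [3, 4]]  # tiny 2x2 'image'
identity = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 1.0), (1.0, 1.0)]]
print(apply_stmap(src, identity))  # [[1, 2], [3, 4]] -> unchanged
```

An identity map returns the source untouched; warping the (u, v) values is exactly how a lens distortion gets applied or removed.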

After this, we saw how to create geometry in a 3D space, such as spheres, cubes, cards, etc. To import or export geometry we can use the ‘ReadGeo’ (import) and ‘WriteGeo’ (export) nodes. We can also transform this geometry using the ‘TransformGeo’ node, or change texture/surface features like specular or transparency with the ‘BasicMaterial’ node. Once the geometry is set, we can also add illumination to the scene with a ‘Light’ node, adjusting its intensity, direct or indirect quality, and colour. The ‘Sharpen’ node can also be used to improve image detail so Nuke can read it better (for tracking purposes).

Since all these settings make our project heavier and longer to render, we can ‘Precomp’ a part of our script that is already finished, so Nuke does not have to recalculate all those features every time we render.

Following on, we also studied how to jump from a 2D scene to a 3D space using the ‘Scanline Render’ node. Pressing ‘tab’ on the keyboard switches between 2D and 3D views in Nuke. We can also add a ‘Camera’ node to decide the camera movement and the framing of the scene we want.

Lastly, we saw how to 3D track a live action shot so we can add objects or texture in the 3D space:

  1. Using a ‘Camera Tracker‘ node, we set up the type of camera lens used to film the shot and fill in the rest of the scene’s details (such as range, camera motion, lens distortion, focal length, etc.). We could also leave this information out, so the programme just tracks automatically.
  2. Once everything is set, we track our scene so the programme detects and creates several tracking points along the scene (we can choose how many tracking points we want the programme to create).
  3. Once the programme has finished creating the tracking marks, we can see the solve error of the track; if it is over 1, it is recommended to redo the tracking, as it will cause problems later on. If this number is below 1, we can then delete the unsolved or rejected tracking marks.
  4. Next, we proceed to select a specific point in the centre of the scene and we set it as origin point of the shot.
  5. Then we select the track marks that form the ground of the scene and tell the programme that this is our ground plane.
  6. After our scene is tracked and properly set, we can export this ‘scene map‘, keeping the output linked to our 3D tracker node so every change we make is reflected in the scene map created. We could also export the ‘camera‘ only, but with the output unlinked, so the changes we make in the 3D tracker node are not reflected in this ‘camera’ export.
  7. Finally, we can now add geometry, cards, etc., to our scene and place them following the ‘camera cloud‘ created in the exported scene. These elements will now follow the camera movement and 3D space of the scene.

As our assignment of the week, we were asked to play around with what we learnt today and to try to add geometry and cards planes to the scene shot provided, using the ‘camera tracker’ node.

3D tracked scene with planes and geometry included

I was a bit intimidated by 3D spaces and Nuke’s node system; however, in the end I found it quite straightforward and easy to set up and control.

Categories
Showreels

Term 1 Showreel

Categories
Nuke VFX Fundamentals

Week 10: Real Scenarios in Production and Balloon Festival Comp Review

In this lesson we analysed the different scenarios we can face in production as a VFX compositor and then we reviewed the final composition of our Balloon Festival project.

In production for film, the stages followed are:

  1. Temps/Postviz. The temps are a preview of how the movie is going to look in low quality, and the postviz is a preview of the movie at higher quality (there are even companies specialising in this).
  2. Trailers. These show the shots of the movie that are finished to a good level of quality.
  3. Finals. The final product of the film. It is usually exported to EXR, plus two different QuickTimes with specific settings ready for review.
  4. QC. The quality control of the final product is done by the VFX Supervisor, who decides which version is the best to send to the client.

There is specific project management software that improves organisation and communication within the team, such as Google Docs and Sheets, Ftrack, and Shotgun. They are useful for publishing final scenes that are ready for review, requesting tasks, arranging meetings, etc.

The production roles existing in a film are:

  • Line Producer. The person below the Producer, who checks in with the VFX supervisor, director, editor, internal producers, producers, and artists. They manage the client, the timing, and the budget.
  • VFX Producer. This person makes sure that the studio completes the project, that they comply with the deadline agreed with the client, and that it is completed within the budget set.

A way to share and review a project’s development is to set up VFX dailies. This is an important meeting to check that everyone is moving in the same direction and to receive feedback from the film director, the client, the producer, and/or the supervisor. It is usually written down and recorded, and what is agreed there cannot be changed later outside that meeting.

Once we have finished the scene we were assigned, we publish it so the lead or VFX supervisor can review it. A good habit to develop is to make sure that what we are publishing is final and does not have any errors that make the review difficult, as schedules and deadlines in film tend to be tight. Before publishing a scene, it is good practice to follow this tech check process:

  • Check notes for the shot
  • Compare new version with old one
  • Check editorial (shot that editor sent to take as a reference with the original)
  • Check if there is any retime in the shot
  • Check that our shot has the latest ‘Match move’
  • Write in the comments if we have any personal notes
  • If we have any alternatives for one shot, inform the line producer before adding this to our published scene.

Balloon Festival Comp

Once we analysed the different scenarios in VFX production, we proceeded to review the final compositing of the balloon festival project.

For this project I modelled my air balloon in Maya (as shown in Week 3: Maya Modelling Tools Overview Part 2 – Air Balloon and Week 4: UV Set Up & Texturing in Maya).

During the Maya lectures, we learnt how to model a 3D hot air balloon and animate it with a simple 360° spin. Then, using this and a mountain shot provided by the professor, we were asked to composite a short sequence for a ‘Balloon Festival’. There were no rules, just to put into practice all that we had learnt and to have fun with the compositing.

Since I really enjoy designs with an 80s neon style, where a dark background contrasts strongly with neon colours, I decided to focus on this theme. I started by colour correcting the scene, as I wanted a night ambience and the scene was shot in plain daylight.

I tried colour correcting the scene following a tip the professor taught us in class about separating the colour correction process into primary colours, secondary colours, and shadows. I rotoscoped some sections of the mountains and colour corrected them separately to create a bit more depth, trying to avoid a ‘flat’ look. Then I checked how it looked against a grey background and refined the roto.

I reformatted the sky video to fit the size of the main comp, and then retimed it because, being originally a time-lapse, it was playing way too fast. Then I linked it to one of the trackers previously created so it follows the movement of the main plate. I also colour corrected it slightly to make it a bit darker, and created a roto so it did not overlap with the mountains (I created an alpha of the roto with a ‘shuffle’ node and copied it into the sky’s node trail).

I did not like the look of the grass in the foreground, as it had a gold colour from the original lighting, so I decided to add some coloured fog in front to disguise this. I found a fog video online and added it to the comp. I also colour corrected it and made it purple so it matched the colour palette I wanted to achieve (black, blue/green, and purple).

Following on, I added my 3D air balloon model to the comp. I added four air balloons with different scales, positions, and movements, and colour corrected them, adding some purple and blue highlights and making them a bit darker. To make the comp more interesting, I also added magical, colourful trails to two of the air balloons, again in purple and blue tones.

Then I wanted to add the text ‘Balloon Festival’, as if this were the promotional video of an actual festival. I created a neon effect by adding a ‘Glow’ node so the middle of the type is white and the borders have a blue glow. I also used the ‘Neon 80’ font to make it look more realistic. Then I added a roto mask to the text to create the transition of the air balloon passing and the text appearing behind it.

Moreover, I added a frame of blue and purple animated neon lights with a foggy texture that I found online. As I did with the fog and the colour trails, I merged them with the main plate using the ‘screen’ option in the ‘merge’ node, so the black background is not visible and only the neon lights show.
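The ‘screen’ operation the merge node applies here can be sketched in plain Python: both inputs are inverted, multiplied, and inverted back, so black contributes nothing and bright values lift the plate without going past 1:

```python
# The 'screen' merge: 1 - (1 - a) * (1 - b).

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.0, 0.5))            # 0.5 -> black foreground leaves the plate alone
print(round(screen(0.8, 0.5), 4))  # 0.9 -> bright pixels brighten the result
```

This is why footage shot on a black background (fog, neon, particle elements) composites cleanly with ‘screen’ and no matte is needed.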

Since Nuke is not very good at working with sound, I exported the final sequence with the write node and imported it into After Effects to add the sound. I could also have done it with Premiere Pro, but I was having some problems with my version of the programme, so I used After Effects as a quicker solution. I found an 80s-style royalty-free track ironically called ‘Stranger Things’ (Music Unlimited, 2022), so I imported it into After Effects and just added a fade out at the end.

Final result

The final result has a fun and eye-catching look, and the 80s music sets an ambience suitable for the style. It has been a long and hard process for me, as I was struggling a bit with the order of the nodes and when to add certain nodes like ‘premult’, ‘shuffle’, and ‘copy’, and when to link nodes using the ‘roto’ mask link versus a regular link. At the end of the day, with practice everything started to make sense, and now I can say I feel comfortable with Nuke’s compositing process and structure.

References

Apisit Suwannaka, n.d. Drifting Smoke Motion Design on Black Background [online]. Available at https://www.vecteezy.com/video/2973097-drifting-smoke-motion-design-on-black-background [Accessed 19 November 2022]

Distill, 2016. Time Lapse Video Of Aurora Borealis [online]. Available at https://www.pexels.com/video/time-lapse-video-of-aurora-borealis-852435/ [Accessed 19 November 2022]

John Studio, n.d. Beautiful colorful particles or smoke abstract background [online]. Available at https://www.vecteezy.com/video/3052087-beautiful-colorful-particles-or-smoke-abstract-background [Accessed 19 November 2022]

Mim Boon, n.d. Neon frame background animation [online]. Available at https://www.vecteezy.com/video/12276978-neon-frame-background-animation [Accessed 19 November 2022]

Music Unlimited, 2022. Stranger Things [online]. Available at https://pixabay.com/music/synthwave-stranger-things-124008/ [Accessed 27 November 2022]


Week 9: Blur, Defocus, and 2D Clean-up in Nuke

In this session, we learnt how to use ‘Blur’ and ‘Defocus’ in a scene and how to do a 2D clean-up using ‘Roto Paint’, ‘Difference’, ‘Regrain’, and ‘Grain’ tools.

In order to add realism to a scene, it is a good technique to add some ‘Blur’ or ‘Defocus’ to it; however, depending on the desired effect, we use one or the other. We use ‘Defocus’ to emulate what happens with a real lens when out of focus; since this is a more realistic and natural effect than ‘Blur’, it is more commonly used for cinematic, clearly visible effects. On the other hand, ‘Blur’ is used when we need to soften a colour or something minimal that is going to be barely visible (more for correction purposes than as an effect).

‘Z Defocus’ and ‘Z Blur’ are used to defocus or blur specific areas of the plate, and can also take into consideration the depth of the shot when the alpha is converted to depth. With these nodes we can also defocus or blur following a shape such as a disc or blades, or following a roto we made. These nodes can be used together with the ‘Convolve’ node in order to defocus or blur with a roto shape in different forms.

Nuke is also used for cleaning up a scene. This can be done using the ‘Roto Paint’ node, with which we can paint, clone, blur, dodge, and burn specific areas of the shot. After this, we can add a ‘Difference’ node to subtract the alpha taken from the ‘Roto Paint’ area, followed by ‘Copy’ and ‘Premult’ nodes. We can also add a ‘Frame Hold’ node to freeze the reference frame where we are going to do the roto painting.

Once we have added the patch or correction to our shot, it is good practice to add a ‘Grain’ effect so the patch matches the grainy texture of the video and blends in. We can use the ‘Grain’ node, which is applied through the alpha so it affects just the alpha area rather than the whole plate, or the ‘ReGrain’ node, which affects the whole plate (so it cannot be applied multiple times without doubling the grain).

This week’s task was to practise what we learnt today by doing a clean-up of the school shot provided: removing some papers that are on the wall, adding some roto paint on the side of the lockers, adding something to the background door (in my case, some animated text), adding something in perspective on the left-side doors (I added a video of what looks like a circular magic portal), and adding something interesting on the floor (I added another magic portal).

Original Plate

To start, I wanted to remove some papers from the pin board on the right. To do so, I added a ‘Roto Paint’ node and used the ‘clone’ tool to paint on top of the papers using the texture of the board. Then, with a regular ‘Roto’ node, I created the alpha of the painted area, followed by a ‘Filter Erode’ to soften the edges and a ‘Premult’. All of this was done under a ‘Frame Hold’ node so it was easier to build up the roto. Then I tracked the area with four tracker points and created a ‘Transform (Match Move)’ tracker node to match the movement of the scene. Finally, I added the ‘Grain’ node to match the grain of the Roto Paint to the scene’s grain and merged it with the main comp.

Secondly, I added an animated ‘Roto Paint’ to the side of the lockers. I reused the existing tracker node that had been used to remove the poster in the same place where I wanted to add the new ‘Roto Paint’. I created a ‘Transform (Match Move)’ tracker node and attached it to the ‘Roto Paint’ node with the animation. To animate the painting, I played with the colour and opacity, adding keyframes to these features. Then, I linked this to the main comp.

Thirdly, I added some text on the back doors, tracking the area first and then adding a ‘Corner Pin 2D’, first baked to fix the frame and then another one to match-move the scene’s movement. I also animated the text by keyframing the colour section, and then merged it with the main comp.

For both magic portals I used the same technique as last week, with the ‘Planar Tracker’ and a ‘Corner Pin 2D (Relative)’ to fix the image to the selected area. I reformatted both clips and corrected their saturation and grades. Then I merged them with the main comp using the ‘screen’ option so the black background disappears and there is a transparency effect in the colours.

‘Merge (screen)’ node

Week 8: Planar Tracking in Nuke

In this lesson, we checked further nodes in Nuke and we learnt how to use a planar track to add a flat image to a sequence.

We reviewed nodes such as ‘Reformat’ (to change sequence format to match main plate), ‘Crop’ (to crop an image or a video as required), ‘Merge’ (we saw how to use it to fix the size of the bounding box of a sequence to the Alpha layer or the Background layer), and ‘Shuffle’ (to add or remove channels – R, G, B, Alpha, and Depth).

We also learnt how important concatenation is in a Nuke comp. Concatenation is Nuke’s ability to chain consecutive operations (such as transforms) into a single internal calculation; Nuke’s calculations need to follow a logic, and if this logic is broken, the final result will not work. Following on from this, we analysed several ways to organise the nodes in Nuke so they follow an order and, therefore, achieve the desired result without errors.

Finally, we also studied how to use the ‘Planar tracker’ to add a 2D image to a 3D space and make it follow the movement of the sequence. First we add the ‘Planar tracker’ node, select the area we want with tracking points, and track it like we would with a regular ‘Tracker’ node. Then we turn on and align the grid to the tracking points to create the desired perspective, and finally, we create a ‘CornerPin2D (absolute)’ to generate the tracker node that we are going to link to the image we want to add. We can track translation, scale, and rotation together or separately if desired. When there is an object in front of the area that we want to track, we can track the object separately with another ‘bezier’ in the same ‘Planar tracker’ node, so Nuke recognises that object as an exclusion area (and does not take it into consideration when tracking the area we want).
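Under the hood, a corner pin applies a projective transform (a 3×3 homography) per pixel; here is a minimal Python sketch of applying such a matrix to a point. The matrices below are placeholder examples, not values from a real track:

```python
# Applying a 3x3 projective transform (homography) to a 2D point,
# which is what a corner pin does for every pixel it warps.

def apply_homography(H, x, y):
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # divide out the projective coordinate

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity: points unchanged
T = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]  # simple translation
print(apply_homography(I, 10.0, 20.0))  # (10.0, 20.0)
print(apply_homography(T, 10.0, 20.0))  # (15.0, 18.0)
```

With a non-trivial bottom row, the division by w is what produces the perspective foreshortening of a tracked plane.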

As a homework, this week we were asked to add an image to the following sequence using what we learnt today in class.

First poster planar tracker showing bezier and grid lines adjustment

I added both posters using a ‘planar track’ node to track the plane where I wanted to place each poster. For the left poster I just tracked it, adjusted the grid lines to the perspective plane I wanted, and then created a ‘corner pin 2D (relative)’ that is linked to the poster. This node lets the added poster or image follow the movement of the shot that we have tracked.

For the second poster, it was necessary to add a second bezier tracking the pole that passes in front of the poster, so the programme understands that the area of the second bezier is excluded (it is not taken into consideration when tracking the first bezier’s area). The roto of the pole was already added to the comp by the professor, so I just had to ‘merge’ the second poster’s ‘corner pin 2D’ into the main comp. I also adjusted the ‘grade’ and ‘saturation’ of the posters, skewed them a little with a ‘transform’ node to fit the perspective exactly, and added some ‘blur’ to remove the sharp edges from the posters and blend them into the comp.

My Nuke comp with both posters added
Posters in the street added using the ‘Planar tracking’ technique

This practice seemed pretty easy to me compared with other assignments, as ‘Planar tracking’ is a straightforward tool. However, at the beginning I had a problem with the middle poster, which has the pole obstructing part of the view in front of it. The ‘Planar Tracker’ was not reading the area properly: the tracking points were jumping from the selected area to a completely different one and not keeping the perspective I wanted. I solved this by making the tracking area bigger, so the programme had more information to build the track across the frames. I also colour corrected the posters to blend them with the scene and make the result more realistic. Overall, I am very happy with the result.


Week 7: Match moving – point tracking in Nuke

In this lesson, we learnt how to stabilise a shot using 2D tracks.

With a ‘2D Track’ node, we can track the camera movement of a scene frame by frame to match it with another element. Then, with a ‘Transform’ node, we can change the translation, rotation, and scale of each frame to stabilise it. We can also generate several nodes from the main ‘2D Track’ node to automatically stabilise the scene, to match-move it, and to remove or add jitter.
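Stabilisation from a 2D track amounts to inverting the tracked motion: for each frame, the ‘Transform’ translate is the reference position minus the tracked position. A minimal sketch with hypothetical track data (translation only; rotation and scale work analogously):

```python
def stabilise_offsets(track, ref_frame=0):
    """Per-frame translate values that pin the tracked feature in place."""
    rx, ry = track[ref_frame]
    return [(rx - x, ry - y) for x, y in track]

# Hypothetical tracked positions of one feature across four frames:
track = [(100.0, 50.0), (102.0, 49.0), (105.0, 51.0), (103.0, 52.0)]
print(stabilise_offsets(track))
# -> [(0.0, 0.0), (-2.0, 1.0), (-5.0, -1.0), (-3.0, -2.0)]
```

Applying these offsets per frame holds the feature at its frame-0 position; applying them with the sign flipped is the match-move case, making a new element inherit the camera’s motion.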

Sometimes the scene has too much noise or grain and the 2D tracker is not able to track it properly. In this case, we can use a ‘Denoise’ node to reduce the image noise or grain, so the tracker does not struggle to read the pixels between frames. We can also use ‘Laplacian’, ‘Median’, or a contrast ‘Grade’ to tame the grain.
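A median filter helps here because it replaces each pixel with the median of its neighbourhood, killing the single-pixel grain outliers the tracker would otherwise latch onto. A toy sketch on a 1D row of pixel values (real nodes work on 2D neighbourhoods, per channel):

```python
from statistics import median

def median_filter_1d(row, radius=1):
    """Replace each value with the median of its neighbourhood (edges clamped)."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(median(row[lo:hi]))
    return out

# A single noisy spike (0.9) in otherwise flat pixels is removed entirely:
print(median_filter_1d([0.2, 0.2, 0.9, 0.2, 0.2]))
# -> [0.2, 0.2, 0.2, 0.2, 0.2]
```

Unlike a blur, the median preserves edges while discarding isolated outliers, which is exactly what a struggling tracker needs.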

As usual, it is important to set a Quality Control (QC) backdrop so we can check that the tracking or any rotoscoping added is properly done.

The assignment for this week is to stabilise the iPhone shot and to add the phone’s screen animation with ‘Rotoscoping’ and ‘Corner Pin’ nodes.

Full comp
Final result

iPhone comp improved: I tried to improve the fingers roto using the green despill setup that the professor sent us, and also improved the screen animation using the ‘curve editor’ to soften the start and end points of the movement.
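Softening the start and end of a keyframed move corresponds to replacing linear interpolation with an ease curve. A toy sketch using smoothstep (illustrative only; the curve editor’s smooth tangents are splines, not exactly this function):

```python
def smoothstep(t):
    """Ease-in/ease-out remap of linear time t in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

def animate(start, end, t):
    """Interpolate a value with eased timing instead of a linear ramp."""
    return start + (end - start) * smoothstep(t)

# The screen element accelerates gently instead of snapping into motion:
print(round(animate(0.0, 100.0, 0.1), 3))  # -> 2.8 (vs 10.0 with linear timing)
print(animate(0.0, 100.0, 0.5))            # -> 50.0
```

The derivative of smoothstep is zero at t = 0 and t = 1, which is what makes the movement start and stop softly rather than abruptly.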

Improved comp with green despill
Improved comp

I struggled a bit with the fingers rotoscoping, as when the fingers move faster it is hard to roto the motion blur. The green despill setup we got from the professor helped a bit, but I still do not fully understand how it works, so I am sure I could improve this comp once I learn how the green despill technique works.


Week 6: Merging and Colour Matching in Nuke

In this lecture, we learnt how to colour correct a sequence, about the different colour spaces a file can have, and how to import and export them.

We saw how to use ‘Grade’, ‘ColourCorrect’, ‘Toe’, and ‘Blackmatch’ nodes to correct the colour of a sequence. These nodes can be used to correct specific parts of a sequence using rotos, or to colour grade an alpha. Alphas need to be premultiplied to be added over the background plate; however, some alphas already come premultiplied, so in that case we add an ‘Unpremult’ node, then the ‘Grade’ and/or ‘ColourCorrect’ nodes, and then a ‘Premult’ node again.
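The unpremult → grade → premult round trip can be shown numerically. A minimal sketch assuming a simple gain-only grade (Nuke’s ‘Grade’ node has many more controls):

```python
def unpremult(rgb, a):
    """Divide colour by alpha to get straight (unassociated) colour."""
    return tuple(c / a for c in rgb) if a > 0 else rgb

def premult(rgb, a):
    """Multiply colour by alpha back to associated colour."""
    return tuple(c * a for c in rgb)

def grade_gain(rgb, gain):
    """A toy grade: multiply every channel by a gain."""
    return tuple(c * gain for c in rgb)

# A half-transparent edge pixel, graded without darkening its fringe:
rgb, alpha = (0.2, 0.1, 0.05), 0.5
straight = unpremult(rgb, alpha)    # (0.4, 0.2, 0.1)
graded = grade_gain(straight, 2.0)  # (0.8, 0.4, 0.2)
print(premult(graded, alpha))       # -> (0.4, 0.2, 0.1)
```

Grading the premultiplied values directly would scale the alpha-weighted fringe incorrectly; dividing the alpha out first means the grade only touches the true colour, and premultiplying afterwards restores clean edges.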

It is also important to take into consideration the codec, colour space, and linearisation of the imported file, as depending on what we are going to use the file for, we will need more information preserved in the file or a smaller file size. Files in a film production can be shared with compositors as LUTs, CDLs, or graded footage. We also discovered the ‘OCIOColorSpace’ node, which is used when the footage provided has already been graded.
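Linearisation just means undoing the file’s transfer curve so that merges and grades operate on scene-linear light. A hedged sketch of the standard sRGB decode, one common curve (the actual transform depends entirely on the footage’s colour space):

```python
def srgb_to_linear(v):
    """Decode a 0-1 sRGB-encoded value to linear light (IEC 61966-2-1 curve)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Mid-grey on screen is only about 21% linear light:
print(round(srgb_to_linear(0.5), 3))  # -> 0.214
```

This is why compositing maths done on display-encoded values looks wrong: operations like ‘Merge (plus)’ assume the pixel values are proportional to light, which is only true after linearisation.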

And lastly, we saw proper ways to build up a node tree for grading and colour correcting footage, separating the primary and secondary colour corrections, and then correcting the shadows as a last step. This way, if more amendments are requested, we can make the changes quicker.

The assignments for this week were to colour correct an airplane alpha to match its background, and to carry on making some colour corrections to the previous mountains video using the roto created last week.

We were also asked to plan our air balloon sequence, which we will keep building up until the end of term 1. My main idea for my air balloon video is to give it a dark style, with neons and glowing lights, and to add mist and lightning around the mountains.