Categories
Advanced & Experimental Advanced Nuke

Week 6: HSV Correction Process & Chroma Key in Nuke, & Garage Homework WIP

In this lecture, we learnt how to correct Hue, Saturation, and Value (Luminance), and how to use chroma key in Nuke. We also reviewed our garage comp WIP.

The HSV breakdown is the following:

  • H – Hue. The colour itself (stored in the red channel when converted to HSV)
  • S – Saturation. The intensity of the hue/colour (stored in the green channel)
  • V – Value. The luminance or brightness (stored in the blue channel)
HSV illustration (Cini, 2023)

This is important to understand, as the quality of an HSV correction in a comp depends on our understanding of these elements.
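As a quick sketch of the breakdown above, Python's standard colorsys module converts between RGB and HSV (values are 0-1 floats, as in Nuke):

```python
import colorsys

# A saturated green, as on a typical green screen (0-1 float RGB).
r, g, b = 0.2, 0.8, 0.3

h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Hue is a 0-1 fraction of the colour wheel (green sits around 1/3),
# saturation is the intensity of that hue, value is the brightness.
print(f"H={h:.3f}  S={s:.3f}  V={v:.3f}")
```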

When making hue corrections, we can use the 'HueCorrect' node to mute (colour channel to '0'), suppress (suppressed colour channel to '0'), or desaturate (saturation channel to '0') a specific colour. This is useful, for example, to remove a green screen by suppressing the green channel.

With the 'Keyer' node set to the 'luminance key' operation, we can determine how much of the image's luminance goes into the alpha (dark parts are excluded; only bright parts contribute). We could also set the operation to 'red keyer', 'blue keyer', etc. Then we could 'Shuffle' to green only, for example, so the colour correction only affects saturation. We can also use this node to add, subtract, and multiply elements:

  • We can remove a colour channel with a 'Merge (minus)' node linked to the colour we want as background and the colour we want to remove.
  • With an 'Add (Math)' node we can add colour back when linked to a 'Merge (minus)' node.
  • We could also use a 'Roto' to add or remove colour in a specific area.
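The merge operations mentioned here are simple per-pixel maths. A minimal sketch of what 'plus', 'minus', 'multiply', and 'difference' do to the A and B inputs (a simplified, single-value model of Nuke's Merge node):

```python
# Per-pixel maths behind common Merge node operations (A input over B input).
def merge(a: float, b: float, operation: str) -> float:
    ops = {
        "plus": a + b,             # add light (e.g. specular, reflections)
        "minus": a - b,            # subtract a channel or element
        "multiply": a * b,         # attenuate (e.g. shadows, AO)
        "difference": abs(a - b),  # QC: zero wherever the inputs match
    }
    return ops[operation]

# Subtracting a value and adding it back returns the original pixel.
pixel, removed = 0.8, 0.3
print(merge(merge(pixel, removed, "minus"), removed, "plus"))
```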

Texturing can also be done with a 'Keyer (luminance)', so we use the alpha of the texture to adjust the luminance. This would then be blurred, graded, and merged (minus). Moreover, we could also use the keyer to add noise or denoise certain areas.

Some extra nodes and techniques can be used to create effects that will give more credibility to our image:

  • 'VolumeRays' node. Used to create ray or motion effects.
  • 'EdgeDetect' node. To select the edges of an image's alpha and colour correct those specific edges.
  • 'Ramp' node. To balance the image with a gradient (used with a 'Merge (minus)' node).
  • Adding a new channel in 'Roto'. We create a new 'output' (name it, click 'rgba' and 'ok'), so when adding different features like blur or grade, we can link that change to the node and it will only affect the new channel created. We could also use an 'Add (channel)' node instead, select the channel as 'Matte', and choose that it only affects a certain colour. We could also add a 'RotoPaint' to this and add shapes linked to different channels.

We can use keying nodes and techniques for chroma key such as:

  • 'IBK' (Image Based Keyer). We can subtract or difference with this node. It is considered the best option for getting detail out of small areas like hair or severely motion-blurred edges:
    • 'IBKColour' node. Rebuilds the background frame by frame, taking the blue or green colour.
    • 'IBKGizmo' node. Can select a specific colour.
  • 'ChromaKeyer' node. First, we can unselect 'use GPU if available' if our computer starts lagging. This node works better with evenly lit screens and more saturated colours. We could use it for despill, but it is better not to, as what we want from it is to extract the alpha.
  • 'Keylight' node. This is used for colour spill.
  • 'Primatte' node. This is a 3D keyer that puts colour into a 3D colour space and creates a 3D geometric shape to select colours from it. We first select 'Smart select background colour' and pick a colour while holding ctrl+shift; then we change to 'Clean background noise' and, holding ctrl+shift, pick the colour parts that are still showing in the alpha (and need to be removed). We could also click 'Auto compute' to create an automatic alpha and then retouch areas back into the alpha with 'Clean foreground noise'.
  • 'Ultimatte' node. This is used for fine detail, and to pull shadows and transparency from the same image.
  • Green/blue despill technique. We create an alpha with a 'Keylight' node and 'Merge (difference)' to the plate. Then we desaturate the background with a 'ColorCorrect' node and 'Merge (plus)' with the 'Keylight' node. Then we 'Shuffle' and set 'alpha' to black. Additionally, we could reduce the light in the background (saturate and grade) with 'IBKGizmo/Colour'. Some companies have created their own gizmo with all the required presets to despill.
  • 'EdgeExtend' node. This is used to extend edges so we can correct the darkened bits (smoother, less pixelated edges).
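As a rough illustration of the despill idea above (a simplified sketch, not Keylight's actual algorithm): a common green suppression clamps green to the average of red and blue, and a simple difference matte takes green minus the maximum of red and blue:

```python
def despill_green(r: float, g: float, b: float) -> tuple:
    """Clamp green to the average of red and blue (a common suppression)."""
    return r, min(g, (r + b) / 2), b

def green_difference_matte(r: float, g: float, b: float) -> float:
    """Screen-coloured pixels (green dominant) produce a positive matte value."""
    return max(g - max(r, b), 0.0)

# A green-screen pixel gives a strong matte; despill pulls green spill down.
print(green_difference_matte(0.1, 0.9, 0.2))  # strong matte on the screen
print(despill_green(0.5, 0.7, 0.4))           # green cast on skin reduced
```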

A standard chroma key process would have the following steps:

  1. Denoise plate
  2. White balance
  3. Alpha pipe
    1. Core matte
    2. Base matte
    3. Hair matte
    4. Gamma matte
    5. Edges alpha
  4. Despill pipe
    1. Edges despill (specific parts)
    2. Core despill (overall)
  5. QC
  6. Light wrap
  7. Regrain alpha and background
  8. ‘Addmix’ to link alpha and background

Green screen homework

The homework for this week is to improve the garage comp and to make a chroma key of a sequence provided by the professor, so we can put into practice all the techniques learnt in class.

Final green screen replacement
Alpha version

Garage comp WIP

Regarding my garage comp work in progress, the professor also sent us a Nuke comp with examples of how to set lighting and shadows with projections or geometry. I tried to follow the geometry example, as I only had 3D objects in my comp; however, I had some problems with the shadows, as they were not showing at all in the final result. I could see them in the alpha created with the 'Shuffle' node, but since I could not see them in the final output, I guess something is wrong with the 'Merge' node or with the concatenation of the comp. I will ask Gonzalo about this in the next lecture. I also added a texture to the right wall so it looks like it was previously painted but the paint has degraded and is peeling off the wall. I rotoed the part of the texture that I was interested in showing and then projected the texture onto a card in a 3D scene.

References

Cini, A (2023). Color Theory HSV or Hue, Saturation, Value Brightness Illustration Chart Vector (online). Available at: https://www.dreamstime.com/color-theory-hsv-hue-saturation-value-brightness-illustration-chart-color-theory-hsv-hue-saturation-value-brightness-image237125365 [Accessed 19 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 5: MASH Tool in Maya, & Satisfying Loop Animation Moodboard & First Draft Design

This week, we learnt how to use the MASH tool in Maya, and we also started to figure out what our loop animation would look like.

Moodboard

I did some brainstorming ahead of this class to have an idea of what I would like my animation to look like.

  • Loop animation possible themes:
    • Zen garden
    • My day routine loop (train trip)
    • Dough like texture getting reshaped
    • Laser cut
    • Double perspective sculpture rotating
    • Impossible shapes
    • Simple face expression changing because of interaction with other object
    • Solar system

I also checked some oddly satisfying videos on YouTube with animation examples, and one of them caught my eye at minute 7:04 of the video:

Oddly satisfying animation examples (arbenl1berateme, 2019)

I liked the style and the 'impossible' visual effect created by the rotating torus and the zigzagging ball.

However, I was not sure about these standard oddly satisfying loop animations, as they looked pretty much the same to me, and I felt it could be hard to do something different if I followed this style.

I also found an animation of a rolling ball following a rail on ArtStation (see animation here), which was simple, but its look reminded me of Alphonse Mucha's Art Nouveau designs:

Later on, as I am also very interested in astronomy, I found these solar system models interesting; they spin due to a gear mechanism added to them:

My main inspiration was this artwork of 'The Astronomy Tower' by Cathleen McAllister, which, in my opinion, conveyed both an Art Nouveau aesthetic and astronomy:

The astronomy tower (McAllister)

Once I had my design idea settled, I continued to research how to approach the animation in Maya.

MASH

In this week's lecture, the professor introduced us to the MASH tool in Maya, which could be used to make our loop animation.

With the MASH tool, after we 'create MASH network', we can create procedural effects with nodes such as:

  • Distribute. To arrange several copies of an object in formations.
  • Curve. To animate objects following a curve.
  • Influence. To use an object as a guide to influence the transforms of the network in MASH.
  • Signal. It adds noise to our animation so it varies like a signal wave.
  • Amongst other features…

I did not have the time to fully explore all the MASH features, but the few I discovered were really interesting and fun to play with. I tried to implement MASH in my design, but it seemed far easier to just keyframe every movement by hand (and I would also achieve a better result).

First draft design

I started by taking a picture of the solar system as reference, to see the position, shape, and distance of each planet and satellite relative to the Sun. I intended to make this solar system recreation as accurate as possible, but as it would not look appealing to the viewer (the planets and satellites would look too small and the Sun too big), I tweaked them a little so they would fit nicer in the frame. I made the planets slightly bigger than they are in relation to the Sun, and added only the most important satellites of each planet (Saturn and Jupiter have far too many satellites to fit them all in this model).

Once I had a definitive position for my solar system, I started to animate it. This animation took a bit longer than I thought, as I had to calculate how many times each planet would orbit the Sun in 300 frames (the length of one full loop of the animation) so the looping cannot be noticed. As I also wanted to make it as accurate as possible, I researched online how long each planet takes to orbit the Sun. Since Neptune is the slowest of all, I took this planet as the reference for looping the animation, so it rotates 360° around the Sun in the 300 frames. The rest of the planets orbit more times, Mercury being the quickest. I set the rotation to start slow, speed up towards the midpoint of the animation, and slow down towards the end, until they all stop in the same initial position. Obviously, the rotation is not accurate; if it were, Mercury's rotation would be invisible to the eye in relation to Neptune's. Then I did the same with the satellites of each planet, but these animations were more approximate than the planets', as it will not be as noticeable. I also parented the satellites to their respective planets, so they rotate around the planets but also follow the planet's orbit around the Sun. Then I gave some rotation to the Sun, but as I want to add glow to it, I do not think it will be visible. Lastly, I added the gears attached to the planets and parented them to their respective planets too, so they have the same rotation. I am not too convinced about the gears' shape, so I will more than likely change their design.
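The revolutions-per-loop calculation described above can be sketched in Python (orbital periods in Earth years are approximate values assumed for illustration):

```python
# Approximate orbital periods in Earth years.
PERIODS = {
    "Mercury": 0.24, "Venus": 0.62, "Earth": 1.0, "Mars": 1.88,
    "Jupiter": 11.86, "Saturn": 29.46, "Uranus": 84.01, "Neptune": 164.8,
}

FRAMES = 300  # one full loop; Neptune completes exactly one revolution

def revolutions_per_loop(planet: str) -> float:
    """How many 360-degree orbits a planet makes while Neptune makes one."""
    return PERIODS["Neptune"] / PERIODS[planet]

for planet in PERIODS:
    revs = revolutions_per_loop(planet)
    print(f"{planet}: {revs:.1f} revolutions, "
          f"{revs * 360 / FRAMES:.2f} degrees per frame")
```

For the loop to be seamless, each planet's revolution count would then be rounded to a whole number of turns, which is one reason the animation cannot stay fully accurate.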

I am happy with the planets' look and animation; however, I am thinking of changing the model of the gears, as they look too 'spiky' to me and not very realistic.

References


arbenl1berateme (2019). Oddly Satisfying 3D Animations [Compilation 5] – arbenl1berateme (online). Available at: https://www.youtube.com/watch?v=iLRsCtd5P9s [Accessed 12 February 2023]

Cogito (2015). 1900 Alphonse Mucha “Dessin de Montre” Jewelry Design Illustration for Georges Fouquet (online). Available at: https://www.collectorsweekly.com/stories/150738-1900-alphonse-mucha-dessin-de-montre-j [Accessed 12 February 2023]

McAllister, C. Cathleen McAllister (online). Available at: http://www.cathleenconcepts.com [Accessed 12 February 2023]

Müller, B. (2020). Impossible Oddly Satisfying 3D Animation (online). Available at: https://www.artstation.com/artwork/Ye43ed [Accessed 12 February 2023]

Staines & Son. The Diary Of An Orrery Maker (online). Available at: https://www.orrerydesign.com [Accessed 12 February 2023]

Willard, Jr., A. Willard Orrery. National Museum of American History (online). Available at: https://www.si.edu/object/willard-orrery:nmah_1183736 [Accessed 12 February 2023]

Categories
Advanced & Experimental Advanced Nuke

Week 5: 3D Compositing Process in Nuke & Garage Homework WIP

This week, we learnt the 3D compositing process to put together plates and CG, to do clean-ups, to regrain, and to do a beauty rebuild with a multipass comp.

The 3D compositing general process looks like the following:

  • Main plate clean-up and roto work. After finishing the clean-up, it is recommended to render the cleaned plate so we can use this pre-rendered version for the rest of the comp. This is done so Nuke has fewer nodes to calculate each time and the preview of the work goes quicker.
  • CG compositing. In this part, we can move on with the beauty rebuild, adjusting the AOVs or passes with subtle grade and/or colour corrections.
    • Basic grade match. With a 'Grade' node, we first do the white balance of the CG we are going to integrate into the plate, measuring the 'whitepoint' (whitest part of the CG) and the 'blackpoint' (darkest part of the CG) while holding 'ctrl+shift+alt'. Subsequently, we go to our background plate and measure the 'gain' (for the whites) and the 'lift' (for the darks) while holding 'ctrl+shift'. This will match the whites and blacks of the CG to those of the plate and integrate them together.
    • Multipass comp. In this technique, we first need to 'unpremult (all)' our CG so we can start splitting the AOVs or passes. This split is made using a 'Shuffle' node set to the desired pass we want to correct. Before editing the passes, we need to make sure to structure the nodes from the CG plate to a 'Copy' node with all passes merged together, and double-check that the CG looks exactly the same from the initial point (original plate) to the 'Copy' node. Sometimes it may look different, as some of the passes could have been exported wrong. Once we split our passes, we can proceed to 'Grade' them individually: we 'merge (plus)' the light passes and 'merge (multiply)' the shadows. We can also select an ID to create a colour map with a 'Keylight' node. With this node we can select a specific area of the model to adjust, as its features are separated into different saturated colour mattes. This way, we could then re-texture a part of the model using an 'STMap' node connected to the texture source. We can then relight with the 'position pass' and 'normal pass', followed by a 'Grade' of the master CG plate. We can finish our beauty rebuild with a 'Copy (alpha-alpha)' to copy the original alpha to the one created, and then we 'Premult'.
  • Motion blur. Motion blur will add more realism and dynamism to the movement of the added CG, as in 3D everything looks sharp and in focus, which is not realistic. We can add motion blur in two ways:
    • Method 1: adding a ‘vector blur (rgba)’ node, then link it to ‘camera’, and adjust ‘motion amount’ in the ‘vector blur’ node as desired.
    • Method 2: ‘remove (keep)’ node linked to ‘motion blur 3d’ nodes, and adjust this last one’s ‘motion amount’ as desired.
  • Chromatic aberration and defocus. We can add an 'aberration' node to match the original camera aberration of the live-action plate, making the scene more credible. Also, with a 'Defocus' node we can add depth to the scene, to differentiate between the sharp image and the out-of-focus image (depth of field). After adjusting these, we need to add a 'remove (keep)' node connected to an 'STMap' node to put the original distortion back into the scene.
  • Regrain. We can also add some grain to the scene with a 'Grain' node. Then, with a 'KeyMix (all)' node linked to the previous changes and the 'Grain', we can mix channels and add a mask to the previous changes made in the comp.
  • Effect card. We can add effects like smoke with a 'Card' node. We connect a 'shuffle (rgba to rgba with R to alpha)' node to the 'Card' and 'Grade' it. Then we 'copy (alpha to alpha)' and 'Premult' to create the alpha of the effect, and then we 'Defocus'. This is projected onto a 'Card' (connected to 'Scene', 'ScanlineRender', and 'Camera'). Finally, we add the 'STMap' to unfreeze the frame and 'multiply' to show the alpha created.
  • Lightwrap. We use this to add light to the edges, which can be adjusted with 'diffuse' and 'intensity'. Then we 'merge (plus)', as this is a light feature.
  • QC. Using the 'merge (difference)' node, we can see and assess the changes made and check whether there is any error. The 'Colorspace' node with the 'output' set to 'HSV' can be used to check the quality of the colours' hue (R), saturation (G), and luminance (B).
  • Final colour correction.
  • Export. The preferred format to export our comp is EXR. Some companies will also want a 'JPEG', 'Apple ProRes', or even 'Avid DNxHD', but that depends on the pipeline of each company.
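The basic grade match in the steps above is essentially a linear remap: the CG's measured blackpoint and whitepoint are mapped onto the plate's lift and gain. A minimal sketch of that maths (a simplified model of what a Grade node does, ignoring gamma and its other controls):

```python
def grade(value: float, blackpoint: float, whitepoint: float,
          lift: float, gain: float) -> float:
    """Linear remap: blackpoint maps to lift, whitepoint maps to gain."""
    a = (gain - lift) / (whitepoint - blackpoint)
    return a * (value - blackpoint) + lift

# CG measured blackpoint 0.05 / whitepoint 0.9, remapped onto a plate
# whose darks sit at 0.1 (lift) and whites at 0.8 (gain).
print(round(grade(0.05, 0.05, 0.9, lift=0.1, gain=0.8), 3))  # CG blacks -> plate darks
print(round(grade(0.90, 0.05, 0.9, lift=0.1, gain=0.8), 3))  # CG whites -> plate whites
```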

The homework for this week was to start to put together the elements that would form part of our garage comp, and also, include the machine provided by the professor following all the steps we have learnt today.

Following the reference pictures we got with the brief, I started to research for 3D objects I could include such as tools, tyres, a table, etc.

I also decided to re-watch this week's lecture recording to make sure I followed the compositing process step by step. This way, I started to understand the function of each node and technique, and to become more confident about creating a whole comp by myself without having to look at references in other comps. The first thing I added was the machine in the back room. I did a beauty rebuild with the separation of the passes and added a smoke effect with a card 3D projection. I feel this part went really well, as I did not have any issues along the process and the final look is pretty realistic.

Garage comp WIP with machine

After my back machine was fully set, I continued to add the 3D geometry to the comp with its textures. One problem I had with the objects is that they were really heavy and really jumpy when following the movement of the scene, so they were hard to work with.

My work in progress comp looks like the following:

Garage comp WIP with 3D objects
Categories
Advanced & Experimental Advanced Nuke

Week 4: CG Compositing in Nuke

This week, we studied how to do a CG beauty rebuild, using channels or passes of our CG to see its layers to then adjust them separately, relight them, and put them back together.

To start with the CG beauty rebuild, we first need our CG layers (usually the CG has already been exported like this). We can see all these layers separated in the 'LayerContactSheet', which contains a view of the passes in the EXR (e.g. diffuse, specular, reflection, etc.). The separation of the EXR into layers or passes (channels) is used to adjust each pass separately, to match the lighting and colour conditions of the background. In order to adjust each pass, we first need a 'Shuffle' node set with the specific pass (input layer) we need, to then 'merge (plus)' (+) the lights (diffuse, indirect, specular, and reflections) and 'merge (multiply)' (*) the shadows (AO or ambient occlusion, and shadow). Every pass must be graded separately, and then we can add a final 'Grade' and/or 'ColorCorrect' to the entire asset if needed.
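The plus/multiply recombination described above amounts to simple per-pixel arithmetic. A minimal sketch (a simplified model; the exact set of passes depends on the renderer):

```python
# Simplified per-pixel beauty rebuild: light passes are summed,
# shadow/occlusion passes are multiplied on top.
def rebuild_beauty(diffuse, indirect, specular, reflection, ao, shadow):
    light = diffuse + indirect + specular + reflection  # merge (plus)
    return light * ao * shadow                          # merge (multiply)

# Each pass can be graded individually before being recombined.
beauty = rebuild_beauty(0.4, 0.1, 0.2, 0.05, ao=0.9, shadow=0.95)
print(beauty)
```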

There are several types of 'render passes' or 'AOVs' (Arbitrary Output Variables):

  1. Beauty Rebuild Passes:
    • Material AOVs. To adjust material attributes (shader).
    • Light Groups. To adjust individual lights of a scene.
  2. Data Passes:
    • Utilities. Combined with tools to get various effects (e.g. motion blur, defocus, etc.).
    • IDs. To create alphas or mattes for different areas of the render.

There are some elements that can be used to double check or improve our CG beauty rebuild quality:

  • Cryptomatte. To see the different parts of the scene as colour-coded mattes.
  • KeyID. To create a mask from the ID pass.
  • AO pass. It creates a fake shadow, produced by the proximity of geometry to other geometry or the background.
  • Motion pass. It lets us see the blur of the motion clearly.

The process to extract a pass and edit it is the following:

  1. Unpremult (all)
  2. Link to ‘shuffle’ node (set with pass needed)
  3. ‘Grade’ and make adjustments needed
  4. Add back with ‘merge (plus)’ or ‘merge (multiply)’
  5. ‘Remove (keep)’ node
  6. 'Premult'
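The 'unpremult' and 'premult' steps that bracket this process are just division and multiplication by alpha; grading premultiplied pixels would darken semi-transparent edges, which is why we unpremult first. A minimal sketch:

```python
def unpremult(rgb, alpha):
    """Divide colour by alpha so corrections do not affect edge falloff."""
    return tuple(c / alpha for c in rgb) if alpha > 0 else rgb

def premult(rgb, alpha):
    """Multiply colour back by alpha after grading."""
    return tuple(c * alpha for c in rgb)

# A semi-transparent edge pixel: unpremult, grade (e.g. double the gain), premult.
edge, alpha = (0.2, 0.1, 0.05), 0.5
straight = unpremult(edge, alpha)
graded = tuple(c * 2 for c in straight)
print(premult(graded, alpha))
```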

Once we have made our colour correction and grading, we can relight the scene with the 'position pass', which is the 3D scene expressed as colour values (red=X, green=Y, blue=Z). In order to have a reference of the 3D space, we can use a 'PositionToPoints' node set with 'surface point' to 'position' and 'surface normal' to 'normal'. We then adjust the point size as we want, and we will see a 3D representation of the colour values. Once the representation is made, we can start to add lights with 'point' nodes linked to a 'Scene' node to put them together. This scene is then connected to a 'ReLight' node, which puts light, colour, material, and camera together (use alpha, and link 'normal vector' to 'normal' and 'point positions' to 'point'). To merge over the original background, we then 'Shuffle' and 'Merge'.
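Relighting from position and normal passes comes down to standard shading maths. A minimal diffuse (Lambert) sketch for one pixel, using its world position and normal from the passes (an illustration, not Nuke's actual ReLight implementation):

```python
import math

def lambert(position, normal, light_pos, intensity=1.0):
    """Diffuse contribution of a point light at one pixel,
    using the position pass (P) and normal pass (N)."""
    to_light = [l - p for l, p in zip(light_pos, position)]
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = [c / dist for c in to_light]
    n_dot_l = sum(n * d for n, d in zip(normal, direction))
    return intensity * max(n_dot_l, 0.0)  # surfaces facing away get no light

# A pixel whose normal points straight at the light is fully lit.
print(lambert(position=(0, 0, 0), normal=(0, 0, 1), light_pos=(0, 0, 5)))  # -> 1.0
```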

As the homework of the week, we need to composite a 3D modelled car into a background of our choice:

Final car compositing

I feel this practice was simpler than last week's homework; however, I still encountered some challenges I would like to research and study, such as adding 'fake' lights to the car's lights to make them look turned on, and getting rid of the glow in a specific area, like the one on the right door of the car, which does not really make sense there.

Categories
Advanced & Experimental Advanced Maya

Week 4: Rube Goldberg Machine Camera Set-up and Render in Maya

In this lecture, we focused on finishing our Rube Goldberg machine texturing, camera set up, and rendering the final outcome.

I continued adding the last textures and finishing touches to the design, such as the finish line numbers and some more neon lights on the edges of the planks and other components. I also modelled the buttons that switch the light bulbs on, and textured the bulbs with glow.

Moreover, I decided to animate some arrow lights on the top of the initial ramp to add another point of interest in the animation:

Arrow lights animated on ramp

After I finished with the texturing, I went on to set the camera movement using 'camera and aim'. This way, I only have to set the camera's 'translate', since the 'rotation' is adjusted by the aim. I tried to follow both balls, switching priority between one and the other depending on the point of the animation and which one was more important to follow at the time. Therefore, I not only framed the scene from the front view but also made the camera rotate 360 degrees around the machine, showing its back too.

Camera and aim set up with keyframes on ‘translate’

In the last bit of the scene, when the second ball has to reach the finish line, I had to reduce the duration, since it was way too slow. Therefore, I selected all the elements of the scene and, in the 'graph editor', scaled down the number of frames required for this last movement, reducing it from 800 to 700 frames. The following video shows a preview of the camera movement I set:

Camera movement preview

When my animation was fully set, I proceeded to set up the render. I thought of adding a chrome-textured background with the lighting of the skydome I had previously; however, it turned out to be problematic, as there were too many reflections and the render would take too long to finish. Maya also started to crash every time I tried to preview the render. Therefore, I decided to get rid of the chrome background and keep the original workshop background. I just lowered the light a bit so the glows added were more pronounced.

I was playing around with ‘Camera (AA)’, ‘Diffuse’, ‘Specular’, and ‘Transmission’ to get the best result without having to render for too long.

After two days of rendering, this is the final result:

Final render

I really enjoyed this project and I feel enthusiastic about 3D modelling and animation. I also feel I could improve the render by amending some details, like adding a dark, reflective background to darken the scene and make the neon lights more visible. However, due to limited time I was not able to do this (but I definitely will if I find some spare time before the end of term 2).

Categories
Advanced & Experimental Advanced Nuke

Week 3: Types of 3D Projections in Nuke

In this lesson, we saw the different techniques that can be used for 3D projection, such as patch projection, coverage projection, and nested projection, and we also analysed how to add texture and lighting to a 3D object, as well as the general problems we can encounter with this.

In 3D tracking, we need to avoid including the sky, as it would give us problems later on, in the same way that we avoid moving objects and reflections in roto.

When adding a 'RotoPaint' to a card in 3D space, we first need to freeze the frame with a 'FrameHold' node at the best position in the sequence for visibility and for tracking a specific point. Then we add the 'RotoPaint' or the patch we need, and add another 'FrameHold' to 'unfreeze' the frame. Then we premultiply it to create an alpha and use a 'Project3D' node to project it onto our card (the 'Project3D' node must be connected to the projection camera and another 'FrameHold' node). Lastly, we connect our card to the 'ScanlineRender' node, which is merged with the main plate.

In order to add texture to a 'Card' in 3D space, we use the same method as before, but this time we take the texture or picture we want to add, which we can 'ColorCorrect' and 'Grade' if needed, then 'Roto' the part we want from it, premultiply it, and place it in the desired perspective with a 'CornerPin2D'. Then we 'Transform' it to the dimensions we want and 'Merge' it with the main plate after adding a 'FrameHold'. Lastly, we need to 'Copy' the roto and premultiply it so we can project the alpha onto our 'Card'.

If we want to roto something in the scene to change its features (colour correct, grade, etc.), we can do the same as we did with the 'RotoPaint', but in this case we adjust the roto every 10 or 20 frames. We do not need to adjust the roto every frame, as it will follow the matchmove previously done, so just a few adjustments should be sufficient.

When we have several 3D projections we want to put together, we can use the 'MergeMat' node; if we use a regular 'Merge' node, the quality of the image can decrease and look different.

After seeing these 3D projection techniques, we were asked to practise them using footage of a street provided by the lecturer. For example, we could add something to a wall or the floor, change the texture of the windows, colour correct a specific element of the scene, etc. This is the result of my practice:

When 3D projecting on top of a 3D object or artefact, the types of projections we can use are:

  • Patch projection
  • Coverage projection
  • Nested projection (projection inside another projection)

We can find some issues when doing artefact projections that can be solved with the following techniques:

  • Stretching problem: the texture is stretched and not showing in the correct place. This can be fixed by adding a second camera projector on top.
  • Doubling problem: the texture is doubled. We can fix it by doing two separate projections.
  • Resolution problem: the texture looks pixelated. We can use a 'Sharpen' node to solve it; however, a more efficient solution is to add a 'Reformat' node with 'type' set to 'scale', then link it to the 'ScanlineRender', which is then connected to a second 'Reformat' node with the resolution of the original plate.

Lastly, we also saw how to build a 3D model taking a 2D image as reference. Using the 'ModelBuilder' node, we can create and adjust cards following the perspective of the 2D image, and then 'bake' this geometry into 3D space. We can add 'point light' nodes to set illumination with different intensities and colours, and to cast shadows. Another illumination node is the 'DirectLight', which is used as a fill light aimed at a specific point or direction.

Once we finished reviewing this week's theory, we were also asked to roto the hole in the scene of the Garage project and to remove the markers with patch projections. I made the roto pretty quickly and had no issues with it, but I struggled with the clean-up of two specific markers: for the two markers positioned by the hole in the wall, when I added the roto, the patch made with RotoPaint was showing outside the roto boundaries (right on top of the roto), so the wrong patch was showing.

After asking the professor for some help, he figured out that I had missed the lens distortion node at both the beginning and the end of the clean-up setup (to undistort the scene and then redistort it back).

Another issue I noticed is that the patches added on the floor markers were showing through the roto of the wall. I asked the professor again and found out that this part needs to be merged differently, as it is outside the roto. So I added a 'merge (stencil)' just for this part of the clean-up, then a 'shuffle (alpha-alpha)', and connected it to the roto's 'ScanlineRender' node. This creates a stencil of the patches, taking the roto as reference, so they do not show through the wall.

Final clean-up + roto

I had a lot of trouble with this homework and spent a lot of time trying to figure out why it was not working, but I feel this struggle was useful for familiarising myself a bit more with, and feeling more confident about, the node system used in Nuke.

Categories
Advanced & Experimental Advanced Maya

Week 3: Rube Goldberg Machine Simulation Bake & Texture in Maya

This week, we focused on baking our bullet simulation, to then add textures and set our camera movement.

After the whole bullet system is built and set up, we need to bake the simulation so the programme creates the animation keyframes for each active rigid body. To do this, I selected all the active rigid bodies, then selected 'Bake Simulation' in the 'Edit->Keys' tab. Once Maya had created the keyframes for each element, and since we no longer need the bullet system set-up, I selected 'Delete Entire Bullet System' in the 'Bullet' tab so all the bullet elements were deleted. I also manually keyframed the background gears, since I struggled a bit trying to animate them with bullet; every time I added a new hinge, the whole animation stopped working as I had set it up, so it was really time-consuming to adjust it all over again each time.

After baking the simulation, I proceeded to texture my design. I liked the cyberpunk-mixed-with-steampunk look my machine was getting, and decided to add some metal textures such as copper, gold, chrome, and brushed metal, as well as a glass texture on the helix slide, on the top part of the machine, and on the light bulbs. These reflective materials gave me the opportunity to add glow to the balls and to parts of some elements, such as the ring holders of the helix slide, some neons on the finish line, and the filaments of the light bulbs. The following examples inspired me with the colours, mood, and composition of the scene.

Before adding the textures, I searched for an HDRI on polyhaven.com and downloaded a wood workshop HDRI with low, warm lighting conditions. I wanted to give the feeling that this machine was made in this workshop from random materials found in it. I also researched textures and references such as wood, old gears, and light bulbs:

Wood workshop

I also found a tutorial on YouTube on how to create a glow effect:

https://www.youtube.com/watch?v=E9iIf95BCQ4

The following sequence of rendered previews shows the textures I used:

The light bulbs were modelled and textured later on, as I thought the space at the end of the base was looking a bit empty and boring. I modelled them with the idea that they would turn on when the ball hits the finish-line planks and triggers a switch. I modelled the base and outer shell of an old-school light bulb and added the filaments inside, which I textured separately to create the glow effect. The glass of each light bulb is also doubled to give a sense of thickness and volume.

I could not finish the final design this week, as I added more elements than I initially planned and it took me longer than expected, but overall I am very happy with how this is turning out.

Textures and HDR:

  • Base wood texture and planks wood texture – https://polyhaven.com/a/wood_cabinet_worn_long
  • Metal texture with marks – https://quixel.com/megascans/home?category=imperfection&search=metal&assetId=uh4obghc
  • Untreated wood texture – https://quixel.com/megascans/home?category=surface&category=bark&assetId=wghjcggn
  • Vintage number 1 – https://www.freepik.com/free-vector/ornamental-1-background_1138096.htm#query=no%201&position=31&from_view=search&track=sph
  • Vintage number 2 – https://www.freepik.com/free-vector/ornamental-2-background_1138095.htm#from_view=detail_alsolike
  • Wood workshop – https://hdri-haven.com/hdri/repair-facility
Categories
Advanced & Experimental Advanced Nuke

Week 2: 3D Clean-up and 3D Projections

In this class, we learnt how to use the 3D projection in Nuke to clean up scenes or add elements with textured cards, rotopaint, rotoscoping, and UVs.

In Nuke, we can use a ‘Project3D’ node to project anything onto a 3D object through a camera. We can use this node with different techniques:

  • 3D Patch with a textured card. We can project a ‘Text’ node, an image, or a texture onto a ‘Card’ node, which is then linked to the ‘Scene’ and ‘Premult’ nodes and merged over the main plate.
  • 3D Patch with a projection on matchmove (mm) geo. First, we need to find a reference frame and add a ‘Framehold’ node to freeze it. Then we clone the area using a ‘Rotopaint’ node followed by ‘Roto’ and ‘Blur’ nodes, which are then premultiplied. Next, we add another ‘Framehold’ (so the patch shows across the whole timeline) or, alternatively, set ‘Lifetime’ to ‘all frames’ in the ‘Rotopaint’ node; the second ‘Framehold’ is the recommended option. Afterwards, we add the ‘Project3D’ node linked to a ‘Camera’ that acts as the projection camera, with another ‘Framehold’ node on this camera. Finally, we add a ‘Card’ node onto which the ‘Rotopaint’ work is projected, and link this ‘Card’ to the ‘Scene’ that will be merged with the main plate.
  • 3D Patch with projected roto. This time, we start with a ‘Project3D’ node as input to the ‘Card’ (linked to the projection camera with a ‘Framehold’, and connected to a ‘Scanline render’ node). Afterwards, we create the ‘Roto’ on one or two frames only (and tick ‘replace’). Then we add another ‘Project3D’ node as input to a second ‘Card’ (it must be set up the same as the first one), which is linked to a second ‘Scanline render’. Finally, we can add a ‘Grade’ node between the main plate and the second ‘Scanline render’ to grade the roto we have just created.
  • 3D Patch with projected UVs. The starting point is a ‘Project3D’ node (linked to the ‘Camera’ and the last ‘Scanline render’) connected to a ‘Card’. This ‘Card’ is first input into the first ‘Scanline render’, which is also connected to a ‘Constant’ node with a 1:1 aspect (this fixes the frame for us). Then we can ‘Rotopaint’ the part we need to patch and ‘Premult’. We ‘Reformat’ to go back to our video’s original resolution, project this onto a ‘Card’ connected to the second ‘Scanline render’, ‘Reformat’ the second ‘Scanline render’ again, and merge it with the main plate.
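All four techniques end the same way: the patch is premultiplied and merged over the main plate. As a minimal sketch (plain Python, not Nuke's API, with made-up pixel values), ‘Premult’ multiplies the colour by its alpha, and a default ‘Merge’ performs the ‘over’ operation:

```python
# Illustrative sketch of Premult followed by Merge (over):
#   premult: rgb * alpha
#   over:    out = A + B * (1 - Aa), with A already premultiplied.

def premult(rgb, alpha):
    """Multiply each colour channel by the alpha."""
    return [c * alpha for c in rgb]

def merge_over(a_rgb, a_alpha, b_rgb):
    """Composite premultiplied A over B."""
    return [a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb)]

patch = premult([1.0, 0.5, 0.0], 0.5)  # hypothetical half-transparent patch
plate = [0.2, 0.2, 0.2]                # hypothetical plate pixel
print(merge_over(patch, 0.5, plate))
```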

To review our final shot after adding these 3D patches, we use a ‘Merge’ node connected to the final output and the main plate, with its operation set to ‘difference’.
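A quick sketch of why this works (plain Python, not Nuke's API): the ‘difference’ operation shows the absolute per-channel difference between the two inputs, so every pixel we did not touch renders black and only the patched areas light up for review.

```python
# Illustrative sketch of Merge (difference): out = abs(A - B) per channel.
# Untouched pixels cancel to black; edits stand out.

def merge_difference(a_rgb, b_rgb):
    return [abs(a - b) for a, b in zip(a_rgb, b_rgb)]

plate  = [0.4, 0.4, 0.4]                      # hypothetical plate pixel
output = [0.6, 0.4, 0.4]                      # hypothetical patched pixel
print(merge_difference(plate, plate))         # untouched -> [0.0, 0.0, 0.0]
print(merge_difference(output, plate))        # patched area shows up
```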

In order to see the point cloud generated by the 3D camera tracker in the 3D space, we can use the ‘Point cloud generator‘ node. We just need to connect it to a ‘Camera’ and the main plate (source), run ‘analyse sequence’ in the ‘Point cloud generator’ node, and link it to a ‘Poisson mesh‘ node. Alternatively, in the ‘Point cloud generator’ node, we could select all the vertices of the cloud in the 3D space, create a group, and select the ‘Bake selected groups to mesh’ option. We can also use the ‘Model builder‘ node to create a model taking our point cloud as reference. To do this, we connect the ‘Model builder’ to a ‘Camera’ and the main plate or source, then enter the node and create a ‘Card’ from there. We can place it and drag its corners wherever we wish, then readjust it across other frames (one or two frames of adjustment are usually enough).

This week’s homework consisted of practising all the techniques we saw today: 3D tracking a provided plate, placing the floor and back-wall grids, adding cones on the markers, and placing two 3D geometries (all these elements need to be match-moved with the scene’s camera movement).

The following images and videos show the process I followed and the final outcome of my practice.

Final 3D projections practice
Final 3D tracking and matchmove practice

This 3D tracking exercise has been a bit hard to put together; it took me a while to understand what I was doing and why, as I needed to think in both the 2D and the 3D space. Once I had the nodes figured out, though, the rest was really easy to set up. I guess practice and experience are the key to getting the hang of this.

Categories
Advanced & Experimental Advanced Maya

Week 2: Rube Goldberg Machine Modelling & Animation in Maya

In this class, we learnt how to bake the simulation that we had already set up, then add texture, refine the design of our Rube Goldberg machine, and animate the camera movement of our scene.

Once all our bullet actions are adjusted and we are happy with the dynamics of the animation, we proceed to ‘bake simulation’ on all the active rigid bodies, so the whole bullet set-up is removed and converted into keyframes instead. It is also important to select ‘delete entire bullet system’ to get rid of any bullet set-up left in our outliner.

Once the dynamics of our scene are sorted, we can animate the camera movement by creating a new ‘camera and aim’ and setting its position and aim at the same time. We can also add texture to our scene and build up the final touches to make it look presentable.

This week, I focused on finalising my Rube Goldberg machine’s design and dynamics. I added a different route for the second ball, with a helix slide, a clock gear, and a second finish line. I also refined some of the elements by adding edge loops, then smoothed them by pressing ‘3’.

The next step of this project will be to bake the active rigid bodies, so the programme creates the keyframes of the movements set up with the bullet tool, and then to texture the machine and set the camera movement of the scene.

Categories
Advanced & Experimental Advanced Nuke

Week 1: 3D Tracking in Nuke

In this first class, we started to dig into the 3D space in Nuke for the first time. We learnt how to correct the camera lens distortion of a scene and how to use 3D tracking to add geometry or texture to a scene.

In order to change the distortion of an image depending on the type of lens effect desired, we can use a ‘Lens distortion‘ node. One option is the automatic mode, where the programme analyses the scene, detects its horizontal and vertical lines, and corrects the distortion accordingly. Alternatively, we can draw the horizontals and verticals of the scene manually and then ask the programme to solve the distortion following the lines we have created. Another way to change the distortion of a scene is to use an ‘STMap‘ node instead. This node is based on a two-colour (red/green) map of the scene, created by adding a ‘Shuffle’ node set to shuffle the forward distortion into the red and green channels. After shuffling, we add the ‘STMap’ node and set the ‘RGB’ channel to the ‘RGBA’ UV channels, which applies the distortion to the scene. We can also remove the distortion using the same ‘Shuffle’ set-up but shuffling backwards instead.
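Conceptually, an STMap works like a lookup table: for every output pixel, the map's red and green channels store normalised (u, v) coordinates saying where in the source image to sample. A minimal sketch (plain Python with nearest-neighbour sampling, not Nuke's API; the tiny 2×2 images are made up):

```python
# Illustrative sketch of an STMap: the map's red channel is u, green is v,
# both normalised to 0..1, telling us where to sample the source image.

def apply_stmap(src, stmap):
    h, w = len(src), len(src[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u, v = stmap[y][x]                       # red = u, green = v
            sx = min(w - 1, int(u * (w - 1) + 0.5))  # nearest-neighbour sample
            sy = min(h - 1, int(v * (h - 1) + 0.5))
            out[y][x] = src[sy][sx]
    return out

src = [[1, 2], [3, 4]]
identity = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 1.0), (1.0, 1.0)]]  # no distortion
flip_x   = [[(1.0, 0.0), (0.0, 0.0)], [(1.0, 1.0), (0.0, 1.0)]]  # mirror horizontally
print(apply_stmap(src, identity))  # [[1, 2], [3, 4]]
print(apply_stmap(src, flip_x))    # [[2, 1], [4, 3]]
```

A real lens-distortion STMap is just a more subtle version of `flip_x`: each (u, v) is nudged slightly towards or away from the lens centre.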

After this, we saw how to create geometry in the 3D space, such as spheres, cubes, cards, etc. To import or export geometry we can use the ‘ReadGeo’ (import) and ‘WriteGeo’ (export) nodes. We can also transform this geometry with the ‘TransformGeo’ node, or change the texture/surface features, like specular or transparency, with the ‘BasicMaterial’ node. Once the geometry is set, we can add illumination to the scene with a ‘Light’ node, controlling the intensity, direct or indirect light, and the colour of the light. The ‘Sharpen’ node can also be used to enhance image detail so Nuke can read it better (for tracking purposes).

Since all these settings make our project heavier and longer to render, we can ‘Precomp’ a part of our script that is already finished, so Nuke does not have to recalculate that branch every time we render.

Following on, we also studied how to jump from a 2D scene to a 3D space using the ‘Scanline Render‘ node. Pressing ‘tab’ on the keyboard toggles between the 2D and 3D views in Nuke. We can also add a ‘Camera‘ node to decide the camera movement and the framing we want for the scene.

Lastly, we saw how to 3D track a live action shot so we can add objects or texture in the 3D space:

  1. Using a ‘Camera Tracker‘ node, we set up the type of camera lens used to film the shot and fill in the rest of the scene’s settings (such as range, camera motion, lens distortion, focal length, etc.). We could also leave this information out, so the programme just tracks automatically.
  2. Once everything is set, we track our scene so the programme detects and creates several tracking points along the scene (we can choose how many tracking points we want the programme to create).
  3. Once the programme has finished creating the tracking marks, we can check the solve error that has been generated. If it is over 1, it is recommended to redo the tracking, as it will cause problems later on. If this number is below 1, we can then delete the unsolved or rejected tracking marks.
  4. Next, we select a specific point in the centre of the scene and set it as the origin point of the shot.
  5. Then we select the track marks that form the ground of the scene and tell the programme that this is our ground plane.
  6. After our scene is tracked and properly set, we can export this ‘scene map‘, keeping the output linked to our 3D tracker node so every change we make is reflected in the scene map created. We could also export the ‘camera‘ only, but with the output unlinked, so the changes we make in the 3D tracker node are not reflected in this ‘camera’ export.
  7. Finally, we can now add geometry, cards, etc., to our scene and place them following the ‘camera cloud‘ created in the exported scene. These elements will now follow the camera movement and 3D space of the scene.
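The "below 1" rule of thumb in step 3 refers to the reprojection error measured in pixels. As a rough sketch (plain Python, not the CameraTracker API; the track names and error values are made up), the solver keeps tracks whose individual reprojection error stays under a threshold and reports an overall RMS error:

```python
# Illustrative sketch of track filtering by reprojection error.
# A track's error is how far (in pixels) its reprojected 3D point
# lands from its tracked 2D position.

def rms_error(errors):
    """Root-mean-square of the per-track reprojection errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def filter_tracks(tracks, max_error=1.0):
    """tracks: list of (name, reprojection_error); keep the good ones."""
    return [name for name, err in tracks if err <= max_error]

tracks = [("t1", 0.3), ("t2", 0.8), ("t3", 2.5)]  # hypothetical tracks
print(filter_tracks(tracks))                      # ['t1', 't2']
print(round(rms_error([0.3, 0.8]), 3))            # 0.604
```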

As our assignment of the week, we were asked to play around with what we learnt today and try to add geometry and card planes to the provided shot, using the ‘camera tracker’ node.

3D tracked scene with planes and geometry included

I was a bit intimidated by 3D spaces and Nuke’s node system; however, in the end I found it quite straightforward and easy to set up and control.