Categories
Advanced & Experimental Advanced Nuke

Week 7: Despill Correction Tips, Creating Gizmos in Nuke, & Garage Homework WIP

In this lesson, we learnt how to make despill corrections when removing green/blue screens, saw how to create our own personalised gizmos in Nuke, and lastly asked questions related to our garage homework WIP.

Despill correction tips

  1. When keying to remove a green screen and then removing saturation, we can roto some parts of the shot and link this roto to an ‘Invert’ node so the despill does not affect those specific parts.
  2. We can also correct edges with an ‘IBK Colour’ node set to the ‘blue’ colour only; then we add a ‘Grade (alpha)’ so it only affects the alpha, correct the edge with ‘Filter Erode’ and ‘Blur’, and lastly ‘Merge (screen)’. We could also add an ‘Edge Blur’ to soften sharp edges and a ‘Clamp’ to keep the merged alpha values within range.
  3. With the ‘Add mix’ node, we can merge alphas and set how much alpha we want to see.
  4. Additive key: after a ‘Merge (minus)’ we desaturate and grade, and then ‘Merge (plus)’ to a ‘Constant’ node with the green colour as reference.
  5. Divide/mult key: we replace spill with a ‘Merge (divide)’ between the chroma plate and the chroma reference plate, and then ‘Merge (multiply)’ with the background plate (see the sketch after this list).
  6. When the green/blue screen has uneven luminance across the shot, we can correct it by connecting a ‘Constant’ node, set to the colour of the darkest part of the green/blue screen, to a ‘Merge (average)’ node, creating a ‘Constant’ whose colour has the same luminance. Then we ‘Merge (minus)’ with a ‘Keylight’ for despill.
  7. We could add a ‘Light wrap’ node to add a light glow around specific areas. In this case, we ‘Merge (plus)’ it over the background.
  8. An inverted matte can be used to remove light from an outside edge. We just ‘Invert’ the matte, ‘Roto’ the required parts, and ‘Merge (mask)’ to the matte. We could also add a ‘Grade’ node with a mask linked to this ‘Merge (mask)’ node so we can colour correct that specific edge.
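As an illustration of the divide/mult key in tip 5, here is a minimal sketch in Nuke Python; the Merge operations come from the tip itself, while the file paths and node names are assumptions for the example:

```python
import nuke

fg = nuke.nodes.Read(file='chroma_plate.####.exr')        # foreground chroma plate
ref = nuke.nodes.Read(file='chroma_reference.####.exr')   # clean screen reference
bg = nuke.nodes.Read(file='background.####.exr')

# Merge inputs: input 0 is B, input 1 is A, and 'divide' computes A/B
divide = nuke.nodes.Merge2(inputs=[ref, fg], operation='divide')
mult = nuke.nodes.Merge2(inputs=[bg, divide], operation='multiply')
```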

How to create gizmos

First, we select the nodes we want in the gizmo and group them (Ctrl + G). Then, in the created group node’s options, we click the ‘edit’ button and drag and drop the features we want (controllers). We can label these controllers by clicking on the little circle next to each one. We then link each controller to the corresponding node control (hold Ctrl and drag and drop from the main node to the grouped node).
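The same steps can be scripted. A minimal sketch (Nuke Python), assuming some nodes are already selected and a Blur node named ‘Blur1’ ends up inside the group; the knob names are illustrative:

```python
import nuke

group = nuke.collapseToGroup()            # equivalent of Ctrl + G on the selection

tab = nuke.Tab_Knob('controls', 'Controls')
group.addKnob(tab)

# expose the blur size of a node inside the group as a top-level controller
link = nuke.Link_Knob('blur_size', 'Blur Size')
link.makeLink('Blur1', 'size')            # 'Blur1' is an assumed internal node name
group.addKnob(link)
```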

Green screen and despill homework

The homework for this week was to remove the green screen from a hero shot of a girl and comp her over a snowing forest background, as well as add some of the background snow to the foreground.

First, I used a ‘Keylight’ node set to detect just the green colour and a ‘Merge (minus)’ to the main plate to only see the greens of the shot. Then I added a ‘Roto’ around the girl’s eyes and an ‘Invert’, linked as a mask to the saturation node, to preserve the little amount of green in her eyes. Then I linked a ‘Merge (multiply)’ node from the background plate to the foreground plate to bring some of the background’s luminance onto the girl. This was also ‘Merge (plus)’-ed to the foreground to add that luminance to the scene.

Separately, in another block of nodes, I used an ‘IBK Colour’ node to key the green screen of the foreground. Then I desaturated it and added another ‘Grade’ with ‘Filter erode’ and ‘Blur’, and a ‘Merge (screen)’ to the previous ‘Grade’, so I get more detail and luminance from the girl’s hair. Then I ‘Copy’ this alpha to the main foreground alpha, and ‘Add mix’ these alphas over the background plate.

In the background plate, I used a luminance ‘Keyer (alpha)’ node to select only the colour of the falling snowflakes. Then I ‘Copy’ this alpha to the background plate and ‘Premult’ to create the alpha that will be added to the foreground with a ‘Merge (over)’ node.
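A minimal sketch (Nuke Python) of that snowflake pull; the node chain follows my comp, while the file paths and names are illustrative:

```python
import nuke

bg = nuke.nodes.Read(file='snow_forest.####.exr')   # assumed background path
fg = nuke.nodes.Read(file='hero_fg.####.exr')       # assumed keyed foreground

# luminance key the bright snowflakes into the alpha
keyer = nuke.nodes.Keyer(inputs=[bg], operation='luminance key')
copy_a = nuke.nodes.Copy(inputs=[bg, keyer], from0='rgba.alpha', to0='rgba.alpha')
snow = nuke.nodes.Premult(inputs=[copy_a])

# put the premultiplied snow back over the foreground
comp = nuke.nodes.Merge2(inputs=[fg, snow], operation='over')
```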

Finally, I colour corrected and graded the overall result and rendered the alpha and the final comp.

Final hero shot
Alpha version

I am not totally sure about the amount of hair detail that is visible in this version, so I will ask the professor in the next class (corrected version added to the Advanced Nuke – Week 9 post).

Garage homework WIP

Lastly, I asked Gonzalo about my issue with the shadows not showing in my garage comp. He found that the ‘Grade’ node placed after the ‘Shuffle’ node to create the shadows’ alpha had the ‘black clamp’ option ticked, so I had to deselect it and select the ‘white clamp’ option instead so the blacks of the shadows started to show. However, although the shadows were finally showing, I feel they are too harsh and saturated, and I could not figure out how to soften them. I tried grading and desaturating them, but they still looked too black and unnatural to me. Also, the hole in the wall is receiving the shadow from the chain hanging on the wall, and it looks like there is a plane receiving this shadow. This issue is due to the card added on that wall to receive the cast shadow, so I tried adding a ‘Merge (stencil)’ from the roto I have of that wall, but it did not work for some reason. I will have to ask Gonzalo in the next class.

Categories
Collaborative

Week 7: Team Meeting – Environment, Characters, Objects, Animation, & Interactions

This week, I had a meeting with the girls at the beginning of the week to review what we had done and organise what we need to do, and another meeting with the lecturers and external studio partners to agree on final designs and interactions.

This week I finished modelling the child’s ghost with textures, taking the following as references:

Also, we decided that the team needed to meet with the lecturers to discuss the technicalities and issues we could face when importing meshes, textures, and animation into Unity. The VR girls confirmed that Unity can accept simple textures with normals, bump maps, roughness, etc., but that these should be embedded in the FBX file exported from Maya.

Since I am not familiar with exporting FBX from Maya, and even less with Unity, I downloaded Unity so I could test my models directly in it before sending the final ones. The VR girls explained that the textures seemed to need relinking in Unity, which could be a problem and time-consuming. Therefore, we agreed that, since I know how and where the textures should be linked, I would test the models in Unity, relink the textures, and export a Unity package with the model and textures already set. Also, I had to redo the UV maps of the ghost models, as when I combined all the parts of the model into one single mesh in Maya, the UV maps got messed up. I also tried to simplify the model by removing the double faces of every part and getting rid of unnecessary mesh that was not going to be seen (like the torso, arms, legs, and feet).

We also thought about the possible memories that could appear when triggered by object interactions (we made a list in Miro):

Also, we in MA VFX had our first collaboration project presentation to the class and the tutor, so we put together a PowerPoint presentation taking as reference some of the slides used in the 3D Animation and VR presentations (all of us contributed to these presentations).

Team meeting

Later this week, we had the meeting with the lecturers and the external studio partners. We discussed the possibility of buying assets using the budget offered by the university, as we had lots of assets to model and very limited time. Ana then asked us to make a list of assets marking which ones we had to model, which ones we could download for free, and which ones we needed to buy.

We also reviewed the model of the environment Martyna and Jess have been putting together:

  • Corridor:
    • It needs to be longer and have the final room in between (not at the end). So each side would have 5 doors and then the third door on the left would be the final pod.
    • It should be a dark place with just one point of natural light at the end.
    • It could have strip lights that are not working (or one could be flickering). These lights need to be organic, not straight industrial lights.
    • It should have broken doors with holes in them. We need to add something in the way of the broken doors, so the main character cannot access the rooms.
    • We considered the idea of these corridors being underground or flooded but, due to time limitations and the fact that the whole structure would need to be redesigned, we disregarded it. Instead, we considered adding ramps that slightly change the level of the floor in the transition from the waiting area to the corridors.
    • One of the rooms could have an image of trees moving in the wind shown on a screen.
    • The environment needs to look close to our present (familiar), so the user can empathise with it (not futuristic or high tech).
    • Natural light overall. We can have working lights (when switched on), but not all of them are working (some could be flickering or hanging from the ceiling).
  • Final room:
    • It has natural light (a single light source would be more dramatic).
    • It has a broken wall where the deer will appear.
    • Everything is decrepit and broken.
    • The light could be moving with the reflection of the water ripples (from the half-flooded floor).
  • Main waiting room:
    • We need to agree on the textures of the walls, which would be broken, mouldy, with rusty pipes showing, etc.
    • The ceiling will also have some broken parts with natural light coming through.
    • The walls could have panels, decoration from the past, murals that are aged, etc.
    • We need to add more than one corridor accessible from the waiting room (but not all operational; in this beta version of the VR experience, just one corridor will be accessible).
    • In the middle, there is a pond with water and dead fish floating.
  • Objects/memories:
    • Teddy bear. We thought of making it look as if it was made out of scrap materials (found metallic bits); however, we need to be careful not to make it look too futuristic, so we disregarded this idea. Maybe we could change it to a toy car, for example. This toy will show the memory of a child playing with it and then being grabbed by their mother to be euthanised, so the child drops the toy.
    • Fingerprint scan. In this memory, we thought of showing someone being forced to scan their fingerprint to sign their contract to be euthanised. However, after discussing this with the lecturers, we felt it should be shown as an option and not as something forced. So instead, we thought of a parent grabbing the child’s hand to help them scan it.
    • Headset. It could reproduce the sound of a radio station that was trying to bring some happiness to people (happy music cut off by the voice of the radio presenter).
    • Diary. This object will only trigger a voice-over of somebody reading the diary. When the main character picks up the diary and opens it, some bugs would come out from the bottom of it.
    • Poster. It could show health and safety advice like ‘wear a mask’, etc.
    • Daily objects. These would not trigger any memory but could show some bits from the past, like a Converse shoe, a diary, etc.
    • Megaphone. From where the announcements were made to call people’s turn to die.

We also talked about how the VR team will construct the UI of the VR experience. The lecturers pointed out that this needed to be more an emotional experience than a game, so the user’s attention should be driven through the environment using sounds or vague scenes from the past (memories) triggered by passing near a place or touching an object. Some points mentioned regarding the UI were the following:

  • Main character’s POV. It could show the texture of a helmet (like scratched or dirty glass). We could also add some info on the helmet’s glass, like icons (these are less distracting and intrusive than letters). We could show UI for the memory objects, but we should not give away much information.
  • Lobby/entrance. We could add a welcome hologram.
  • Final room. There is a mechanism in the wall that brings up a cell with the human skeletons of the mother and the child hugging. Furthermore, in the collapsed wall, there is a deer that looks up at us and then leaves (showing first the deer and then the skeletons). We could hear the singing of the ghosts.


Categories
Advanced & Experimental Advanced Maya

Week 7: Loop Animation MASH, Texturing, & Final Design in Maya

This week, I focused on finalising the model and adding textures. Additionally, I added a final animation using MASH.

I decided to change the base of the model to a straighter surface with ‘windows’ on it and stars as decoration. I tried to add a tinted glass texture to the windows’ interior so that, mixed with the surface underneath, it would look like translucent glass. Then I added the platform and the dome of the model, and I also created a ‘mechanism’ based on a comet shape pushing a spinning plank that was keeping the handle from rotating (using manual key framing). In addition, I added some stars rotating around the model, using a torus as a reference shape and then positioning and rotating the stars with the MASH distribute tool.

Once the model was final, I started adding textures and lights. I added a space-like background to the interior of the dome and a yellow glow ring to the edge of the dome. Then I added brushed metal and gold textures to the poles, gears, and satellite rings. For the planets’ actual rings (not the satellite rings), I added a texture from a picture of the real rings of Saturn so they can be differentiated. For the Sun, I added a bubble-like texture that, when changed to a yellow-orangish colour, looks like the Sun’s surface; I added this texture so the glow did not look like a flat, pale yellow. I also added a more saturated glow to the planets and satellites, and some neon yellows, purples, and blues to some edges to make the model stand out from the dark background. The base is mostly purple, with wooden window frames, purple translucent glass for the windows, and golden stars for decoration. The handle and the spinning plank are textured in wood and brushed metal, and the comet has a ‘fire’-like texture and glow (like the Sun’s). The base has a wooden floor pattern with a golden edge and a golden trail where the comet’s rotating mechanism is supposed to be attached.
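For the glow materials (the Sun and the dome ring), the setup boils down to an emissive shader. A minimal sketch (Maya Python), assuming Arnold’s aiStandardSurface and an object named ‘sunSphere’, both of which are illustrative:

```python
import maya.cmds as cmds

shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='sunGlow_mat')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='sunGlow_SG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

# yellow-orangish emission so the surface reads as glowing rather than flat
cmds.setAttr(shader + '.emission', 1.0)
cmds.setAttr(shader + '.emissionColor', 1.0, 0.55, 0.1, type='double3')

cmds.sets('sunSphere', forceElement=sg)   # 'sunSphere' is an assumed object name
```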

Categories
VFX Careers Research

VFX Careers Research – Job 3

VFX Compositor

Several weeks ago, I was not sure if compositing was for me. However, after developing my skills in Nuke and seeing how assets with different formats, lighting, textures, etc. can be put together using lots of different techniques to create a final scene or sequence, it has really taken my full attention and interest. I love modelling and texturing, but I also enjoy putting everything together and creating different environments using varied effects, lighting, and colours. Being responsible for the final look of a piece could be overwhelming but also really rewarding in the end.

A VFX Compositor is in charge of creating the final look of a scene or sequence, taking all the digital materials needed, such as live-action footage, CGI, and matte paintings, and combining them into a single shot. These materials are combined so that they look like they belong together in the same scene. A key skill for compositors is creating realistic lighting, since relighting the shot to make it convincing to the viewer’s eye is important to the success of the sequence. Another aspect to take into consideration is ‘chroma keying’, a technique in which a specific colour or lighting of a shot is picked to be altered or replaced. This method is commonly used with ‘green/blue screens’, where a saturated green/blue background is placed behind the live-action footage and then replaced in post-production with the desired background or CGI.

What I like most about this is the countless environments that can be put together, since the only limit is the imagination. There are a lot of examples of amazing environments created this way, but one that has caught my attention is the ‘upside-down world’ created for the Netflix series Stranger Things:

Stranger Things series’ upside-down environment (DNEG, 2022)

This shot was a one-minute-and-six-second master shot against a blue screen, composited from five different plates, four characters, CGI creatures, and an environment made of both real scenography and VFX. To make it look ‘realistic’, the visual effects had to match the live-action footage so there is no feeling of a low-quality background screen. Therefore, they used highly detailed assets and textures that, at the same time, had to be optimised to be more efficient. This demonstrates the level of expertise a Compositor needs to keep a balance between high quality and efficiency, which requires vast knowledge and resources usually acquired through years of practice and experience.

Apart from the technical and creative side of the job, it is also important to be aware of, or at least observant about, the physics of our surroundings: for example, the difference between how the leaves of a tree move when the wind blows versus when a helicopter approaches, or how cast shadows differ depending on the time of day, the texture of the surface, any artificial light added, etc. DNEG’s work on the Uncharted movie shows a lot of examples and techniques used to make these physics look as realistic as possible:

Uncharted VFX breakdown (DNEG, 2022)

This position requires a lot of attention to detail and a full understanding of the software used, such as Nuke, since many times I have found myself changing the aesthetic of a scene for not having enough knowledge to tweak certain features the way I want. Getting stuck in the process is certainly how a person learns and develops their knowledge, and it also builds their problem-solving skills; however, I can understand why this position is not offered at ‘entry level’, since it requires refined skills and efficiency, which I hope one day to achieve.

References

DNEG (2022). Behind the VFX | Stranger Things Season 4 | DNEG (online). Available at: https://www.youtube.com/watch?v=RYP8yscXFyY [Accessed 24 February 2023]

DNEG (2022). Uncharted VFX Breakdown | DNEG (online). Available at: https://www.youtube.com/watch?v=McI9uFac_hw [Accessed 24 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 6: Satisfying Loop Animation Basic Model & Animation

This week, I tried to finish the base model, adding all the details and animations required so it is ready to be textured.

This week I focused on adding all the details like the gears, the planets’ rings and satellites, the handle, the base, and further decorative details like the star and the half ring around the Sun. I also animated the gears and the handle rotation with key framing: the handle starts rotating, and the gears and planets attached to it move at the same time.

The overall model follows a consistent aesthetic; however, I feel I am going to change the base, as I am not convinced by the shape I gave it. Also, I am thinking of adding a dome that could have a space texture as a background.

Categories
Advanced & Experimental Advanced Nuke

Week 6: HSV Correction Process & Chroma Key in Nuke, & Garage Homework WIP

In this lecture, we learnt how to correct Hue, Saturation, and Value (luminance), and how to do chroma keying in Nuke. We also reviewed our garage comp WIP.

The HSV breakdown means the following:

  • H – Hue. Applies to colour (R, red value)
  • S – Saturation. Applies to the intensity of the hue/colour (G, green value)
  • V – Value. Applies to luminance or brightness (B, blue value)
HSV illustration (Cini, 2023)

This is important to understand, as the quality of an HSV correction in a comp depends on our understanding of these elements.

When making hue corrections, we can use the ‘Hue correct’ node to mute (colour channel at ‘0’), suppress (suppressed colour channel at ‘0’), or desaturate (saturation channel at ‘0’) a specific colour. This is useful, for example, to remove a green screen by suppressing the green channel.
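Outside of ‘Hue correct’, a common way to suppress green is an Expression node that limits green to the average of red and blue. This is a minimal sketch (Nuke Python) of that standard textbook formula, not the exact setup from the lecture:

```python
import nuke

# in a default Expression node, expr0/expr1/expr2 drive the r/g/b channels
despill = nuke.nodes.Expression(name='GreenSuppress')
despill['expr1'].setValue('g > (r+b)/2 ? (r+b)/2 : g')  # pull green down to avg(r, b)
```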

With the ‘Keyer’ node set to the ‘luminance key’ operation, we can determine how much luminance we want to remove from the alpha of an image (only the white parts are affected, not the black parts). We could also set the operation to ‘red keyer’, ‘blue keyer’, etc. Then we could ‘shuffle’ to green only, for example, so the colour correction only affects saturation. We can also use this node to add, subtract, and multiply elements:

  • We can remove a colour channel with a ‘Merge (minus)‘ node linked to the colour we want as background and the colour we want to remove.
  • With ‘Add (Math)‘ node we can add colour when linked to a ‘merge (minus)’ node.
  • We could also use ‘Roto‘ to add or remove colour of a specific area.

Texturing can also be done with a ‘Keyer (luminance)’, using the alpha of the texture to adjust the luminance. This would then be blurred, graded, and merged (minus). Moreover, we could also use the keyer to add noise or denoise certain areas.

Some extra nodes and techniques can be used to create some effects that will give more credibility to our image:

  • ‘Volume rays’ node. Used to create ray or motion effects.
  • ‘Edge detect’ node. To select the edges of an image’s alpha and colour correct those specific edges.
  • ‘Ramp’ node. To balance an image with a gradient (used with a ‘Merge (minus)’ node).
  • Adding a new channel in ‘Roto’. We create a new ‘output’ (name it, click ‘rgba’ and ‘ok’), so when adding different features like blur or grade, we can link that change in the node and it will only affect the newly created channel. We could also use an ‘Add (channel)’ node instead, set the channel to ‘Matte’, and choose for it to affect only a certain colour. We could also add a ‘Rotopaint’ to this and add shapes linked to different channels.

We can use keying nodes and techniques for chroma keying, such as:

  • ‘IBK’ (Image Based Keyer). We can subtract or difference with this node. It is considered the best option for getting detail out of fine areas like hair or severely motion-blurred edges:
    • ‘IBK Colour’ node. Rebuilds the background frame by frame, taking the blue or green colour.
    • ‘IBK Gizmo’ node. Can select a specific colour.
  • ‘Chroma key’ node. First, we can unselect ‘use GPU if available’ if our computer starts lagging. This node works better with evenly lit screens and more saturated colours. We could use it for despill, but it is better not to, as what we want is to extract the alpha.
  • ‘Keylight’ node. This is used for colour spill.
  • ‘Primatte’ node. This is a 3D keyer that puts colour into a 3D colour space and creates a 3D geometric shape to select colours from. We first select ‘Smart select background colour’ and pick a colour while holding ctrl+shift; then we change to ‘Clean background noise’ and, holding ctrl+shift, pick the colour parts that are still showing in the alpha (and need to be removed). We could also click on ‘Auto compute’ to create an automatic alpha and then retouch areas back into the alpha with ‘Clean foreground noise’.
  • ‘Ultimatte’ node. This is used for fine detail, and to pull shadows and transparency from the same image.
  • Green/blue despill technique. We create an alpha with a ‘Keylight’ node and ‘Merge (difference)’ to the plate. Then we desaturate the background with a ‘Colour correct’ node and ‘Merge (plus)’ with the ‘Keylight’ node. Then we ‘Shuffle’ and set ‘alpha’ to black. Additionally, we could reduce the light in the background (saturate and grade) with ‘IBK Gizmo/Colour’. Some companies have created their own despill gizmos with all the required presets.
  • ‘Edge Extend’ node. This is used to extend edges so we can correct the darkened bits (smoother, less pixelated edges).

A standard chroma key process would have the following steps (see the sketch after this list):

  1. Denoise plate
  2. White balance
  3. Alpha pipe
    1. Core matte
    2. Base matte
    3. Hair matte
    4. Gamma matte
    5. Edges alpha
  4. Despill pipe
    1. Edges despill (specific parts)
    2. Core despill (overall)
  5. QC
  6. Light wrap
  7. Regrain alpha and background
  8. ‘Addmix’ to link alpha and background
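A minimal sketch (Nuke Python) of the skeleton of this pipeline; the ordering follows the steps above, while the file paths, the despill formula, and some class names (the denoise node’s class can vary between Nuke versions) are assumptions:

```python
import nuke

plate = nuke.nodes.Read(file='greenscreen.####.exr')    # assumed plate path
denoised = nuke.nodes.Denoise(inputs=[plate])           # 1: class name may vary by version
balanced = nuke.nodes.Grade(inputs=[denoised], name='WhiteBalance')  # 2: white balance

core = nuke.nodes.Keylight(inputs=[balanced], name='CoreMatte')      # 3: alpha pipe

despill = nuke.nodes.Expression(inputs=[balanced], name='CoreDespill')  # 4: despill pipe
despill['expr1'].setValue('g > (r+b)/2 ? (r+b)/2 : g')  # illustrative formula

bg = nuke.nodes.Read(file='background.####.exr')        # assumed background path
regrained = nuke.nodes.Grain(inputs=[despill])          # 7: regrain
comp = nuke.nodes.AddMix(inputs=[regrained, bg])        # 8: AddMix fg and bg
```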

Green screen homework

The homework for this week is to improve the garage comp and to do a chroma key of a sequence provided by the professor, so we can put into practice all the techniques learnt in class.

Final green screen replacement
Alpha version

Garage comp WIP

Regarding my garage comp work in progress, the professor also sent us a Nuke comp with examples of how to set up lighting and shadows with projections or geometry. I tried to follow the geometry example, as I only had 3D objects in my comp; however, I had some problems with the shadows, as they were not showing at all in the final result. I could see them in the alpha created with the ‘Shuffle’ node but, since I could not see them in the final output, I guess something is wrong with the ‘Merge’ node or with the concatenation of the comp. I will ask Gonzalo about this in the next lecture. I also added a texture to the right wall so it looks like it was previously painted but the paint is degrading and peeling off the wall. I rotoed the part of the texture I was interested in showing and then projected the texture onto a card in a 3D scene.

References

Cini, A (2023). Color Theory HSV or Hue, Saturation, Value Brightness Illustration Chart Vector (online). Available at: https://www.dreamstime.com/color-theory-hsv-hue-saturation-value-brightness-illustration-chart-color-theory-hsv-hue-saturation-value-brightness-image237125365 [Accessed 19 February 2023]

Categories
Collaborative

Week 6: Third Team Meeting – Environment & Ghosts Basic Models & Unity Compatibility

This week, I had one meeting with the group students and another with the lecturers and collaboration partners. We reviewed the first models of the ghosts and the environment, as well as optimisation for importing into Unity.

I started to model the mother’s ghost this week so the 3D animation girls could carry on with the rigging and animation of the characters as soon as possible. To start off, I took a reference female model from Maya’s ‘Content Browser’. Then I researched possible looks that were more suitable for an apocalyptic ambience. I started to think about what a person would wear on the day they are supposed to be euthanised, and I reached the conclusion that a person who is going to kill their child and then themselves would not care much about appearance, so basic jeans and a jacket would be the most logical option.

Then, using symmetry and the soft brush, I started to shape the features of this character, since the preset model from Maya is pretty standard. I also duplicated the body, dissected it into sections, and adjusted the scale so I had the basic shapes for the model’s clothes. Then, with a smaller soft brush, I added some creases to the clothes so they would look more realistic.

Later on, I researched how to model hair, as I had never modelled it before. I found a hair tutorial using XGen in Maya:

Maya XGen tutorial to create realistic hair (Karimi, 2021)

This tutorial uses the XGen tool in Maya to generate ‘guides’ that lead the direction of each strand of hair. After all the guides have been placed, the tool duplicates them in the form of hair strands. We can add more or less density, width, and even texture. Then we also have to set up painted maps so the tool recognises the direction the hair needs to follow. Lastly, with the help of modifiers, we can refine the final look with tools like ‘clamp’ (to fix hair in place) and ‘frequency’ (to add ‘noise’ and randomness). I followed most of the tricks and techniques from this tutorial and came up with the following hair model:

I was so proud of this hair, but it needed some tidying up as it looked a bit messy. Also, as this was only a practice, I started to think about what kind of hairstyle this character could have. I think straight hair would be the easiest, as I have limited time and shaping the hair takes me quite a while.

Main weekly meeting

In this meeting, the VR team showed us their floor plan of the building’s interior we are going to recreate. They also set out some possible routes and interactions the main character could follow.

From the VFX side, I presented my ghost model of the mother. They liked it; however, they mentioned that it could be nice to add eyes and facial features. Since this was going to be distorted with a particle effect in Unity, I considered it unnecessary; however, they said it could help to have something that gives the ghost some characteristic features. I then came up with the idea of a face mask, taking into consideration that people cannot really breathe properly because of the air pollution, so they could wear a face bandana tied at the back of the head. Also, the VR girls mentioned that the hair could cause issues when importing into Unity, since it is not made of geometry, so the programme would just ignore it.

Regarding the environment, a draft model of the map was presented by Martyna and Jess. It also looked very good, but we needed to discuss dimensions, as it had to be tested in Unity to check the scale of the ceilings and walls and how it would look from the viewer’s perspective. Overall, it seemed to need higher ceilings, the waiting area needed to be rounded with some exterior light coming through the ceiling (windows or broken parts), and the door frames should have a transitional shape rather than a flat one (see sketch below). We could also see some parts of London through the broken parts of the walls so the audience can place themselves.

Model hair optimisation, face mask, and eyes

Following the review received in the last meeting, I decided to optimise the hair by converting it into a mesh so Unity would be able to read it. But before doing so, I wanted to adjust the model’s hairstyle to a low ponytail and also model a hair band.

I optimised the hair following a tutorial I found on YouTube, which explained how to convert XGen curves into a mesh:

Hair optimisation tutorial (My Oh Maya, 2016)

First, I needed to select the hair guides created with XGen and then, in the XGen tool’s ‘Utilities’ tab, select ‘Guides to Curves’ and ‘Create Curves’ to create physical curves that can be used to build a mesh. Then, in ‘Create’, we select ‘Sweep Mesh (dialogue box)’ and click ‘one node for each curve’, so each curve gets a mesh that can be modified individually. Then we adjust the mesh to the shape we want. In my case, I used a flat card shape and then added the texture with a ‘standard surface’, linking the hair texture with an alpha and a normal map. The alpha is linked to opacity so the black parts of the texture card do not show in the render, the normal map is linked to the bump input to give texture, and the diffuse is linked to the base colour.
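A minimal sketch (Maya Python) of that shader hookup; the file paths and node names are illustrative assumptions:

```python
import maya.cmds as cmds

shader = cmds.shadingNode('standardSurface', asShader=True, name='hairCard_mat')

# diffuse map drives the base colour
diffuse = cmds.shadingNode('file', asTexture=True, name='hairDiffuse')
cmds.setAttr(diffuse + '.fileTextureName', 'hair_diffuse.png', type='string')
cmds.connectAttr(diffuse + '.outColor', shader + '.baseColor')

# alpha map drives opacity so the black parts of the card drop out of the render
alpha = cmds.shadingNode('file', asTexture=True, name='hairAlpha')
cmds.setAttr(alpha + '.fileTextureName', 'hair_alpha.png', type='string')
cmds.connectAttr(alpha + '.outColor', shader + '.opacity')

# normal map goes through a bump2d node set to tangent-space normals
bump = cmds.shadingNode('bump2d', asUtility=True, name='hairBump')
normal = cmds.shadingNode('file', asTexture=True, name='hairNormal')
cmds.setAttr(normal + '.fileTextureName', 'hair_normal.png', type='string')
cmds.setAttr(bump + '.bumpInterp', 1)  # 1 = tangent space normals
cmds.connectAttr(normal + '.outAlpha', bump + '.bumpValue')
cmds.connectAttr(bump + '.outNormal', shader + '.normalCamera')
```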

In addition, taking a plane divided in half and forming a triangle shape with it, I started to fold it over the top of the character’s face to form the face mask. I also added some creases, and with a deformed and scaled-down torus I created the back knot of the bandana.

Bandana mesh and texture

For the eyes, I simply took a sphere, duplicated it, and adjusted the UV map so the texture fits the sphere. I added an eye texture that I had saved from one of my previous projects in term 1 (the lip-sync face model). Lastly, I added the rest of the textures (clothes and skin).

References


Charro, J. Fade to White Characters (online). Available at: https://charro.artstation.com/projects/PY5e3 [Accessed 18 February 2023]

Karimi, H. (2021). Creating Realistic Hair with Maya XGen (online). Available at: https://www.youtube.com/watch?v=RkpJ4LGJrf8 [Accessed 18 February 2023]

My Oh Maya (2016). XGen for Game Character Hair (Part 1) (online). Available at: https://youtu.be/1Fs6rle_IbE [Accessed 18 February 2023]

Nam, H (2014). Marlene : Last of Us by Hyoung Nam (online). Available at: https://www.artstation.com/artwork/gw9x [Accessed 18 February 2023]

Categories
VFX Careers Research

VFX Careers Research – Job 2

Modelling Artist

Lately, I have been enjoying 3D organic modelling of humans, animals, and objects. It is a task I could do for hours without getting tired or bored. Creating things from scratch lets me use my most creative side and pushes me to overcome any technical issues that arise, to finally achieve the result I have in my mind. I also find modelling intriguing, as it is not only about creating the mesh of something, but also about figuring out the features that define the model and make it interesting and memorable.

The task of a Modelling Artist starts from the concept artist’s design, which is taken as a reference, or simply from photographs or any type of sketch. The model is then digitally sculpted using modelling programmes such as Maya, ZBrush, or Blender. Later, these models can be textured and animated by Texture Artists and Animators. In small businesses, this position is often blended with that of Texture Artist, which I am interested in too. I think starting out in a small business could give me the opportunity to learn more general skills and experiment with as many areas of interest as I have, to later determine which one I would like to specialise in.

While researching the 3D modelling process, from concept art to final texturing and animation, I found the following video showing the design process of Smaug, the dragon from Peter Jackson’s movie The Hobbit: The Desolation of Smaug.

Smaug design process (The Hobbit The Battle of the Five Armies, 2014)

When creating the mesh of a 3D model, it is important to take into consideration certain technical aspects such as optimisation of the mesh, what it is going to be used for, compatibility, etc. Attention to detail and thoroughness in the process allow Texture Artists and Animators to do their part of the job more easily. I have lately been playing around with hair modelling and the creation of various texture effects, such as clothes creases, and it is definitely a challenge to be as photorealistic as possible while keeping the topology simple. This would be a good area to explore and develop if I want to try my luck as a 3D modeller. I also found a few examples of 3D model optimisation:

Another thing I found inspiring about modelling is how professionals in the industry have developed new techniques to increase the quality of models and make 3D artists’ tasks more manageable. The next example shows how they started implementing curly hair on Disney’s characters. Before this, animated characters mostly had straight hair, as it was more suitable to animate and rendered at the appropriate quality. However, with the advancement of technology and 3D software, modellers found a way to create a tool that focuses on curls. This demonstrates that, as a Modelling Artist, I could be learning new skills on a daily basis and developing my own design processes and ideas.

Example of 3D modelling technique improvement to make curly hair (Insider, 2022)

I found this process interesting and inspiring: despite meaning tight deadlines and making the impossible happen in short timeframes, I consider this a rewarding job that can be enjoyable from beginning to end, every step of the way.

References


Alison & Co (2018). Character Creator 3 and InstaLOD partner to optimize game character design (online). Available at: https://invisioncommunity.co.uk/character-creator-3-and-instalod-partner-to-optimize-game-character-design/ [Accessed 17 February 2023]

Insider (2022). How Disney’s Animated Hair Became So Realistic, From ‘Tangled’ To ‘Encanto’ | Movies Insider (online). Available at: https://www.youtube.com/watch?v=cvTchBdrqdw [Accessed 17 February 2023]

The Hobbit The Battle of the Five Armies (2014). The Hobbit: The Desolation of Smaug – Smaug Featurette (online). Available at: https://www.youtube.com/watch?v=Pvr7DSEHcic [Accessed 17 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 5: MASH Tool in Maya, & Satisfying Loop Animation Moodboard & First Draft Design

This week, we learnt how to use the MASH tool in Maya, and we also started to figure out what our loop animation would look like.

Moodboard

I did some brainstorming ahead of this class to have some idea of what I would like my animation to look like.

  • Loop animation possible themes:
    • Zen garden
    • My day routine loop (train trip)
    • Dough like texture getting reshaped
    • Laser cut
    • Double perspective sculpture rotating
    • Impossible shapes
    • Simple face expression changing because of interaction with other object
    • Solar system

I also checked some oddly satisfying videos on YouTube with some animation examples, and one of them caught my eye at minute 7:04 of the video:

Oddly satisfying animation examples (arbenl1berateme, 2019)

I liked the style and the ‘impossible’ movement effect given by the rotating torus and the zigzagging ball.

However, I was not sure about these standard oddly satisfying loop animations, as they all looked pretty much the same to me, and I felt it could be hard to do something different if I followed this style.

I also found an animation of a rolling ball following a rail on ArtStation (see animation here), which was simple, but its look reminded me of Alphonse Mucha’s Art Nouveau designs:

Later on, as I am also very interested in astronomy, I found these solar system models, which spin thanks to a gear mechanism, really interesting:

My main inspiration was this artwork of ‘The Astronomy Tower’ by Cathleen McAllister, which conveyed, in my opinion, both an Art Nouveau aesthetic and astronomy:

The astronomy tower (McAllister)

Once I had my design idea settled, I continued to research how to approach the animation in Maya.

MASH

In the lecture of this week, the professor introduced us to MASH tool in Maya, which could be used to make our loop animation.

With the MASH tool, after we ‘create MASH network’, we can create procedural effects with nodes such as the following (see the sketch after this list):

  • Distribute. To arrange several copies of an object in formations.
  • Curve. To animate objects following a curve.
  • Influence. To use an object as a guide to influence the transforms of the network in MASH.
  • Signal. It adds noise to our animation so it varies like a signal wave.
  • Amongst other features…
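A minimal sketch (Maya Python) of building a MASH network procedurally, using the MASH.api module that ships with the plugin; the object and network names are illustrative:

```python
import maya.cmds as cmds
import MASH.api as mapi

cmds.loadPlugin('MASH', quiet=True)

star = cmds.polyCone(name='star_geo')[0]   # stand-in object to distribute
cmds.select(star)

network = mapi.Network()
network.createNetwork(name='starRing')     # creates the network with a Distribute node

signal = network.addNode('MASH_Signal')    # adds noise so the motion varies like a wave
```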

I did not have time to fully explore all the MASH features, but the few I discovered were really interesting and fun to play with. I tried to implement MASH in my design, but it seemed way easier to just key frame every movement by hand (and I would also achieve a better result).

First draft design

I started by taking a picture of the solar system as a reference for the position, shape, and distance of each planet and satellite relative to the Sun. I intended to make this solar system recreation as accurate as possible but, as it would not look too appealing to the viewer (the planets and satellites would look too small and the Sun too big), I tweaked them a little so everything would fit more nicely in the frame. I made the planets slightly bigger than they are in relation to the Sun, and only added the most important satellites of each planet (Saturn and Jupiter have far too many satellites to fit them all in this model).

Once I had a definitive position for my solar system, I started to animate it. This animation took a bit longer than I expected, as I had to calculate how many times each planet would rotate around the Sun in 300 frames (the length of one full loop of the animation) so the loop point cannot be noticed. As I also wanted to make it as accurate as possible, I researched online how long each planet takes to orbit the Sun. Since Neptune is the slowest of all, I took it as the reference planet for looping the animation, so it rotates 360° around the Sun over the 300 frames. The rest of the planets rotate more times, with Mercury being the quickest. I set the rotation to start slowly, speed up towards the midpoint of the animation, and slow down towards the end until they all stop in their initial positions. Obviously, the rotation is not truly accurate; if it were, Mercury’s rotation would be invisible to the eye in relation to Neptune’s (see the quick calculation below). Then I did the same with the satellites of each planet, though these animations are more approximate than the planets’, as it will not be as noticeable. I also parented the satellites to their respective planets so they rotate around the planets while also following each planet’s rotation around the Sun. Then I gave some rotation to the Sun but, as I want to add glow to it, I do not think it will be visible. Lastly, I added the gears attached to the planets and parented them to their respective planets so they share the same rotation. I am not too convinced by the gears’ shape, so more than likely I will change their design.
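As a quick illustration of why the real orbital ratios had to be tweaked, this small Python check scales each planet’s turn count from Neptune’s single turn per 300-frame loop (the orbital periods in years are approximate):

```python
# approximate orbital periods in Earth years
periods = {'Mercury': 0.24, 'Venus': 0.62, 'Earth': 1.0, 'Mars': 1.88,
           'Jupiter': 11.86, 'Saturn': 29.46, 'Uranus': 84.01, 'Neptune': 164.8}

frames_per_loop = 300
for planet, years in periods.items():
    turns = periods['Neptune'] / years          # full orbits per 300-frame loop
    deg_per_frame = turns * 360 / frames_per_loop
    print(f'{planet}: {turns:.1f} turns, {deg_per_frame:.1f} deg/frame')

# Mercury comes out at ~687 turns (~824 deg/frame), which would read as flicker,
# so the counts were tweaked by eye instead. Note each planet's turn count must
# still be a whole number of rotations for the loop to be seamless.
```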

I am happy with the planets’ look and animation; however, I am thinking of changing the model of the gears, as they look too ‘spiky’ to me and not very realistic.

References


arbenl1berateme (2019). Oddly Satisfying 3D Animations [Compilation 5] – arbenl1berateme (online). Available at: https://www.youtube.com/watch?v=iLRsCtd5P9s [Accessed 12 February 2023]

Cogito (2015). 1900 Alphonse Mucha “Dessin de Montre” Jewelry Design Illustration for Georges Fouquet (online). Available at: https://www.collectorsweekly.com/stories/150738-1900-alphonse-mucha-dessin-de-montre-j [Accessed 12 February 2023]

McAllister, C. Cathleen McAllister (online). Available at: http://www.cathleenconcepts.com [Accessed 12 February 2023]

Müller, B (2020). Impossible Oddly Satisfying 3D Animation (online). Available at: https://www.artstation.com/artwork/Ye43ed [Accessed 12 February 2023]

Staines & Son. The Diary Of An Orrery Maker (online). Available at: https://www.orrerydesign.com [Accessed 12 February 2023]

Willard, Jr., A. Willard Orrery. National Museum of American History (online). Available at: https://www.si.edu/object/willard-orrery:nmah_1183736 [Accessed 12 February 2023]

Categories
Advanced & Experimental Advanced Nuke

Week 5: 3D Compositing Process in Nuke & Garage Homework WIP

This week, we learnt the 3D compositing process: putting together plates and CG, doing clean-ups, regraining, and doing a beauty rebuild with a multipass comp.

The 3D compositing general process looks like the following:

  • Main plate clean-up and roto work. After finishing the clean-up, it is recommended to render the cleaned plate so we can use this pre-rendered version for the rest of the comp. This is done so Nuke has fewer nodes to calculate each time and the preview of the work done goes quicker.
  • CG compositing. In this part, we can move on with the beauty rebuild, adjusting the AOVs or passes with subtle grades and/or colour corrections:
    • Basic grade match. With a ‘grade’ node, we first white balance the CG we are going to integrate into the plate, measuring the ‘whitepoint’ (the whitest part of the CG) and the ‘blackpoint’ (the darkest part of the CG) while holding ctrl+shift+alt. Subsequently, we go to our background plate and measure the ‘gain’ (for the whites) and the ‘lift’ (for the darks) while holding ctrl+shift. This balances the whites and darks of both the plate and the CG and integrates them together.
    • Multipass comp. In this technique, we first need to ‘unpremult (all)’ our CG so we can start splitting the AOVs or passes. This split is made using a ‘shuffle’ node set to the desired pass we want to correct. Before editing the passes, we need to make sure to structure the nodes from the CG plate to a ‘copy’ node with all passes merged together, and double check that the CG plate looks exactly the same from the initial point (original plate) to the ‘copy’ node; sometimes it may look different, as some of the passes could have been exported wrongly. Once we have split our passes, we can ‘grade’ them individually: we ‘merge (plus)’ the light passes and ‘merge (multiply)’ the shadows. We can also select an ID pass to create a colour map with a ‘Keylight’ node; with it we can select a specific area of the model to adjust, as its features are separated into different saturated colour mattes. This way we could then re-texture a part of the model using an ‘ST-map’ node connected to the texture source. We can then re-light with the ‘position pass’ and ‘normal pass’, followed by a ‘grade’ of the master CG plate. We finish our beauty rebuild with a ‘copy (alpha-alpha)’ to copy the original alpha onto the one created, and then we ‘premult’ (see the sketch after this list).
  • Motion blur. Motion blur adds realism and dynamism to the movement of the added CG, as in 3D everything looks sharp and in focus, which is not realistic. We can add motion blur following two methods:
    • Method 1: add a ‘vector blur (rgba)’ node, link it to the ‘camera’, and adjust the ‘motion amount’ in the ‘vector blur’ node as desired.
    • Method 2: a ‘remove (keep)’ node linked to a ‘motion blur 3d’ node, adjusting the latter’s ‘motion amount’ as desired.
  • Chromatic aberration and defocus. We can add an ‘aberration’ node to match the original camera aberration of the live-action plate, making the scene more credible. Also, with a ‘defocus’ node we can add depth to the scene, differentiating between the sharp image and the out-of-focus image (depth of field). After adjusting these, we need to add a ‘remove (keep)’ node connected to an ‘ST map’ node to put the original distortion back into the scene.
  • Regrain. We can also add some grain to the scene with a ‘grain’ node. Then, with a ‘key mix (all)’ node linked to the previous changes and the ‘grain’, we can mix channels and add a mask to the previous changes made in the comp.
  • Effect card. We can add effects like smoke with a ‘card’ node. We need to connect a ‘shuffle (rgba to rgba with R to alpha)’ node to the ‘card’ and ‘grade’ it. Then we ‘copy (alpha to alpha)’ and ‘premult’ to create the alpha of the effect, and then we ‘defocus’. This is projected on a ‘card’ (connected to ‘scene’, ‘scanline render’, and ‘camera’). Finally, we add the ‘ST map’ to unfreeze the frame and ‘multiply’ to show the created alpha.
  • Lightwrap. We use this to add light to the edges, which can be adjusted with ‘diffuse’ and ‘intensity’. Then we ‘merge (plus)’, as this is a light feature.
  • QC. Using a ‘merge (difference)’ node, we can see and assess the changes made and check whether there are any errors. A ‘colour space’ node with the ‘output’ set to ‘HSV’ can be used to check the quality of the colours’ hue (R), saturation (G), and luminance (B).
  • Final colour correction.
  • Export. The preferred format to export our comp is EXR. Some companies will also want a ‘JPEG’ photo, ‘Apple ProRes’, or even ‘Avid DNxHD’, but that depends on each company’s pipeline.
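A minimal sketch (Nuke Python) of the grade match and multipass rebuild steps above; the pass names (‘diffuse’, ‘specular’) and the file path are assumptions that depend on how the CG was rendered:

```python
import nuke

cg = nuke.nodes.Read(file='machine_cg.####.exr')        # assumed CG render path
unpremult = nuke.nodes.Unpremult(inputs=[cg], channels='all')

# grade match: black/white points sampled on the CG, lift/gain on the plate
match = nuke.nodes.Grade(inputs=[unpremult], name='GradeMatch')

# split two passes and rebuild: 'plus' for light passes ('multiply' for shadows)
diff = nuke.nodes.Shuffle(inputs=[match], **{'in': 'diffuse'})
spec = nuke.nodes.Shuffle(inputs=[match], **{'in': 'specular'})
rebuild = nuke.nodes.Merge2(inputs=[nuke.nodes.Grade(inputs=[diff]),
                                    nuke.nodes.Grade(inputs=[spec])],
                            operation='plus')

# copy the original alpha back and premultiply to finish the beauty rebuild
copy_a = nuke.nodes.Copy(inputs=[rebuild, cg], from0='rgba.alpha', to0='rgba.alpha')
premult = nuke.nodes.Premult(inputs=[copy_a])
```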

The homework for this week was to start putting together the elements that will form part of our garage comp, and also to include the machine provided by the professor, following all the steps we learnt today.

Following the reference pictures we got with the brief, I started to research 3D objects I could include, such as tools, tyres, a table, etc.

I also decided to re-watch this week’s recording of the lecture to make sure I followed the compositing process step by step. This way, I started to understand the functionality of each node and technique and to become more confident in creating a whole comp by myself without having to look at references in other comps. The first thing I added was the machine in the back room. I did a beauty rebuild with the separation of the passes and added a smoke effect with a 3D card projection. I feel this part went really well, as I did not have any issues along the process and the final look is pretty realistic.

Garage comp WIP with machine

After my back machine was fully set, I continued adding the 3D geometry and its textures to the comp. One problem I had with the objects is that they were really heavy and jumpy when following the movement of the scene, so they were hard to work with.

My work in progress comp looks like the following:

Garage comp WIP with 3D objects