Categories
Advanced & Experimental Advanced Nuke

Week 6: HSV Correction Process & Chroma Key in Nuke, & Garage Homework WIP

In this lecture, we learnt how to correct Hue, Saturation, and Value (Luminance), and how to do chroma keying in Nuke. We also reviewed our garage comp WIP.

The HSV breakdown is as follows:

  • H – Hue. The colour itself (read from the R, red channel in HSV colour space)
  • S – Saturation. The intensity of the hue/colour (the G, green channel)
  • V – Value. The luminance or brightness (the B, blue channel)
HSV illustration (Cini, 2023)
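As a quick sanity check of this H/S/V mapping, Python's standard ‘colorsys’ module converts between RGB and HSV; a minimal sketch (the sample colour is arbitrary):

import colorsys

# colorsys works on normalised 0-1 values, like Nuke.
r, g, b = 0.2, 0.8, 0.4  # a fairly saturated green

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue={h:.3f} saturation={s:.3f} value={v:.3f}")
# hue ~0.389 (the green region), saturation 0.75, value 0.8 (the max channel)

# Halving the saturation pulls the colour toward grey without
# changing its hue or brightness.
print(colorsys.hsv_to_rgb(h, s * 0.5, v))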

This is important to understand, as the quality of an HSV correction in a comp depends on our understanding of these elements.

When making hue corrections, we can use the ‘HueCorrect’ node to mute (set the colour channel to ‘0’), suppress (set the suppressed colour channel to ‘0’), or desaturate (set the saturation channel to ‘0’) a specific colour. This is useful, for example, for removing a green screen by suppressing the green colour channel.

With the ‘Keyer’ node set to the ‘luminance key’ operation, we can determine how much luminance we want to remove from the alpha of an image (only the white parts are affected, not the black parts). We could also set the operation to ‘red keyer’, ‘blue keyer’, etc. Then we could ‘Shuffle’ to green only, for example, so the colour correction only affects saturation. We can also use this node to add, subtract, and multiply elements (a minimal sketch of this setup follows the list below):

  • We can remove a colour channel with a ‘Merge (minus)’ node linked to the colour we want as background and the colour we want to remove.
  • With an ‘Add (Math)’ node we can add colour back when linked to a ‘Merge (minus)’ node.
  • We could also use a ‘Roto’ to add or remove colour in a specific area.
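A minimal sketch of this setup through Nuke's Python API, assuming stock node and knob names (the Read path is a placeholder):

import nuke

plate = nuke.nodes.Read(file='/path/to/plate.####.exr')  # placeholder path

# Keyer set to a luminance key: the white parts of the image feed
# the alpha, while the black parts are left untouched.
keyer = nuke.nodes.Keyer(operation='luminance key')
keyer.setInput(0, plate)

# Subtract one branch from the other, e.g. to remove a colour channel.
minus = nuke.nodes.Merge2(operation='minus')
minus.setInput(0, plate)   # B input: the colour we keep as background
minus.setInput(1, keyer)   # A input: the part we want to remove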

Texturing can also be done with a ‘Keyer (luminance)’, using the alpha of the texture to adjust the luminance. The result would then be blurred, graded, and merged (minus). Moreover, we could also use the keyer to add noise or denoise certain areas.

Some extra nodes and techniques can be used to create effects that give more credibility to our image:

  • ‘VolumeRays’ node. Used to create ray or motion effects.
  • ‘EdgeDetect’ node. To select the edges of an image’s alpha and colour correct those specific edges.
  • ‘Ramp’ node. To balance the image with a gradient (used with a ‘Merge (minus)’ node).
  • Adding a new channel in ‘Roto’. We create a new ‘output’ (name it, click ‘rgba’ and ‘ok’), so that when adding features like blur or grade, we can link the change to that node and it will only affect the newly created channel. We could also use an ‘Add (channel)’ node instead, set the channel to ‘Matte’, and choose for it to affect only a certain colour. We could also add a ‘RotoPaint’ to this and add shapes linked to different channels.

We can use keying nodes and techniques for chroma keying, such as:

  • ‘IBK’ (Image Based Keyer). We can subtract or difference with this node. It is considered the best option for getting detail out of small areas like hair or severely motion-blurred edges:
    • ‘IBKColour’ node. Rebuilds the background frame by frame, taking the blue or green colour.
    • ‘IBKGizmo’ node. Can select a specific colour.
  • ‘ChromaKeyer’ node. First, we can untick ‘use GPU if available’ if our computer starts lagging. This node works better with evenly lit screens and more saturated colours. We could use it for despill, but it is better not to, as what we want from it is to extract the alpha.
  • ‘Keylight’ node. This is used to remove colour spill.
  • ‘Primatte’ node. This is a 3D keyer that places colours into a 3D colour space and creates a 3D geometric shape to select colours from. We first select ‘Smart select background colour’ and pick a colour while holding ctrl+shift; then we change to ‘Clean background noise’ and, holding ctrl+shift, pick the colour parts that are still showing in the alpha (and need to be removed). We could also click ‘Auto compute’ to create an automatic alpha and then retouch areas back into the alpha with ‘Clean foreground noise’.
  • ‘Ultimatte’ node. This is used for fine detail, and to pull shadows and transparency from the same image.
  • Green/blue despill technique. We create an alpha with a ‘Keylight’ node and ‘Merge (difference)’ it with the plate. Then we desaturate the background with a ‘ColorCorrect’ node and ‘Merge (plus)’ it with the ‘Keylight’ node. Then we ‘Shuffle’ and set the ‘alpha’ to black. Additionally, we could reduce the light in the background (saturate and grade) with ‘IBKGizmo/Colour’. Some companies have created their own despill gizmos with all the required presets (a small sketch of the core spill limiter follows this list).
  • ‘EdgeExtend’ node. This is used to extend edges so we can correct the darkened bits (smoother, less pixelated edges).
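The core of most despill gizmos can be written as a one-line channel expression. A hedged sketch using Nuke's Expression node, assuming its default channel rows (r, g, b, a): the limiter clamps green to the average of red and blue wherever green exceeds it.

import nuke

despill = nuke.nodes.Expression()
despill['expr1'].setValue('g > (r+b)/2 ? (r+b)/2 : g')  # the green row

# The suppressed spill can then be graded and merged back,
# as in the green/blue despill technique described above.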

A standard chroma key process would have the following steps:

  1. Denoise plate
  2. White balance
  3. Alpha pipe (a sketch of the matte-combining idea follows this list)
    1. Core matte
    2. Base matte
    3. Hair matte
    4. Gamma matte
    5. Edges alpha
  4. Despill pipe
    1. Edges despill (specific parts)
    2. Core despill (overall)
  5. QC
  6. Light wrap
  7. Regrain alpha and background
  8. ‘AddMix’ to combine alpha and background
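As a note to myself on how the alpha pipe's mattes chain together, here is a hedged Nuke Python skeleton of step 3: a choked core matte unioned with a softer edge matte. The Keyer settings are placeholders; a production setup would use IBK or Keylight branches instead.

import nuke

plate = nuke.nodes.Read(file='/path/to/greenscreen.####.exr')  # placeholder path

# Core matte: a hard key, eroded inward so it only covers solid foreground.
core = nuke.nodes.Keyer(operation='luminance key')
core.setInput(0, plate)
core_choked = nuke.nodes.FilterErode(size=2)
core_choked.setInput(0, core)

# Edge matte: a softer key that keeps hair and motion-blurred edges.
edge = nuke.nodes.Keyer(operation='luminance key')
edge.setInput(0, plate)

# Union of the two mattes: 'max' keeps the solid core plus the soft edges.
combined = nuke.nodes.Merge2(operation='max')
combined.setInput(0, edge)         # B input
combined.setInput(1, core_choked)  # A input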

Green screen homework

The homework for this week is to improve the garage comp and to chroma key a sequence provided by the professor, so we can put into practice all the techniques learnt in class.

Final green screen replacement
Alpha version

Garage comp WIP

Regarding my garage comp work in progress, the professor also sent us a Nuke comp with examples of how to set up lighting and shadows with projections or geometry. I tried to follow the geometry example, as I only had 3D objects in my comp; however, I had some problems with the shadows, as they were not showing at all in the final result. I could see them in the alpha created with the ‘Shuffle’ node, but since I could not see them in the final output, I guess something is wrong with the ‘Merge’ node or with the concatenation of the comp. I will ask Gonzalo about this in the next lecture. I also added a texture to the right wall so it looks as if it had been painted previously and the paint is now degrading and peeling off the wall. I rotoed the part of the texture I was interested in showing and then projected the texture onto a card in a 3D scene.

References

Cini, A. (2023). Color Theory HSV or Hue, Saturation, Value Brightness Illustration Chart Vector (online). Available at: https://www.dreamstime.com/color-theory-hsv-hue-saturation-value-brightness-illustration-chart-color-theory-hsv-hue-saturation-value-brightness-image237125365 [Accessed 19 February 2023]

Categories
Collaborative

Week 6: Third Team Meeting – Environment & Ghosts Basic Models & Unity Compatibility

This week, I had one meeting with the student group and another with the lecturers and collaboration partners. We reviewed the first models of the ghosts and the environment, as well as optimisation for importing into Unity.

I started to model the mother's ghost this week so that the 3D animation girls could carry on with the rigging and animation of the characters as soon as possible. To start off, I took a reference female model from Maya's ‘Content Browser’. Then I researched possible looks that would suit an apocalyptic ambience. I started to think about what a person would wear on the day they are supposed to be euthanised, and I reached the conclusion that a person who is going to kill their child and then themselves would not care much about appearance, so basic jeans and a jacket seemed the most logical option.

Then, using symmetry and the soft brush, I started to shape the features of this character, since the preset model from Maya is pretty standard. I also duplicated the body, dissected it into sections, and adjusted the scale so I had the basic shapes for the model's clothes. Then, with a smaller soft brush, I added some creases to the clothes so they would look more realistic.

Later on, I researched how to model hair, as I had never modelled it before. I found a hair tutorial using XGen in Maya:

Maya XGen tutorial to create realistic hair (Karimi, 2021)

This tutorial uses the XGen tool in Maya to generate ‘guides’ that lead the direction of each strand of hair. After all the guides have been placed, the tool duplicates them in the form of hair strands. We can adjust the density, width, and even texture. We also have to set up painted maps so the tool recognises the direction the hair needs to follow. Lastly, with the help of modifiers, we can refine the final look with tools like ‘clamp’ (to fix hair in place) and ‘frequency’ (to add ‘noise’ and randomness). I followed most of the tricks and techniques from this tutorial and came up with the following hair model:

I was really proud of this hair, but it needed some tidying up, as it looked a bit messy. Also, as this was only a practice run, I started to think about what kind of hairstyle this character could have. I think straight hair would be the easiest, as I have limited time and shaping the hair takes me quite a while.

Main weekly meeting

In this meeting, the VR team showed us their floor plan of the building's interior we are going to recreate. They also set out some possible routes and interactions the main character could follow.

From the VFX side, I presented my model of the mother's ghost. They liked it; however, they mentioned that it could be nice to add eyes and facial features. Since the model was going to be distorted with a particle effect in Unity, I considered this unnecessary; however, they said it could help to have something that gives the ghost some characteristic features. I then came up with the idea of a face mask, taking into consideration that people cannot really breathe properly because of the air pollution, so they could wear a face bandana tied at the back of the head. The VR girls also mentioned that the hair could cause some issues when imported into Unity, since it is not made of geometry, so the programme would just ignore it.

Regarding the environment, a draft model of the map was presented by Martyna and Jess. It also looked very good, but we needed to discuss dimensions, as it had to be tested in Unity to check the scale of the ceilings and walls and how everything would look from the viewer's perspective. Overall, it seemed to need higher ceilings, the waiting area needed to be rounded, with some exterior light coming through the ceiling (windows, or broken parts), and the door frames should have a transitional shape rather than a flat one (see sketch below). Some parts of London could also be visible through the broken walls, so the audience can situate themselves.

Model hair optimisation, face mask, and eyes

Following the feedback received in the last meeting, I decided to optimise the hair by converting it into a mesh so Unity would be able to read it. But before doing so, I wanted to restyle the model's hair into a low ponytail and also model a hair band.

I optimised the hair following a tutorial I found on YouTube, which explains how to convert XGen curves into a mesh:

Hair optimisation tutorial (My Oh Maya, 2016)

First, I needed to select the hair guides created with XGen, and then, in the XGen tool's ‘utilities’ tab, select ‘guides to curves’ and ‘create curves’ to create physical curves that can be used to build a mesh. Then, under ‘create’, we select ‘sweep mesh (dialogue box)’ and click ‘one node for each curve’, so each curve has a mesh that can be modified individually. We then adjust the mesh to the shape we want. In my case, I used a flat card shape and then added the texture with a ‘standard surface’ material, linking the hair texture with an alpha and a normal map. The alpha is linked to the opacity so the black parts of the texture card do not show in the render, the normal map is linked to the bump to give texture, and the diffuse is linked to the base colour.
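Out of curiosity, I also looked at how this step could be scripted. A hedged Maya Python sketch, assuming the ‘sweepMeshFromCurve’ command that ships with Maya 2022+ (the shader and texture names are my own placeholders):

import maya.cmds as cmds

# One sweep mesh per selected curve, which mirrors the
# 'one node for each curve' option in the dialogue box.
for crv in cmds.ls(selection=True):
    cmds.sweepMeshFromCurve(crv)

# Hair-card shader: diffuse to base colour, alpha to opacity.
shader  = cmds.shadingNode('standardSurface', asShader=True, name='hairCard_mat')
diffuse = cmds.shadingNode('file', asTexture=True, name='hairDiffuse')
alpha   = cmds.shadingNode('file', asTexture=True, name='hairAlpha')
cmds.connectAttr(diffuse + '.outColor', shader + '.baseColor', force=True)
cmds.connectAttr(alpha + '.outColor', shader + '.opacity', force=True)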

In addition, taking a plane divided in half and folded into a triangular shape, I started to wrap it over the top of the character's face to form the face mask. I also added some creases, and with a deformed, scaled-down torus I created the back knot of the bandana.

Bandana mesh and texture

For the eyes, I simply took a sphere, duplicated it, and adjusted the UV map so the texture fits the sphere. I added an eye texture that I had saved from one of my previous projects in term 1 (the lip-sync face model). Lastly, I added the rest of the textures (clothes and skin).

References


Charro, J. Fade to White Characters (online). Available at: https://charro.artstation.com/projects/PY5e3 [Accessed 18 February 2023]

Karimi, H. (2021). Creating Realistic Hair with Maya XGen (online). Available at: https://www.youtube.com/watch?v=RkpJ4LGJrf8 [Accessed 18 February 2023]

My Oh Maya (2016). XGen for Game Character Hair (Part 1) (online). Available at: https://youtu.be/1Fs6rle_IbE [Accessed 18 February 2023]

Nam, H (2014). Marlene : Last of Us by Hyoung Nam (online). Available at: https://www.artstation.com/artwork/gw9x [Accessed 18 February 2023]

Categories
VFX Careers Research

VFX Careers Research – Job 2

Modelling Artist

Lately, I have been enjoying 3D organic modelling of humans, animals, and objects. It is a task I could do for hours without getting tired or bored. Creating things from scratch allows me to use my most creative side and pushes me to overcome any technical issues that arise, to finally achieve the result I have in my mind. I also find modelling intriguing, as it is not only about creating the mesh of something, but about figuring out the features that define the model and make it interesting and memorable.

The task of a Modelling Artist starts with the concept artists' designs, which are taken as reference, or simply with photographs or any type of sketch. The model is then digitally sculpted using modelling programmes such as Maya, ZBrush, or Blender. Later, these models can be textured and animated by Texture Artists and Animators. In small businesses, this position is often blended with that of Texture Artist, which I am interested in too. I consider that starting out in a small business could give me the opportunity to learn more general skills, experiment with as many of my areas of interest as possible, and later determine which one I would like to specialise in.

While researching the 3D modelling process, from concept art to final texturing and animation, I found the following video showing the design process of Smaug, the dragon from Peter Jackson's movie The Hobbit: The Desolation of Smaug.

Smaug design process (The Hobbit The Battle of the Five Armies, 2014)

When creating the mesh of a 3D model, it is important to take into consideration certain technical aspects, such as optimisation of the mesh, what it is going to be used for, compatibility, etc. Attention to detail and thoroughness in the process allow Texture Artists and Animators to do their part of the job more easily. I have lately been playing around with hair modelling and the creation of various texture effects, such as clothing creases, and it is definitely a challenge to be as photorealistic as possible while keeping the topology simple. This would be a good area to explore and develop if I want to try my luck as a 3D modeller. I also found a few examples of 3D model optimisation:

Another inspiration I found about modelling is how professionals in the industry have managed to develop new techniques that increase the quality of the models and make 3D artists' tasks more manageable. The next example shows how curly hair started to be implemented in Disney's characters. Before this, animated characters mostly had straight hair, as it was more suitable to animate and easier to make look the appropriate quality. However, with the advancement of technology and 3D software, 3D modellers found a way to create a tool that focuses on curls. This demonstrates that, as a Modelling Artist, I could be learning new skills on a daily basis and developing my own design processes and ideas.

Example of 3D modelling technique improvement to make curly hair (Insider, 2022)

I found this process interesting and inspiring: despite meaning working towards tight deadlines and making the impossible happen in short timeframes, I consider this a rewarding job that can be enjoyable from beginning to end, every step of the way.

References


Alison & Co (2018). Character Creator 3 and InstaLOD partner to optimize game character design (online). Available at: https://invisioncommunity.co.uk/character-creator-3-and-instalod-partner-to-optimize-game-character-design/ [Accessed 17 February 2023]

Insider (2022). How Disney’s Animated Hair Became So Realistic, From ‘Tangled’ To ‘Encanto’ | Movies Insider (online). Available at: https://www.youtube.com/watch?v=cvTchBdrqdw [Accessed 17 February 2023]

The Hobbit The Battle of the Five Armies (2014). The Hobbit: The Desolation of Smaug – Smaug Featurette (online). Available at: https://www.youtube.com/watch?v=Pvr7DSEHcic [Accessed 17 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 5: MASH Tool in Maya, & Satisfying Loop Animation Moodboard & First Draft Design

This week, we learnt how to use the MASH tool in Maya, and we also started to figure out what our loop animation would look like.

Moodboard

I did some brainstorming ahead of this class to get an idea of what I would like my animation to look like.

  • Loop animation possible themes:
    • Zen garden
    • My daily routine loop (train trip)
    • Dough-like texture being reshaped
    • Laser cut
    • Double-perspective sculpture rotating
    • Impossible shapes
    • Simple face expression changing through interaction with another object
    • Solar system

I also checked some oddly satisfying videos on YouTube with animation examples, and one of them caught my eye at minute 7:04 of the video:

Oddly satisfying animation examples (arbenl1berateme, 2019)

I liked the style and the ‘impossible’ movement effect created by the rotating torus and the zigzagging ball.

However, I was not sure about these standard oddly satisfying loop animations, as they all looked pretty much the same to me, and I felt it could be hard to do something different following this style.

I also found an animation of a rolling ball following a rail on ArtStation (see animation here), which was simple, but its look reminded me of Alphonse Mucha's Art Nouveau designs:

Later on, as I am also very interested in astronomy, I found these solar system models interesting; they spin thanks to a gear mechanism attached to them:

My main inspiration was this artwork, ‘The Astronomy Tower’, by Cathleen McAllister, which to my mind conveys both an Art Nouveau aesthetic and astronomy:

The Astronomy Tower (McAllister)

Once I had my design idea settled, I continued to research how to approach the animation in Maya.

MASH

In this week's lecture, the professor introduced us to the MASH tool in Maya, which we could use to make our loop animation.

With the MASH tool, after we ‘create MASH network’, we can create procedural effects with nodes such as the following (a scripted version follows the list below):

  • Distribute. Arranges several copies of an object in formations.
  • Curve. Animates objects along a curve.
  • Influence. Uses an object as a guide to influence the transforms of the MASH network.
  • Signal. Adds noise to the animation so it varies like a signal wave.
  • …amongst other features.
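The same network can also be built in Python. A hedged sketch, assuming the MASH.api module that ships with Maya and the stock MASH node type names:

import maya.cmds as cmds
import MASH.api as mapi

# A MASH network needs a source object selected.
cube = cmds.polyCube()[0]
cmds.select(cube)

network = mapi.Network()
network.createNetwork(name='loopNetwork')  # creates the default Distribute/Repro pair

signal = network.addNode('MASH_Signal')  # wave-like noise on the transforms
curve  = network.addNode('MASH_Curve')   # animate the copies along a curve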

I did not have time to fully explore all of MASH's features, but the few I discovered were really interesting and fun to play with. I tried to implement MASH in my design, but it seemed much easier to just keyframe every movement by hand (and I would also achieve a better result).

First draft design

I started by taking a picture of the solar system as reference, to see the position, shape, and distance of each planet and satellite relative to the Sun. I intended to make this solar system recreation as accurate as possible, but as that would not look too appealing to the viewer (the planets and satellites would look too small and the Sun too big), I tweaked things a little so everything would fit more nicely in the frame. I made the planets slightly bigger than they are in relation to the Sun, and only added the most important satellites of each planet (Saturn and Jupiter have far too many satellites to fit them all in this model).

Once I had a definitive position for my solar system, I started to animate it. This animation took longer than I expected, as I had to calculate how many times each planet would rotate around the Sun in 300 frames (the length of one full loop of the animation) so that the loop cannot be noticed. As I wanted to make it as accurate as possible, I researched how long each planet takes to orbit the Sun. Since Neptune is the slowest of all, I took it as the reference planet for the loop, so it rotates 360° around the Sun over the 300 frames of animation. The rest of the planets rotate more times, with Mercury being the quickest (a small sketch of this arithmetic follows below). I set the rotation to start slowly, speed up towards the midpoint of the animation, and slow down towards the end until all the planets stop in their initial positions. Obviously, the rotation is not truly accurate; if it were, Mercury's rotation would be invisible to the eye compared to Neptune's. I then did the same with the satellites of each planet, although their animations are more approximate, as the difference will not be as noticeable. I also parented the satellites to their respective planets so they rotate around the planets while following each planet's rotation around the Sun. Then I gave some rotation to the Sun, although since I want to add a glow to it, I do not think it will be visible. Lastly, I added the gears attached to the planets and parented them to their respective planets so they share the same rotation. I am not too convinced by the shape of the gears, so I will more than likely change their design.
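A minimal sketch of that loop arithmetic in Maya Python (the group names are my own placeholders; the revolution counts are capped artistically, since the true ratios — around 690 revolutions for Mercury against Neptune's one — would just read as flicker):

import maya.cmds as cmds

LOOP = 300  # frames in one full loop

# Approximate orbital periods in Earth years.
periods = {'mercury_grp': 0.24, 'venus_grp': 0.62, 'earth_grp': 1.0,
           'mars_grp': 1.88, 'jupiter_grp': 11.9, 'saturn_grp': 29.5,
           'uranus_grp': 84.0, 'neptune_grp': 165.0}

for grp, years in periods.items():
    # Whole revolutions per loop, scaled from Neptune's single turn.
    revs = min(12, max(1, round(periods['neptune_grp'] / years)))
    cmds.setKeyframe(grp, attribute='rotateY', time=1, value=0)
    cmds.setKeyframe(grp, attribute='rotateY', time=LOOP, value=360 * revs)
    # Flat tangents: start slow, peak mid-loop, ease to a stop.
    cmds.keyTangent(grp, attribute='rotateY', time=(1, LOOP),
                    inTangentType='flat', outTangentType='flat')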

I am happy with the look and animation of the planets; however, I am thinking of changing the model of the gears, as they look too ‘spiky’ to me and not very realistic.

References


arbenl1berateme (2019). Oddly Satisfying 3D Animations [Compilation 5] – arbenl1berateme (online). Available at: https://www.youtube.com/watch?v=iLRsCtd5P9s [Accessed 12 February 2023]

Cogito (2015). 1900 Alphonse Mucha “Dessin de Montre” Jewelry Design Illustration for Georges Fouquet (online). Available at: https://www.collectorsweekly.com/stories/150738-1900-alphonse-mucha-dessin-de-montre-j [Accessed 12 February 2023]

McAllister, C. Cathleen McAllister (online). Available at: http://www.cathleenconcepts.com [Accessed 12 February 2023]

Müller, B. (2020). Impossible Oddly Satisfying 3D Animation (online). Available at: https://www.artstation.com/artwork/Ye43ed [Accessed 12 February 2023]

Staines & Son. The Diary Of An Orrery Maker (online). Available at: https://www.orrerydesign.com [Accessed 12 February 2023]

Willard, Jr., A. Willard Orrery. National Museum of American History (online). Available at: https://www.si.edu/object/willard-orrery:nmah_1183736 [Accessed 12 February 2023]

Categories
Advanced & Experimental Advanced Nuke

Week 5: 3D Compositing Process in Nuke & Garage Homework WIP

This week, we learnt the 3D compositing process: putting together plates and CG, doing clean-ups, regraining, and doing a beauty rebuild with a multipass comp.

The general 3D compositing process looks like the following:

  • Main plate clean-up and roto work. After finishing the clean-up, it is recommended to render the cleaned plate so we can use this pre-rendered version for the rest of the comp. This is done so Nuke has fewer nodes to calculate each time and previews of the work render more quickly.
  • CG compositing. In this part, we can move on to the beauty rebuild, adjusting the AOVs or passes with subtle grades and/or colour corrections:
    • Basic grade match. With a ‘Grade’ node, we first white balance the CG we are going to integrate into the plate, measuring the ‘whitepoint’ (the whitest part of the CG) and the ‘blackpoint’ (the darkest part of the CG) while holding ctrl+shift+alt. Subsequently, we go to our background plate and measure the ‘gain’ (for the whites) and the ‘lift’ (for the darks) while holding ctrl+shift. This balances the whites and darks of both plate and CG and integrates them together (a minimal scripted version of this follows the list below).
    • Multipass comp. In this technique, we first need to ‘Unpremult (all)’ our CG so we can start splitting the AOVs or passes. This split is made using a ‘Shuffle’ node set to the desired pass we want to correct. Before editing the passes, we need to make sure to structure the nodes from the CG plate to a ‘Copy’ node with all passes merged together, and double-check that the CG plate looks exactly the same from the initial point (original plate) to the ‘Copy’ node. Sometimes it may look different, as some of the passes could have been exported incorrectly. Once we have split our passes, we can proceed to ‘Grade’ them individually: we ‘Merge (plus)’ the light passes and ‘Merge (multiply)’ the shadows. We can also select an ID to create a colour map with a ‘Keylight’ node. With this node, we can select a specific area of the model to adjust, as its features are separated into different saturated colour mattes. This way, we could then re-texture a part of the model using an ‘STMap’ node connected to the texture source. We can then relight with the ‘position pass’ and ‘normal pass’, followed by a ‘Grade’ of the master CG plate. We finish our beauty rebuild with a ‘Copy (alpha-alpha)’ to copy the original alpha over the one created, and then we ‘Premult’.
  • Motion blur. Motion blur adds realism and dynamism to the movement of the added CG, since in 3D everything looks sharp and in focus, which is not realistic. We can add motion blur using one of two methods:
    • Method 1: add a ‘VectorBlur (rgba)’ node, link it to the ‘camera’, and adjust the ‘motion amount’ in the ‘VectorBlur’ node as desired.
    • Method 2: a ‘Remove (keep)’ node linked to ‘MotionBlur3D’ nodes; adjust the latter's ‘motion amount’ as desired.
  • Chromatic aberration and defocus. We can add an ‘aberration’ node to match the original camera aberration of the live-action plate, making the scene more credible. Also, with a ‘Defocus’ node, we can add depth to the scene to differentiate between the sharp and out-of-focus parts of the image (depth of field). After adjusting these, we need to add a ‘Remove (keep)’ node connected to an ‘STMap’ node to put the original distortion back into the scene.
  • Regrain. We can also add some grain to the scene with a ‘Grain’ node. Then, with a ‘KeyMix (all)’ node linked to the previous changes and the ‘Grain’, we can mix channels and add a mask to the previous changes made in the comp.
  • Effect card. We can add effects like smoke with a ‘Card’ node. We need to connect a ‘Shuffle (rgba to rgba, with R to alpha)’ node to the ‘Card’ and ‘Grade’ it. Then we ‘Copy (alpha to alpha)’ and ‘Premult’ to create the alpha of the effect, and then we ‘Defocus’. This is projected onto a ‘Card’ (connected to ‘Scene’, ‘ScanlineRender’, and ‘Camera’). Finally, we add the ‘STMap’ to unfreeze the frame and ‘multiply’ to show the created alpha.
  • Lightwrap. We use this to add light to the edges, which can be adjusted with ‘diffuse’ and ‘intensity’. We then ‘Merge (plus)’, as this is a light feature.
  • QC. Using the ‘Merge (difference)’ node, we can see and assess the changes made and check whether there are any errors. A ‘Colorspace’ node with the ‘output’ set to ‘HSV’ can be used to check the quality of the colours' hue (R), saturation (G), and luminance (B).
  • Final colour correction.
  • Export. The preferred format for exporting our comp is EXR. Some companies will also want a ‘JPEG’ still, ‘Apple ProRes’, or even ‘Avid DNxHD’, but that depends on the pipeline of each company.
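A minimal Nuke Python sketch of the basic grade match above; the sampled values are placeholders for what ctrl+shift(+alt) picking would return (‘black’ and ‘white’ are the Grade node's internal knob names for the lift and gain sliders):

import nuke

match = nuke.nodes.Grade()
match['blackpoint'].setValue([0.010, 0.012, 0.011, 0])  # darkest CG sample
match['whitepoint'].setValue([0.92, 0.90, 0.88, 1])     # whitest CG sample
match['black'].setValue([0.020, 0.021, 0.024, 0])       # plate lift (darks)
match['white'].setValue([0.85, 0.84, 0.80, 1])          # plate gain (whites)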

The homework for this week was to start putting together the elements that will form part of our garage comp, and also to include the machine provided by the professor, following all the steps we learnt today.

Following the reference pictures we got with the brief, I started to research 3D objects I could include, such as tools, tyres, a table, etc.

I also decided to re-watch this week's recording of the lecture to make sure I followed the compositing process step by step. This way, I started to understand the functionality of each node and technique, and became more confident about creating a whole comp by myself without having to look at references in other comps. The first thing I added was the machine in the back room. I did a beauty rebuild with the separation of the passes and added a smoke effect with a 3D card projection. I feel this part went really well, as I did not have any issues along the way and the final look is pretty realistic.

Garage comp WIP with machine

After the back machine was fully set, I continued adding the 3D geometry and its textures to the comp. One problem I had with the objects is that they were really heavy and really jumpy when following the movement of the scene, so they were hard to work with.

My work-in-progress comp looks like the following:

Garage comp WIP with 3D objects
Categories
Collaborative

Week 5: Second Team Meeting – ‘The Departure Lounge’ sequence

In the second meeting of this project, we already started to discuss how the characters, objects, animals, environment, lighting, colour, and interactivity would look like.

Characters

We started the meeting by discussing what the ghost characters would look like. These characters should be faceless figures, or they could even have eyes, a mouth, and a nose, but showing only very vaguely.

These models will need to be modelled by the 3D modellers and then rigged and animated by the 3D animators; therefore, we will need a clear breakdown of both processes so the models are created and exported correctly. The VR team will also need to let us know how they include elements in Unity and what format they need for compatibility. Later on, we clarified that they would prefer us to export in FBX, but we could also export as OBJ. We will also need to use simple materials such as colours, reflections, etc., but never shaders, as these are completely ignored by Unity when imported into the programme.

  • Male ghost. This is the father, who is trying to stop the mother from euthanising their child, so he should look threatening initially, but he should also transmit his love and desire to keep his child.
  • Mother ghost. She has chosen to euthanise her child and then kill herself, so she should look conflicted and desperate.
  • Child ghost. Running from the mother? Is he aware of the situation?
  • Main character (POV). A tourist looking for evidence of life 300 years after the global disaster (with limited resources, as most technology has been lost).

Artefacts

The main character could receive hints of what happened in this place through the objects they find and the possible interactions they can have with them:

  • Posters/propaganda. Warning signs, posters (showing health, political, or global-warming propaganda), holograms (with instructions or a message).
  • Oxygen bottles. Rusty and empty.
  • Space-like suits. To protect from gas/fog and radiation? (The thin atmosphere caused by global warming lets harmful solar radiation in, which can cause skin cancer or really bad sunburn.)
  • Futuristic tablet devices. Showing holograms with departure lounge escape procedures, or euthanasia options?
  • Sunken or floating objects. Books, helmets, bags, clothes, children's toys (everyday objects, so the viewer empathises with the story – an emotional response).
  • Photographs. Showing ghosts from past memories (happy memories)?
  • CCTV footage. Would this show too much of the story? Would it be too obvious?

Animals/Plants/Insects

These remains of nature could show both the extinction of life on Earth and, at the same time, the new rise of life on the planet with the only survivors (it would give hope).

  • Dead fish in water. This could be part of a past memory. In the present it would look decomposed.
  • Moss. This could help soften the hard angles of objects such as bricks, broken walls, or other environmental assets.
  • Deer. Appearing at the end cell to convey the new rise of life on the planet (hope).
  • Bugs. Such as cockroaches, which are resilient to extreme conditions.

Environment

The environment would look like an airport terminal: an open space with tall ceilings and big windows, but here the area is the waiting room for people who have consented to be euthanised. It would also be a half-collapsed building with the following features:

  • Broken walls, ceilings, and windows. Covered with moss and dirt.
  • Flood. Partly flooded floor with animated ripples and little waves. Can add the floating objects and decomposed fish here.
  • Water drops from the ceiling. From condensation, as it would be a really warm environment.
  • Fog. As CO2 levels are extreme, this dense fog covers the whole scene, giving a monochrome colour feeling.
  • Broken bricks. From collapsed walls and ceilings.
  • Rusty pipelines. Coming out of walls or floor.
  • Door covered in dried blood (stains). It could be one of the cells' doors?
  • Euthanising beds. Inside the cells.
  • Mother and child's skeletons. In the final room, the skeletons of the mother hugging her child.

Lighting/Colour palette

  • Natural light. Coming through broken walls, ceilings, and windows.
  • Flickering lights. Would they still be working after 300 years?
  • Hologram light.
  • Monochrome palette. With desaturated colours, like a sepia photograph.

Interactions

  • Site-specific. A glitch memory is triggered when the viewer reaches a certain area.
  • Grabbing objects. A memory is triggered.

Testing

This week, I have also been testing some hologram effects in Maya, following this tutorial on YouTube:

Hologram tutorial using MASH tool in Maya (CG Artist Academy, 2020)

The final result I got was this:

Hologram effect test in Maya

Although the result was good, after speaking with the VR team, they confirmed that these effects cannot be transferred from Maya to Unity, so it would be better to create this effect directly in Unity.

Miro board

We also organised in Miro the tasks to be done, the role of each member of the group, the assets we needed, the timeline with each week's targets, and links to the group's important shared spaces (like Trello, Google Slides, and OneDrive).

Miro board

References


CG Artist Academy (2020). Maya MASH Holographic HUD Procedural Tutorial (online). Available at: https://www.youtube.com/watch?v=O8He2bcS6ao [Accessed 11 February 2023]

Categories
VFX Careers Research

VFX Careers Research – Job 1

Environment Artist

I find it really satisfying to create scenes from scratch and, step by step, turn them into something that could be part of the real world or at least look like a real place. Therefore, I think the role of Environment Artist would be a good option for me when trying to find a job in the VFX industry.

I consider that a video game company such as Frontier would be a good starting point for this role, as they look for realistic landscapes but do not require the degree of detail that the film industry usually demands. However, since I most enjoy photorealistic environment models in steampunk, cyberpunk, or art nouveau styles, or even futuristic sci-fi space or 80s neon aesthetics, my main goal is to one day make it to a good VFX company that focuses on films with these styles.

Researching the role's characteristics, I found that environment artists are required when the actual environment is too difficult or impossible to film in real life (e.g. post-apocalyptic cities, a space environment, a completely different world, etc.). The brief starts with 2D or 3D digital art made by concept artists, taken as reference material (it could also be photographs of similar places, or sketches). The environment artist then creates the ‘wireframe’ or mesh of the landscape and sculpts the different characteristics of the scene to make it as accurate and realistic as the brief requires. Once the model is done, in big companies where these roles are more specialised, it is passed on to the texturing artists to add textures and make the surfaces look realistic; in smaller companies, environment artists can do both modelling and texturing, or even light the scene.

What I like about modelling overall is that it can be done in a studio as an internal employee, or as a freelancer. I think starting in a studio as an internal employee would give me a bit more stability and security at the beginning of my career, as well as the opportunity to see how more experienced people do the job and pick up some useful tips from them.

I also found two example videos on YouTube that show the role of an environment artist in a video game and in a film.

The first example below shows how, for video games, environments are modelled from scratch, textured, and later layered with the objects and characters of the scene. Colliders must also be set on each object or component of the environment so that the character controlled by the player interacts with the scene. Although over the years the games industry has become more realistic, following the development of new and more powerful software, it still does not reach the photorealism of film. Game design needs a balance between immersive visuals and fluid gameplay, so it is difficult to be perfectly photorealistic.

Video Game Environment Artist Job (InspirationTuts, 2022)

The second example below shows an environment reel made by MPC artists. It is visible that the environments are not 100% modelled; they are a mixture of green/blue screen, modelling, texturing, environment shots from other places (rotoscoped), and colour correction. It is a more difficult task, in my opinion, but it has a more satisfying result.

MPC Film Environmental Reel (MPC, 2021)

References

Framestore (2023). Blade Runner 2049: Art Department (online). Available at: https://www.framestore.com/work/blade-runner-2049-art-department?language=en [Accessed 10 February 2023]

InspirationTuts (2022). Video Game Environment Artist Job (online). Available at: https://www.youtube.com/watch?v=Opn3mhFjDyI [Accessed 10 February 2023]

MPC (2021). MPC Film Environment Reel (online). Available at: https://www.youtube.com/watch?v=47GcYCoBHpw [Accessed 10 February 2023]

Thacker, J (2016). Behind the scenes: the concept art of The Expanse (online). Available at: https://magazine.artstation.com/2016/02/scenes-concept-art-expanse/ [Accessed 10 February 2023]

Categories
Advanced & Experimental Advanced Nuke

Week 4: CG Compositing in Nuke

This week, we studied how to do a CG beauty rebuild, using the channels or passes of our CG to see its layers, adjust them separately, relight them, and put them back together.

To start with the CG beauty rebuild, we first need our CG layers (usually the CG has already been exported like this). We can see all these layers separated in the ‘LayerContactSheet’ node, which shows a view of the passes in the EXR (e.g. diffuse, specular, reflection, etc.). The separation of the EXR into layers or passes (channels) is used to adjust each pass separately to match the lighting and colour conditions of the background. In order to adjust each pass, we first need a ‘Shuffle’ node set to the specific pass (input layer) we need, which we then ‘Merge (plus)’ (+) for the lights (diffuse, indirect, specular, and reflections) or ‘Merge (multiply)’ (*) for the shadows (AO or ambient occlusion, and shadow). Every pass must be graded separately, and then we can add a final ‘Grade’ and/or ‘ColorCorrect’ to the entire asset if needed.

There are several types of ‘render passes’ or ‘AOVs’ (Arbitrary Output Variables):

  1. Beauty Rebuild Passes:
    • Material AOVs. To adjust material attributes (shader).
    • Light Groups. To adjust individual lights of a scene.
  2. Data Passes:
    • Utilities. Combined with tools to get various effects (e.g. motion blur, defocus, etc.).
    • IDs. To create alphas or mattes for different areas of the render.

There are some elements that can be used to double check or improve our CG beauty rebuild quality:

  • Cryptomatte. To see the different parts of the scene as colour mattes.
  • KeyID. To create a mask of the ID pass.
  • AO pass. Creates a fake shadow, produced by the proximity of geometry to other geometry or to the background.
  • Motion pass. Lets us see the blur of the motion clearly.

The process to pull a pass out, edit it, and put it back is the following (a scripted version follows the list below):

  1. Unpremult (all)
  2. Link to ‘shuffle’ node (set with pass needed)
  3. ‘Grade’ and make adjustments needed
  4. Add back with ‘merge (plus)’ or ‘merge (multiply)’
  5. ‘Remove (keep)’ node
  6. ‘Premult’
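A hedged Nuke Python sketch of these six steps for a single pass (‘diffuse’ is just an example layer name; the Read path is a placeholder):

import nuke

cg = nuke.nodes.Read(file='/path/to/cg_render.####.exr')  # placeholder path

unp = nuke.nodes.Unpremult(channels='all')     # 1. unpremult (all)
unp.setInput(0, cg)

# 2. pull out the pass to adjust ('in' is a Python keyword,
#    hence the dictionary unpacking).
shuf = nuke.nodes.Shuffle(**{'in': 'diffuse'})
shuf.setInput(0, unp)

grd = nuke.nodes.Grade()                       # 3. grade and adjust as needed
grd.setInput(0, shuf)

# 4. add it back: 'plus' for light passes, 'multiply' for shadows.
merge = nuke.nodes.Merge2(operation='plus')
merge.setInput(0, unp)   # B: the rest of the rebuild
merge.setInput(1, grd)   # A: the adjusted pass

keep = nuke.nodes.Remove(operation='keep', channels='rgba')  # 5. remove (keep)
keep.setInput(0, merge)

prem = nuke.nodes.Premult()                    # 6. premult
prem.setInput(0, keep)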

Once we have made our colour correction and grading, we can relight the scene with the ‘position pass’, which is the 3D scene expressed in colour values (red=X, green=Y, blue=Z). In order to have a reference for the 3D space, we can use a ‘PositionToPoints’ node with ‘surface point’ set to ‘position’ and ‘surface normal’ set to ‘normal’. We then adjust the point size as we want, and we will see a 3D representation of the colour values. Once the representation is made, we can start to add lights with ‘point’ nodes linked to a ‘Scene’ node to put them together. This scene is then connected to a ‘Relight’ node, which puts light, colour, material, and camera together (use the alpha, and link ‘normal vector’ to ‘normal’ and ‘point positions’ to ‘point’). To merge over the original background, we then ‘Shuffle’ and ‘Merge’.

As the homework of the week, we needed to composite a 3D modelled car onto a background of our choice:

Final car compositing

I feel this practice was simpler than last week's homework; however, I still encountered some challenges that I would like to research and study further, such as adding ‘fake’ lights to the car's headlights so they look switched on, and removing the glow in specific areas, like the one on the right door of the car, which does not really make sense there.

Categories
Collaborative

Week 4: First Team Meeting – ‘Before the Fall’ VR Experience Project

This week, we finally had our first team meeting, to receive a detailed brief of the project and to agree which sequence we are going to focus on.

We all met through Teams to start discussing this project.

Teams meeting with lecturers and external studio partner

The project is based on a post-global-warming VR experience in which the user walks through a dystopian London environment.

The viewer starts in a landing craft, from which they emerge wearing a protective suit, as the environment is shrouded in a dense mist (possibly due to the increase in CO2 caused by an excess of noxious gases produced by humans over the years). In this part, we are thinking of adding a projection of what happened, reflected in the visor of the viewer's helmet.

The viewer can interact with objects found along their way, which trigger flashes of past memories from the same place. These should offer vague suggestions of what happened, but never give the full information.

It could also show the story of a mother who has decided to euthanise her child and then kill herself, since they could not get onto the spaceship taking the most fortunate humans away from the already destroyed planet Earth. A father could also be shown trying to stop the mother from killing the child, as he does not agree with this measure. These stories can be triggered when the viewer suddenly finds the ghosts of these people crossing their path. These flashes of ghostly images should be blurry and unfocused (ghosts from the past), mixed with sounds and effects to add to the experience. The viewer could hear the ghosts' voices from the moment they leave the craft until they find the dead bodies inside a building (the voices could guide the viewer through the environment).

The aesthetic of the scenery would be like Eastern European buildings that are collapsed or unfinished, with a somewhat brutalist (bare concrete) look.

The lecturers made a Miro board so we can keep every update there and have a place where we can all share our thoughts and progress.

Some aesthetic references the professors recommended were from Alien, where there is some static interference that can be taken as an example for the glitches that happen between memories.

(add example images)

Blade Runner 2049's arid and apocalyptic scenography, mixed with the Soviet-era mood of Solaris and Stalker, can also give a notion of how the environment could look.

An aesthetic that I also found interesting and inspiring is that of Alien: Covenant. I found the following video showing the CGI and VFX breakdowns of the different sequences created by MPC.

Inspiration from Alien Covenant aesthetic and VFX made by MPC (TheCGBros, 2017)

We also decided to focus on only one section of the story, as time-wise it is impossible to get the whole experience done. The scene we are going to focus on will be like a beta version of the experience and not a finished piece. After another meeting with the MA members only, we decided that the part of the story we want to focus on is the end scene, as it takes place inside a building, which will be easier to model than the whole outdoor London scenery. We also found the user's interactions with the objects found around the building, until they discover the dead bodies in the final room, interesting.

References

Framestore (2023). Blade Runner 2049: Art Department (online). Available at: https://www.framestore.com/work/blade-runner-2049-art-department?language=en [Accessed 4 February 2023]

Godwin, K. G. (2017). Andrei Tarkovsky’s Stalker (1979): Criterion Blu-ray review (online). Available at: https://www.cageyfilms.com/2017/07/andrei-tarkovskys-stalker-1979-criterion-blu-ray-review/ [Accessed 4 February 2023]

TheCGBros (2017). CGI & VFX Breakdowns: “Alien Covenant” – by MPC (Online). Available at: https://www.youtube.com/watch?v=Yv5FyBK_u5Q. [Accessed 4 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 4: Rube Goldberg machine camera set and render in Maya

In this lecture, we focused on finishing our Rube Goldberg machine's texturing, setting up the camera, and rendering the final outcome.

I continued adding the last textures and finishing touches to the design, such as the finish-line numbers and some more neon lights on the edges of the planks and of other components. I also modelled the light bulbs' buttons to switch them on and textured them with a glow.

Moreover, I decided to animate some arrow lights on top of the initial ramp to add another point of interest to the animation:

Arrow lights animated on ramp

After I finished the texturing, I went on to set up the camera movement using ‘camera and aim’. This way, I only had to set the ‘translate’ of the camera, since the ‘rotation’ is adjusted by the aim. I tried to follow both balls, switching priority between one and the other depending on the point of the animation and which one was more important to follow at the time. Therefore, I not only framed the scene from the front view but also made the camera rotate 360 degrees around the machine, showing its back too.

Camera and aim set up with keyframes on ‘translate’

In the last bit of the scene, when the second ball has to reach the finish line, I had to reduce the duration, since it was way too slow. I selected all the elements of the scene and, in the ‘graph editor’, scaled down the number of frames required for this last movement, reducing it from 800 to 700 frames (a scripted version of this is shown after the preview below). The following video shows a preview of the camera movement I set:

Camera movement preview
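A hedged maya.cmds version of that retime (applied to the whole key range here for simplicity; in practice I only scaled the keys of the final movement):

import maya.cmds as cmds

cmds.select(all=True)  # select every element of the scene, as in the viewport
# Compress the keys between frames 1 and 800 so the action ends at frame 700.
cmds.scaleKey(time=(1, 800), timeScale=700.0 / 800.0, timePivot=1)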

When my animation was fully set, I proceeded to set up the render. I thought of adding a chrome-textured background with the lighting of the skydome I had previously; however, it turned out to be problematic, as there were too many reflections, so the render would take too much time to finish. Maya also started to crash every time I tried to preview the render. Therefore, I decided to get rid of this chrome background and keep the original workshop background. I just lowered the light a bit so the added glows were more pronounced.

I played around with the ‘Camera (AA)’, ‘Diffuse’, ‘Specular’, and ‘Transmission’ samples to get the best result without having to render for too long.

After two days of rendering, this is the final result:

Final render

I really enjoyed this project, and I feel enthusiastic about 3D modelling and animation. I also feel I could improve the render by amending some details, like adding a dark, reflective background to darken the scene and make the neon lights more visible. However, due to limited time, I was not able to do this (but I definitely will if I find some spare time before the end of term 2).