Categories
Advanced & Experimental Advanced Nuke

Week 9: Nuke Homework Q&A Session

We dedicated this lecture to asking all the questions we had about what we have seen this term and about our weekly homework and projects.

Hero shot green screen removal homework correction

In this comp I had an issue keeping the finer details of the girl's hair when removing the green screen and compositing it with the forest background. Also, the snowflakes I keyed to add in the foreground of the scene were barely visible. To recover the hair details, I had to take an 'IBK Colour' node and pick the darks and lights of G (the green channel) so it selects as much hair detail as possible. I can also use 'FilterErode' to remove noise and then add 'patch black' (at around 20) to remove the black parts. Then I can add an 'IBK Gizmo' set to green, linking 'fg' to the green screen plate and 'bg' to the background plate (so it takes the background's features). Next, I tick 'use bkg luminance' in the 'IBK Gizmo' node so it takes the background luminance, and tick 'use bkg chroma' so it takes the background colour too. I can then 'Merge (over)', with the 'A' input linked to the 'IBK Gizmo' and the 'B' input to the background. This takes all the hair detail from the green screen and blends it with the luminance of the new background.

Regarding the snow problem, I was taking the luminance with a 'Keyer' node from the original plate, and I had to connect it to the 'Transform' node instead so it takes the correct aspect ratio. Then, since every time I 'Premult' I need to follow with a 'Merge (over)', I changed to this node. I can also increase or decrease the strength of the effect with a 'Multiply' node.
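
The 'Premult' then 'Merge (over)' rule can be sketched with plain per-pixel maths (this is a hand-rolled illustration, not the Nuke Python API): 'over' expects A to be premultiplied and computes A + B × (1 − alpha).

```python
def premult(rgb, alpha):
    """'Premult': scale the colour channels by the alpha."""
    return tuple(c * alpha for c in rgb)

def merge_over(a_rgb, a_alpha, b_rgb):
    """'Merge (over)': premultiplied A over B -> A + B * (1 - a)."""
    return tuple(a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb))

# A 50%-transparent reddish snowflake pixel composited over a blue background pixel.
fg = premult((0.8, 0.2, 0.2), 0.5)
out = merge_over(fg, 0.5, (0.0, 0.0, 1.0))
```

If the foreground is not premultiplied first, semi-transparent pixels (hair edges, faint snowflakes) come out too bright, which is why the Premult → Merge (over) pairing matters.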

Final green screen removal scene
Final green screen removal scene – alpha

Markers clean-up homework correction

I asked the professor about the distortion I was getting from the smart vector, and he confirmed that the problem was that the node was affecting the whole image. To correct it, I had to add a 'Premult' after the 'RotoPaint' and add a 'FrameHold' again (before the 'STMap'), so the distortion only affects the alpha created with the 'Roto'. I also need to improve the 'RotoPaint' using the techniques for handling light changes.

Final result

Garage comp homework correction

In this comp I had an issue with the shadow being cast on the wall hole. To remove the shadow from the hole, I need to take the previous roto made for that wall and 'Merge (stencil)' it into the shadow part (between the 'Blur' and 'Grade' nodes). Then, before this 'Merge (stencil)' node, we add an 'Invert' node so the roto alpha only covers the hole instead of the wall. To correct some bits outside this previous roto that now show because of the inversion, we make a quick 'Roto' selecting the area we want to keep (the hole in this case), adjust its position across several frames, and then 'Merge (mask)' it to the 'Invert' node ('B' connection). Lastly, we soften the edge of the roto by adding an 'EdgeBlur' so it is less crisp.
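
The 'Invert' plus 'Merge (stencil)' step described above boils down to simple alpha maths; a minimal sketch (plain Python values standing in for pixels, not the Nuke API):

```python
def invert(alpha):
    """'Invert': a wall roto (alpha 1 on the wall) becomes a hole matte."""
    return 1.0 - alpha

def merge_stencil(b_rgb, a_alpha):
    """'Merge (stencil)': keep B only where A's alpha is empty -> B * (1 - a)."""
    return tuple(c * (1.0 - a_alpha) for c in b_rgb)

shadow_px = (0.3, 0.3, 0.3)
hole_matte = invert(0.0)                      # inside the hole: wall roto alpha = 0
cut = merge_stencil(shadow_px, hole_matte)    # shadow removed inside the hole
kept = merge_stencil(shadow_px, invert(1.0))  # on the wall itself, shadow is kept
```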

I also fixed the background objects, as they were looking too dark and the smoke effect was not affecting them, so they did not look realistic. I desaturated their colours by adjusting their 'Grade' nodes and then added a 'Merge (over)' from the smoke card block to these objects.

Final garage comp
Categories
Collaborative

Week 9: Deadline Extension, Retexturing, & Group Showreel

This week, we tried to finish a testing version of the VR experience to show in the exhibition next week. We also discussed with the tutors, and confirmed, our application for an extension of the deadline.

Objects Retexturing

I remodelled and retextured a Pikachu toy downloaded from the internet. First, I removed some bits of the mesh to show its destruction. Then I sent the model to Mudbox, linked the Pikachu textures that came with the downloaded model, created a new material, and, using the program's stencil textures, painted some dirt and mud splash effects on the model to make it look more weathered. I then exported the textures, linked them to the model in Maya, exported the model with textures as FBX, and relinked them in Unity. Once relinked, I exported the model and textures as a Unity package so the VR girls could place the objects directly in the VR scene without having to relink textures.

The rest of the group and I also found some objects online that could be retextured and destroyed to blend into the dystopian environment, such as spectacles, a Converse shoe, a fire extinguisher, an electric box, barriers, etc.

Individual showreel (Not final)

Since we were not sure whether the deadline extension would be approved, we decided to put together a group showreel in which each of us would show their work on the project. Therefore, I exported 360s of the models I had so far and put them together in After Effects with titles describing what I did on each asset. I then uploaded this to our shared drive so Alex could combine all the individual showreels into one.

Team meeting

In this week's meeting, we got confirmation that the extension of our deadline was 90% likely to be approved, so we focused first on the priorities needed to get the testing version done for the exhibition, and then checked what we would need to complete the beta version during the Easter break.

We reviewed the objects we had and the objects we needed for the finished beta version of this experience (not the testing version for the exhibition). We agreed that the objects found online needed to be more decayed as the original models looked too polished and new.

We also reviewed the textures that needed to be added to the environment such as rubble, the doors of the corridor cells (some closed, others half open or destroyed), the dead fish in the fountain, and the rest of the objects found online to make the place look more credible and like there were once humans there long ago.

Categories
Advanced & Experimental Advanced Nuke

Week 8: Markers Clean-up Techniques & Homework in Nuke, & Final Garage Homework Review

In this lecture, we learnt how to remove markers from a character’s face in a live footage scene, and how to add texture and corrections that follow the movement of the character.

Degrain/Regrain techniques

Before starting with marker removal on a live footage shot, it is important to degrain our footage so Nuke can better read and detect the pixel information when we add nodes for clean-up or tracking. If we do this, we will need to regrain the plate once we have finished all our changes, so all added elements have the same grain texture and it looks like everything was filmed in one shot with the same camera and lighting conditions.

  • Simple degrain/regrain. We can extract the grain with a ‘Merge (minus)‘ (plate minus denoised plate) and add it back at the end with a ‘Merge (plus)‘.
  • ‘F_ReGrain’ node. This is an alternative regrain node and it is only available in NukeX. It is more precise than a simple regrain, since it shows less of the patches added to clean up plates.
  • ‘DasGrain’ gizmo. This gizmo can be downloaded from Nukepedia, where there is also a tutorial on how to use it. We plug ‘DasGrain‘ into the original plate and the denoised plate. Then we plug a ‘CommonKey‘ gizmo into the ‘comp‘ and ‘mark‘ inputs of ‘DasGrain’. In the ‘DasGrain’ node settings, we can set ‘output‘ to the desired one (it has different outputs for QC). In the ‘replace‘ tab, we select the area we want to scan (usually the darkest area), then click ‘activate‘ and then ‘analyse‘. This gizmo is increasingly used across VFX companies due to its efficiency and reliability.
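
The simple degrain/regrain round trip above is just a per-pixel subtract and add; a minimal sketch (plain values standing in for pixels, not the actual nodes):

```python
def extract_grain(plate, denoised):
    """'Merge (minus)': the grain is what the denoiser removed."""
    return [p - d for p, d in zip(plate, denoised)]

def regrain(comp, grain):
    """'Merge (plus)': add the extracted grain back over the finished comp."""
    return [c + g for c, g in zip(comp, grain)]

plate    = [0.52, 0.47, 0.51]   # noisy pixels
denoised = [0.50, 0.50, 0.50]
grain = extract_grain(plate, denoised)
restored = regrain(denoised, grain)   # round trip matches the original plate
```

The same subtract/add pair is why the regrain must come last: any patch painted on the denoised plate picks up the original grain on top, so it blends with the untouched areas.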

Patch changing light techniques

When adding patches to clean up markers in our plate, we need to take care of light changes, as the patch could become too obvious:

  • First, we can try to correct the lighting manually by using an ‘Unpremult‘ node, then ‘Grade‘ by hand on the needed keyframes, and then ‘Premult‘ back. This technique is not recommended as it is time-consuming.
  • Divide/multiply technique. ‘Blur‘ the image (add a lot of blur), then clone the ‘Blur‘ node, and add a ‘Merge (divide)‘ to divide one ‘Blur’ by the other. Lastly, ‘Merge (multiply)‘ with the background.
  • Image frequency separation technique. We use the ‘Slice Tool‘ gizmo to analyse a specific area of the plate (a face with markers, for example), and all frames too (a separate gizmo). Then we ‘Blur‘ to get the low frequency of the image and use a ‘Merge (from)‘ node to get the high frequency. With this, when cloning an area with ‘RotoPaint‘ to clean markers, we only paint the low/high frequencies, so the light is not affected (only gamma). This technique is used so light changes do not affect the patched area. We could get the same result with a ‘Laplacian‘ node: we first link a ‘Merge (plus)‘ node to bring back the light and colours from the original plate, then ‘RotoPaint’ the part we want, followed by a ‘Blur‘ to add/remove the required amount of light. Alternatively, we could ‘Blur‘ and ‘Merge (divide)‘ to see and correct the different values, and then ‘Merge (multiply)‘ to merge back (as mentioned before).
  • Interaction patch technique. Add the patch with ‘RotoPaint’ with ‘match move‘, then scan the original plate with ‘Transform‘, ‘Copy (alpha -> alpha)‘, and ‘Premult‘. Then ‘Merge (multiply)‘ with the plate, ‘Regrain‘, and ‘Merge (over)‘ with the main plate.
  • ‘CurveTool’ node. This is used to add/remove information from the plate (for example, to correct flickering). We start by cropping to the info we want: add a ‘CurveTool‘ node, select an area, set ‘curve type‘ to ‘max luma pixel‘, and click ‘go‘ so it starts analysing the area. Then, in ‘max or min luma data‘, we click the icon at the end, then right-click + copy + copy links. Then we go to ‘Grade‘ and ‘paste + paste absolute’ on ‘lift‘ (shadows, or min luma data) and ‘gain‘ (luminance, or max luma data).
  • ‘Roto’ and ‘Transform’ technique. We start with a ‘Transform‘ node, followed by a ‘Roto‘ of the part we want and a ‘Track‘ of the roto. Then we ‘Blur‘ the roto as alpha, ‘Premult‘, and ‘Merge (over)‘ with the main plate.
  • Clone patch technique. First we denoise the plate so we can ‘Track‘ the markers properly (one track per marker). Then we copy translate x and centre x to the ‘RotoPaint‘ node. We make the patch with the clone tool and add a ‘Roto‘ over the cloned area. Finally, we ‘FilterErode‘, ‘Blur‘, ‘Regrain‘, and ‘Merge (over)‘ to the main plate.
  • ‘Premult’ and ‘Unpremult’ paint technique. First, ‘Denoise‘ the plate and ‘Track‘ the marker. Then copy a ‘Roto‘ over the marker, ‘Invert‘ the roto/mask (like a hole), and ‘Merge (mask)‘ it to a ‘Shuffle‘. Then ‘Blur‘ slightly and link it as a mask to the ‘EdgeBlur‘ node that was previously linked to the ‘Merge (mask)’ node. Then we ‘Unpremult‘ and ‘Copy (alpha -> alpha)‘ from the ‘Blur‘ to the ‘Premult‘. Lastly, we ‘Regrain‘ (linked to the original plate), ‘Premult‘, and ‘Merge (over)‘ to the main plate.
  • ‘Inpaint’ technique. It is nearly the same as the previous technique but, instead of inverting the roto and blurring it, this time we use the ‘Inpaint‘ node, which can be tweaked to make the patch blend in.
  • ‘UV map’ technique. When using an ‘Expression‘ node, the R and G channels hold the X and Y coordinates, and the B value is 1, which has no effect on what ST/UV images do. With the ‘Expression‘ node we can ‘RotoPaint‘ specific details such as motion blur or a warp of an image, and then we connect an ‘STMap‘ node to the plate. We could also use a ‘GridWarp‘ node, but since this is a really heavy tool, it is recommended to avoid it if not needed.
  • Vectors technique. As usual, we first ‘Denoise‘ the plate, then use a ‘SmartVector‘ node. This node can work fine with the default settings; however, it is better to increase ‘detail‘ to achieve a better result and have fewer problems with image warp later on. Then we can export this with a ‘Write‘ node, since smart vectors are really heavy and could slow down the preview. Separately, we remove the markers with ‘RotoPaint‘, ‘FilterErode‘, and ‘Blur‘, and we also add a ‘FrameHold‘ node on the reference frame where we are doing the clean-up. Then we add a ‘VectorDistort‘ node that will track the movement of the markers (set ‘output‘ to ‘warped src‘ in this case) following the smart vector map created previously, and then add a ‘Copy (motion -> motion)‘. Separately, we add a ‘VectorToMotion‘ node to add motion blur to the movement of the markers, and then link it to the ‘Copy’ node we added before. Then we add a ‘VectorBlur‘ node (with ‘output‘ set to ‘result‘), ‘Regrain‘, ‘Premult‘, and ‘Merge (over)‘ to the main plate. We could also use an ‘STMap’ after the ‘VectorDistort’, setting the latter's ‘output’ to ‘ST map’ instead. This is better than ‘warped src’, since the ST map is lighter. Smart vectors can also be used to add texture.
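
The frequency-separation idea from the list above (‘Blur’ for the low frequency, ‘Merge (from)’ for the high) can be sketched in plain Python. The key property is that low + high rebuilds the plate exactly, so paint applied to one band leaves the other untouched (illustrative only, not the Nuke API):

```python
def box_blur(pixels, radius=1):
    """A tiny 1-D box blur standing in for the 'Blur' node."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def split_frequencies(plate):
    low = box_blur(plate)                        # 'Blur' -> low frequency
    high = [p - l for p, l in zip(plate, low)]   # 'Merge (from)' -> high frequency
    return low, high

plate = [0.20, 0.80, 0.40, 0.60, 0.50]
low, high = split_frequencies(plate)
rebuilt = [l + h for l, h in zip(low, high)]     # 'Merge (plus)' restores the plate
```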

Homework – Face markers clean-up

This week's homework was to remove the markers from a live footage shot of a girl moving her face. I first tried tracking the markers with a regular ‘Tracker’ node and then linking it to the patches made on each marker. This technique is quite straightforward but time-consuming, since the ‘Tracker’ was also failing to track properly, so I had to move the tracker point manually to the correct spot in most of the frames. Also, some of the patches are visible when the girl looks to the sides.

I also tried a different technique, this time using a ‘SmartVector’ node. This technique is really quick when it works well; however, I am struggling with the distortion of the face when the girl moves her head.

I think I may be doing something wrong, as it is distorting the whole image and not just the patches added. I will have to ask Gonzalo in the next class (final result added in the Advanced Nuke – Week 9 post).

Final Garage Comp

Since I could not go to class in person this week as I was ill, I did not have the chance to ask the questions I had regarding the shadows in my garage comp. Therefore, I emailed Gonzalo with a version of my comp attached and my question about the shadows being too harsh, and he sent me back a solution. It turns out I had to add another ‘Shuffle’ + ‘Blur’ and mask-link it to another ‘Grade’ node connected to the main plate, as shown below:

Garage Comp

However, I still had the issue of the shadow casting on the wall hole. I tried to add a ‘Merge (stencil)’ node using the previous wall roto I had; however, it was not working, as it was cropping the whole wall and not just the hole. I will ask the professor about this next week (final result added in the Advanced Nuke – Week 9 post).

Categories
Collaborative

Week 8: Environment/Memory Objects Modelling, Re-texturing, & Deadline Extension

This week, we focused on the interactive objects we had for the memories, and on the environment texturing and props.

Objects

During this week I worked on some of the objects needed for the memories so 3D Animation could start animating them.

I started with the diary, as Veronika already had the bugs model rigged. So I made a standard hardcover book model that can be opened in the middle, with one of the pages able to be bent and animated too.

Then I also tried to model a fingerprint scan machine for the lobby memory. However, as I was really undecided on how to make this one, I researched some references like the following:

I liked the simple style of this scanner, and I also liked that it is not just a fingerprint scanner but a whole-hand scanning machine. However, since the walls were going to be curved, I figured I could add a stand underneath it so it does not need to be attached to the wall. The final model I came up with is the following:

I also found a Pikachu toy online, which I plan to remodel a bit in Maya; then, in Mudbox, I will add dust and mud textures to show decay and the passage of time. Some other models were bought with the budget we had approved for this project, so in order to keep track of this, we also made a list in Miro.

I also researched a bunch of environment materials to fill up the space and give it the dystopian look we are after:

Team meeting

In this week's team meeting, we discussed the possibility of requesting an extension of our deadline so we would have enough time to finish the beta version of this project. We need an extension because we got the brief of the project late, and the budget we needed for the models bought online was also approved late, so we have been falling behind week after week due to these inconveniences.

Regarding the project feedback, in this meeting we mostly spoke about the environment and the objects we have, and what we still needed to make or improve.

My hand scanner model was disregarded, as it does not fit the style the professors wanted. They also suggested more objects for the memories and provided a list in Miro of what they wanted.

They also requested an Excel sheet with all the models we use in the VR experience, including our modelled objects and the ones downloaded from the internet. This can be found in the following link – https://artslondon-my.sharepoint.com/:x:/r/personal/r_li0920182_arts_ac_uk/_layouts/15/Doc.aspx?sourcedoc=%7B1FC2C2BE-FCE2-4A4C-938A-1A74FDE8E902%7D&file=Departure%20Lounge%20Bid%20Proposal.xlsx&action=default&mobileredirect=true&DefaultItemOpen=1&login_hint=n.gonzalezsanchez0320221%40arts.ac.uk&ct=1680215854179&wdOrigin=OFFICECOM-WEB.MAIN.REC&cid=b65fb2b8-7592-43fd-a689-103d6a2168cb

References

Mainguet, J. (2018). Biometrics movies 2018 (online). Available at: https://biometrics.mainguet.org/movies/ThePredator_hand.jpg [Accessed 4 March 2023]

Categories
Advanced & Experimental Advanced Maya

Week 8: Lighting in Maya, & Rendering in UAL Render Farm

This week, I set up the lighting of the model in Maya to prepare it for rendering on the UAL Render Farm.

I started to play with the lighting of the model in Maya and also added a background. I wanted a simple background, as the model is crowded enough and I did not want the background to take attention away from it. Therefore, I found a stone-like texture, mostly black. Then, following the colour palette of the model, I added blue and purple back lights to separate the background from the model and to highlight the texture of the background.

Since the space background in the interior of the dome looked a bit crowded, I did some colour correction on it so the stars looked dimmer, and I reduced their number.

Once I had the lighting set up, I started to test the render settings that would capture the maximum detail while not taking too long to render.

Also, since I wanted to import it into Nuke to add shadows and some extra illumination, I decided to export it in different layers: the main solar system structure separate from the dome, the dome's neon half ring, and the background. In order to separate the dome's neon ring from the dome for rendering, I added a plane in between so that, when rendering the alpha, the rest of the dome would not show.

After everything was ready to render, I transferred the project folder to the university computer and set up the UAL Render Farm (Deadline).

Main model render without dome and background

These layers are going to be put together in Nuke and I will add shadows and extra light reflections.

Categories
Advanced & Experimental Advanced Nuke

Week 7: Despill Corrections Tips, Creating Gizmos in Nuke, & Garage Homework WIP

In this lesson, we learnt how to make despill corrections when removing a green/blue screen, saw how to create our own personalised gizmos in Nuke, and lastly asked the questions we had about our garage homework WIP.

Despill correction tips

  1. When keying to remove a green screen and then removing saturation, we can roto some parts of the shot and then link this roto to an ‘Invert’ node so the despill does not affect that specific part.
  2. We can also correct edges with an ‘IBK Colour’ node set to ‘blue’ colour only; then we add a ‘Grade (alpha)’ so it only affects the alpha, correct the edge with ‘FilterErode’ and ‘Blur’, and lastly ‘Merge (screen)’. We could also add an ‘EdgeBlur’ to soften sharp edges and a ‘Clamp’ to keep the merged alpha values in range.
  3. With ‘Add mix’ node, we can merge alphas and can set how much alpha we want to see.
  4. Additive key: after a ‘Merge (minus)’, we desaturate and grade, and then we ‘Merge (plus)’ with a ‘Constant’ node using the green colour as reference.
  5. Divide/mult key: we replace the spill with a ‘Merge (divide)’ of the chroma plate by the chroma reference plate, and then ‘Merge (multiply)’ with the background plate.
  6. When the green/blue screen has different luminance along the shot, we correct it by connecting a ‘Constant’ node set to the colour of the darkest part of the green/blue screen to a ‘Merge (average)’ node, so we create a ‘Constant’ with a colour of the same luminance. Then we ‘Merge (minus)’ with a ‘Keylight’ for the despill.
  7. We could add a ‘Light wrap’ node to add a light glow around specific areas. We will ‘Merge (plus)’ to the background in this case.
  8. An inverted matte can be used to remove light from an outside edge. We just ‘Invert’ the matte, ‘Roto’ the required parts, and ‘Merge (mask)’ to the matte. We could also add a ‘Grade’ node with a mask link to this ‘Merge (mask)’ node to colour correct that specific edge.
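
Several of the tips above come down to limiting the green channel. A common despill limiter (one of several variants that keyers like ‘Keylight’ or ‘IBK’ expose, written out as a plain-Python sketch rather than the actual node maths) clamps G to the average of R and B:

```python
def despill_green(rgb):
    """Average limiter: green may never exceed (R + B) / 2."""
    r, g, b = rgb
    return (r, min(g, (r + b) / 2.0), b)

spilled = (0.20, 0.90, 0.30)     # green spill bounced onto hair/skin
clean = despill_green(spilled)
neutral = despill_green((0.50, 0.20, 0.50))   # non-spill pixels are untouched
```

Pixels with no excess green pass through unchanged, which is why a roto + ‘Invert’ mask (tip 1) is only needed for parts that are legitimately green, like the eyes mentioned below.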

How to create gizmos

First, we select the nodes we want in the gizmo and group them (Ctrl + G). Then, in the created group's node options, we click the ‘edit’ button and drag and drop the features we want (controllers). We can label these controllers by clicking the little circle next to them. We then link each controller with the node's controller (hold Ctrl + drag and drop from the main node to the grouped node).

Green screen and despill homework

The homework for this week was to remove the green screen of a hero shot girl scene and add it to a snowing forest background, as well as add some of the background snow to the foreground.

First, I used a ‘Keylight’ node set to detect just the green colour and a ‘Merge (minus)’ to the main plate to see only the greens of the shot. Then I added a ‘Roto’ on the girl's eyes and inverted it, linked as a mask to the saturation node, to preserve the little amount of green in her eyes. Then I linked a ‘Merge (multiply)’ node from the background plate to the foreground plate to bring some of the background's luminance onto the girl. This was also ‘Merge (plus)’ed to the foreground to add that luminance to the scene.

Separately, in another block of nodes, I used an ‘IBK Colour’ node to key the green screen of the foreground. Then I desaturated it and added another ‘Grade’ with ‘FilterErode’ and ‘Blur’, using a ‘Merge (screen)’ with the previous ‘Grade’ to bring back more detail and luminance from the girl's hair. Then I copied this alpha to the main foreground alpha, and used ‘AddMix’ to blend these alphas with the background plate.
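
The ‘Merge (screen)’ used here to recover hair detail brightens without clipping; per channel it is 1 − (1 − A)(1 − B), i.e. A + B − A·B. A quick sketch (plain Python, not the Nuke API):

```python
def merge_screen(a_rgb, b_rgb):
    """'Merge (screen)': A + B - A*B per channel; stays <= 1 for inputs in [0, 1]."""
    return tuple(a + b - a * b for a, b in zip(a_rgb, b_rgb))

out = merge_screen((0.0, 0.5, 1.0), (0.5, 0.5, 0.5))
```

Unlike a plain ‘Merge (plus)’, screen can never push a channel above 1, which is why it is the safer way to layer extra hair luminance on top of an existing grade.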

In the background plate, I used a luminance ‘Keyer (alpha)’ node to select only the colour of the falling snowflakes. Then I copied this alpha to the background plate and premultiplied it to create the alpha that would be added to the foreground with a ‘Merge (over)’ node.
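
The luminance keying step can be sketched as mapping each pixel's luma between two range handles (Rec. 709 weights assumed here; this is a hand-rolled illustration, not the actual ‘Keyer’ node maths):

```python
def luma(rgb):
    """Rec. 709 luminance."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma_key(rgb, low, high):
    """Alpha ramps from 0 at 'low' to 1 at 'high' luminance, clamped to [0, 1]."""
    a = (luma(rgb) - low) / (high - low)
    return max(0.0, min(1.0, a))

snowflake = luma_key((0.9, 0.9, 0.95), 0.5, 0.8)   # bright pixel -> full alpha
night_sky = luma_key((0.05, 0.05, 0.1), 0.5, 0.8)  # dark pixel -> no alpha
```

Tightening the low/high range is what makes faint snowflakes either show up or disappear, which matches the visibility problem described above.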

Finally, I colour corrected and graded the overall result and rendered the alpha and the final comp.

Final hero shot
Alpha version

I am not totally sure about the amount of hair detail visible in this version, so I will ask the professor in the next class (corrected version added in the Advanced Nuke – Week 9 post).

Garage homework WIP

Lastly, I asked Gonzalo about my issue with the shadows not showing in my garage comp. He found that the ‘Grade’ node after the ‘Shuffle’ node used to create the shadows' alpha had the ‘black clamp’ option ticked, so I had to deselect it and select the ‘white clamp’ option instead so the blacks of the shadows would start to show. However, although the shadows were finally showing, I felt they were too harsh and saturated, and I could not figure out how to soften them. I tried to grade and desaturate them, but they still looked too black and unnatural to me. Also, the hole in the wall is receiving the shadow from the chain hanging on the wall, and it looks like there is a plane receiving this shadow. This issue is due to the card added on that wall to receive the cast shadow, so I tried adding a ‘Merge (stencil)’ from the roto I have of that wall, but it did not work for some reason. I will have to ask Gonzalo in the next class.

Categories
Collaborative

Week 7: Team Meeting – Environment, Characters, Objects, Animation, & Interactions

This week, I had a meeting with the girls at the beginning of the week to review what we had done and organise what we needed to do, and another meeting with the lecturers and external studio partners to agree on final designs and interactions.

This week I finished modelling the child's ghost with textures, taking the following as references:

Also, we decided the team needed to meet with the lecturers to discuss technicalities and issues we could have when importing meshes, textures, and animation into Unity. The VR girls confirmed that Unity can accept simple textures with normals, bump maps, roughness, etc., but that these should be embedded in the FBX file exported from Maya.

Since I am not familiar with exporting FBX from Maya, and even less with Unity, I downloaded Unity so I could test my models directly in it before sending the final ones. The VR girls explained that the textures seemed to need relinking in Unity, which could be a problem and time-consuming. Therefore, we agreed that, since I know how and where the textures should be linked, I would test the models in Unity, relink the textures, and export a Unity package with the model and textures already set. Also, I had to redo the UV maps of the ghost models, as when I combined all the parts of the model into one single mesh in Maya, the UV maps got messed up. I also tried to simplify the model by removing the double faces of every part and getting rid of unnecessary mesh that was not going to be seen (like the torso, arms, legs, and feet).

We also thought of the possible memories that could appear when triggered by object interactions (we made a list in Miro):

Also, MA VFX gave our first collaboration project presentation to the class and the tutor, so we put together a PowerPoint presentation using some of the slides from the 3D Animation and VR presentations as reference (all of us contributed to these presentations).

Team meeting

Later this week, we had the meeting with the lecturers and the external studio partners. We discussed the possibility of buying assets using the budget offered by the university, as we had lots of assets to model and very limited time. Ana then asked us to make a list of assets, marking which ones we had to model, which ones we could download for free, and which ones we needed to buy.

We also reviewed the model of the environment Martyna and Jess had been putting together:

  • Corridor:
    • It needs to be longer and have the final room in between (not at the end). So each side would have 5 doors and then the third door on the left would be the final pod.
    • It should be a dark place with just one point of natural light at the end.
    • It could have stripe lights that are not working (or one can be flickering). These lights need to be organic and not straight industrial lights.
    • It should have broken doors with holes in them. We need to add something in the way of the broken doors, so the main character cannot access the rooms.
    • We considered the idea of these corridors being underground or flooded but, due to time limitations and the fact that the whole structure would need to be redesigned, we disregarded it. Instead, we considered adding ramps that slightly change the level of the floor in the transition from the waiting area to the corridors.
    • One of the rooms could have an image of trees moving with the wind shown in a screen.
    • The environment needs to look close to the present day (familiar), so the user can empathise with it (not futuristic or high-tech).
    • Natural light overall. We can have working lights (when switched on), but not all of them are working (some could be flickering or hanging from the ceiling).
  • Final room:
    • It has natural light (a single light source would be more dramatic).
    • It has a broken wall where the deer will appear.
    • Everything is decrepit and broken.
    • The light could be moving with the reflection of the water ripples (from the half-flooded floor).
  • Main waiting room:
    • We need to agree on the textures of the walls, which would be broken, mouldy, with rusty pipes showing, etc.
    • The ceiling will also have some broken parts with natural light coming through.
    • The walls could have panels, decoration from the past, murals that are aged, etc.
    • We need to add more than one corridor that would be accessible from the waiting room (but not operational for this beta version of the VR experience, just one corridor will be accessible).
    • In the middle, there is a pond with water and dead fish floating.
  • Objects/memories:
    • Teddy bear. We thought of making it look like it was made out of scrapped materials, like found metallic parts; however, we need to be careful not to make it look too futuristic, so we disregarded this idea. Maybe we could change it to a toy car, for example. This toy will show the memory of a child playing with it and then being grabbed by their mother to be euthanised, so the child drops the toy.
    • Fingerprint scan. In this memory, we thought of showing someone being forced to scan their fingerprint to sign their contract for being euthanised. However, after discussing this with the lecturers, we felt this should be shown as a choice rather than something forced. So instead, we thought of a parent grabbing the child's hand to help them scan it.
    • Headset. It could reproduce the sound of a radio station that was trying to bring some happiness to people (happy music being cut by the voice of the radio presenter).
    • Diary. This object will only trigger a voice over of somebody reading the diary. When the main character picks the diary and opens it, some bugs would come out from the bottom of it.
    • Poster. It could show health and safety advice like wear a mask, etc.
    • Daily objects. These would not trigger any memory but could show some bits from the past, like a converse shoe, a diary, etc.
    • Megaphone. From where they made the announcements calling people's turn to die.

We also talked about how VR will construct the UI of the VR experience. The lecturers pointed out that this needed to be more an emotional experience than a game, so the user's attention should be driven through the environment using sounds or vague scenes from the past (memories) triggered by passing near a place or touching an object. Some points mentioned regarding the UI were the following:

  • Main character's POV. It could show the texture of a helmet (like scratched or dirty glass). We could also add some info showing on the helmet's glass, like icons (these are less distracting and less intrusive than letters). We could show UI for the memory object, but we should not give away much information.
  • Lobby/entrance. We could add a welcome hologram.
  • Final room. There is a mechanism in the wall that brings up a cell with the human skeletons of the mother and child hugging. Furthermore, in the collapsed wall, there is a deer that looks up at us and then leaves (showing first the deer and then the skeletons). We could hear the singing of the ghosts.

Categories
Advanced & Experimental Advanced Maya

Week 7: Loop Animation MASH, Texturing, & Final Design in Maya

This week, I focused on finalising the model and adding textures. I also added a final animation using MASH.

I decided to change the base of the model to a straighter surface with ‘windows’ on it and stars as decoration. I tried to add a tinted glass texture to the window interiors so that, mixed with the surface underneath, it would look like translucent glass. Then I added the platform and the dome of the model, and I also created a ‘mechanism’ based on a comet shape pushing a spinning plank that stops the handle from rotating (using manual keyframing). In addition, I added some stars rotating around the model, using a torus as a reference shape and then positioning and rotating the stars with the MASH Distribute node.
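The idea behind distributing the stars evenly around the torus can be sketched outside Maya as simple trigonometry. This is only an illustrative stand-in for what a MASH Distribute node computes internally (the function name and the outward-facing rotation convention are my own assumptions, not Maya API calls):

```python
import math

def ring_positions(count, radius):
    """Evenly space `count` points on a circle of the given radius in the
    XZ plane, each paired with a Y rotation (in degrees) so the instance
    faces outward from the centre - roughly what distributing stars
    around a torus-shaped reference mesh produces."""
    points = []
    for i in range(count):
        angle = 2.0 * math.pi * i / count       # fraction of a full turn
        x = radius * math.cos(angle)            # position on the ring
        z = radius * math.sin(angle)
        rotate_y_deg = math.degrees(angle)      # orient instance outward
        points.append((x, z, rotate_y_deg))
    return points
```

With `count=12` this gives twelve stars 30° apart; in Maya itself the same result comes from the Distribute node’s mesh mode, without any scripting.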

Once I had the final model built, I started adding textures and lights. I added a space-like background to the interior of the dome and a yellow glow ring to the dome’s edge. Then, I added brushed metal and gold textures to the poles, gears, and satellite rings. For the planets’ actual rings (not the satellite rings), I added a texture taken from a picture of the real rings of Saturn, so the two can be differentiated. For the Sun, I added a bubble-like texture that, changed to a yellow-orange colour, looks like the Sun’s surface; I added this texture so the glow did not look like a flat, pale yellow. I also added a more saturated glow to the planets and satellites, plus some neon yellows, purples, and blues on certain edges to make them stand out from the dark background.

The base is mostly purple, with wooden window frames, purple translucent glass for the windows, and golden stars for decoration. The handle and the spinning plank are textured in wood and brushed metal, and the comet has a fire-like texture and glow (like I did with the Sun). The base has a wooden floor pattern with a golden edge and a golden trail where the comet’s rotating mechanism is supposed to be attached.

Categories
VFX Careers Research

VFX Careers Research – Job 3

VFX Compositor

Several weeks ago, I was not sure if compositing was for me. However, after developing my skills in Nuke and seeing how assets in different formats, lighting, textures, etc., can be put together using many different techniques to create a final scene or sequence, it has really captured my full attention and interest. I love modelling and texturing, but I also enjoy putting everything together and creating different environments with varied effects, lighting, and colours. Being responsible for the final look of a piece could be overwhelming, but also really rewarding in the end.

A VFX Compositor is in charge of creating the final look of a scene or sequence, taking all the digital materials needed, such as live-action footage, CGI, and matte paintings, and combining them into a single shot. These materials are blended so that they look like they belong together in the same scene. A key skill for compositors is creating realistic lighting, since relighting the elements to make the shot convincing to the viewer’s eye is important to the success of the sequence. Another aspect to take into consideration is ‘chroma keying’, a technique in which a specific colour or lighting range of a shot is picked out to be altered or replaced. This method is commonly used with a ‘green/blue screen’, where a saturated green or blue background is placed behind the live-action footage and then replaced in post-production with the desired background or CGI.
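The core of chroma keying can be shown in a few lines of maths: derive an alpha matte from how strongly green dominates the other channels, then merge the foreground ‘over’ the background. This is only a minimal sketch of the general idea (the function, threshold, and softness values are my own assumptions), nowhere near the sophistication of Nuke keyers like IBK:

```python
import numpy as np

def green_key_over(fg, bg, threshold=0.1, softness=0.1):
    """Replace a saturated green background in `fg` with `bg`.

    `fg` and `bg` are float arrays of shape (H, W, 3) in [0, 1].
    Alpha comes from how much the green channel exceeds red/blue,
    then the images are combined with a standard 'over' merge.
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Green dominance: how much G exceeds the larger of R and B.
    green_dominance = g - np.maximum(r, b)
    # Map dominance to alpha: 1 = keep foreground, 0 = show background.
    alpha = 1.0 - np.clip((green_dominance - threshold) / softness, 0.0, 1.0)
    alpha = alpha[..., None]
    # 'Over' merge, as in Nuke's Merge (over): A*alpha + B*(1 - alpha).
    return fg * alpha + bg * (1.0 - alpha)
```

A pure-green pixel keys out fully (alpha 0), a neutral grey pixel keeps alpha 1, and pixels in between, like fine hair detail, get a partial alpha, which is exactly why edge detail is the hard part of keying.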

What I like most about compositing is the sheer variety of environments that can be built this way, since the only limit is the imagination. There are many examples of amazing environments created like this, but one that has caught my attention is the ‘upside-down world’ created for the Netflix series Stranger Things:

Stranger Things series’ upside-down environment (DNEG, 2022)

This shot was a one-minute-and-six-second master shot against a blue-screen background, composited from five different plates, four characters, CGI creatures, and an environment made of both real scenography and VFX. To make it look realistic, the visual effects had to match the live-action footage so that there is no sense of a low-quality background. Therefore, they used highly detailed assets and textures that, at the same time, had to be optimised for efficiency. This demonstrates the level of expertise a compositor needs to balance high quality against efficiency, which requires knowledge and resources usually acquired over years of practice and experience.

Apart from the technical and creative sides of the job, it is also important to be aware of, or at least observant about, the physics of our surroundings: for example, the difference between how the leaves of a tree move when the wind blows and when a helicopter is approaching, or how cast shadows differ depending on the time of day, the texture of the surface, any artificial light added, etc. DNEG’s work on the Uncharted film shows many examples of the techniques used to make these physics look as realistic as possible:

Uncharted VFX breakdown (DNEG, 2022)

This position requires a lot of attention to detail and a thorough understanding of the software used, such as Nuke, since I have often found myself changing the aesthetic of a scene because I lacked the knowledge to tweak certain features the way I wanted. Getting stuck in the process is certainly how a person learns and develops, and it also builds problem-solving skills; however, I can understand why this position is not offered at entry level, since it requires refined skills and efficiency, which I hope one day to achieve.

References

DNEG (2022). Behind the VFX | Stranger Things Season 4 | DNEG (online). Available at: https://www.youtube.com/watch?v=RYP8yscXFyY [Accessed 24 February 2023]

DNEG (2022). Uncharted VFX Breakdown | DNEG (online). Available at: https://www.youtube.com/watch?v=McI9uFac_hw [Accessed 24 February 2023]

Categories
Advanced & Experimental Advanced Maya

Week 6: Satisfying Loop Animation Basic Model & Animation

This week, I tried to finish the base model, adding all the details and animations required so that it is ready to be textured.

This week I focused on adding all the details, like the gears, the planets’ rings and satellites, the handle, the base, and further decorative touches like the star and the half ring around the Sun. I also keyframed the rotation of the gears and the handle: the handle starts rotating, and the gears and the planets attached to them move at the same time.

The overall model follows a consistent aesthetic; however, I feel like I am going to change the base, as I am not convinced by the shape I gave it. I am also thinking of adding a dome that could have a space texture as its background.