Categories
Advanced & Experimental Advanced Nuke

Week 10: Term 3 Group Project Brief, & Homework Q&A

In this lecture, we were introduced to the term 3 group project and reviewed the devil comp homework.

Term 3 group project brief

The deadline for the term 3 group project is 29 June 2023 (an 8-week duration). Alongside this group project, we will also have a personal project.

We will have to create a futuristic spaceship room based on one of three themes: steampunk, cyberpunk, or lo-fi sci-fi. This project will simulate the work pipeline of a professional VFX studio.

We will be given live footage of a corridor/room and will have to comp it with CG elements to make it look futuristic. We will have to choose our groups, and each team member will have to pick a role. We will need to organise the project with our team members, taking care of:

  • Research of ideas, and moodboard build up.
  • Planning of assets that are going to be needed.
  • Planning each team member’s task weekly.

Every week, this project will be reviewed and notes will be shared that will need to be addressed by the next review. We will work with the ftrack programme to be able to see notes, the calendar, objectives, etc. Every time we present assets, we will need to present them in detail, showing the topology, texturing, and lighting. Also, Dom Maidlow (CG Generalist) will be teaching us how to track plates and how to import them into Nuke.

The final outcome required is a 10-second animation, and it will have to follow the requirements stated in the brief provided.

Devil comp homework

In this comp, we were asked to add textures and elements to live footage of a devil man. I researched skin textures showing scarring or open wounds, and I found a dragon-like texture that, if graded and colour corrected, could look like what I had in mind. I also wanted to add a wound with surgical stitches over the closed eye and a satanic tattoo on his forehead. In addition, I added some fire to the horns, and for the environment, I found a video with smoke, fire, and sparks.

For the face texture, I used ‘Roto paint’ on part of the face to get rid of the hair on one of the sides. Then I also added a dragon-like texture for a more interesting look. I also covered one of the eyes and added a stitches texture on top. Then I added a ‘Vector Distort’ so the alphas created with the textures are warped following the movement of the face.
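As a rough intuition for what ‘Vector Distort’ does here — pushing each pixel of the texture along a per-pixel motion vector so the alpha follows the face — here is a tiny nearest-neighbour sketch in plain Python (illustrative only, not Nuke’s implementation):

```python
# Illustrative warp sketch -- not Nuke code. Each output pixel samples the
# source at its own position minus the motion vector for that pixel.

def warp(src, vectors):
    """src: 1-D list of pixel values; vectors: per-pixel integer offsets."""
    out = []
    for i, v in enumerate(vectors):
        j = i - v                      # look back along the motion vector
        out.append(src[j] if 0 <= j < len(src) else 0.0)
    return out

texture = [0.0, 1.0, 1.0, 0.0]        # a small alpha patch on the face
moved = warp(texture, [1, 1, 1, 1])   # face moved one pixel to the right
```

With a uniform vector of 1, the patch simply shifts one pixel; in Nuke the smart vector map supplies a different vector per pixel, which is why the patch can stretch and bend with the face.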

For the forehead tattoo and the fire on the horns, I used a 3D track of the man’s face and created 3 cards, taking as reference points the middle of the forehead and the top tips of both horns. Then I projected the tattoo texture and the fire footage onto those cards and colour corrected them. Since the fire video was flat and the man’s head was moving from side to side, I had to adjust the rotation of the cards, adding some keyframes when the head rotated.

Lastly, I added some sparks footage on the foreground, and colour corrected the whole comp.

Final comp

References

Algol, M. W. F. This 666 Devil Satan Pentagram Black (online). Available at: https://www.nicepng.com/ourpic/u2q8t4q8e6o0e6q8_666-devil-satan-pentagram-black-freetoedit-michael-w/ [Accessed on 19 March 2023]

Ektoplazm. Over 50 Skin Textures Free Download (online). Available at: http://www.psd-dude.com/tutorials/resources/over-50-skin-textures-free-download.aspx [Accessed on 19 March 2023]

Ezstudio. Orange cloud smoke Fire sparks rising up Free Video (online). Available at: https://www.vecteezy.com/video/5160760-orange-cloud-smoke-fire-sparks-rising-up [Accessed on 19 March 2023]

Videezy. Fire Stock Video Footage (online). Available at: https://www.videezy.com/free-video/fire?page=5&from=mainsite&in_se=true [Accessed on 19 March 2023]

Categories
Collaborative

Week 10: Project Beta Version, Exhibition, & Partner Studio/Lectures Feedback to Testing

This week, we finished the test version of the VR experience and showed it on the exhibition day.

Exhibition – Testing Version

We arrived at the exhibition one hour early so we had time to help the VR girls set up the VR headset on the computer in our designated space. However, arriving one hour in advance did not turn out to be enough, since we had problems linking the VR headset to the programme on the computer. We asked one of the technicians for help and switched to another computer, since the first one did not seem to have the VR application properly set up and fixing it would have taken too long. The other computer did not let us log in either, so in the end we switched rooms and went to the main room, which had a free computer where we could try our luck again. This computer had the same issue with the application, so after spending an hour trying different computers and VR sets, we decided to use Ria’s personal laptop and connect it to one of the desktop monitors so we could see the experience on a big screen. At the beginning the application was lagging a bit, but then it started to work just fine. One of our studio partners tested the experience and was quite happy with what we currently have. He also mentioned some things we could improve during the Easter break, like the texture of the walls, adding more objects, and taking care of proportions (for example, one wheelchair looked really big compared with the rest of the environment).

Some lecturers and students tried it too, and they all seemed impressed by the environment (which is a good sign), but since we could not include any object interaction in this version, the experience was focused only on exploring the environment.

Team meeting

  1. Triggered memories: we debated how we want to show these, as we could show them on the viewer’s helmet as a superimposed 2D video, or as a ghost-like effect in 3D space. The VR girls said that the ghosts in 3D space would be easier than the 2D option, but we would also need full figures of all the ghosts so they could be seen from all perspectives. The memory of the man helping the child scan his hand at the entrance has been changed to a man helping an old lady in a wheelchair scan her hand. This is because we already have another memory with a child (the kid playing with the Pikachu toy and being pulled away by the mother), so it would be too many children’s stories in the experience. For this last memory of the Pikachu toy, we also decided to make it shorter and show just the mother pulling the kid by the arm and the kid dropping the toy.
  2. Objects needed to fill up the environment: we need more objects like folders/files, cans, rusty pipes, a football, a shopping trolley, etc. They also considered that the Pikachu I made needed to be more destroyed, with an arm hanging off and part of the face chopped off. The barriers at the entrance also needed to be more destroyed, and we needed a scan screen to attach to them.
  3. Environment textures: the lecturers and the studio partner considered that the environment looked like a castle, as it was too grey and concrete-like. It needs more colour, such as walls with peeling paint, water stains, advertising murals, graffiti, mould, etc.

Objects retexturing

I destroyed the Pikachu toy a bit more, as requested by the lecturers. I took off half of the face and left one of the arms hanging. I also made some extra holes around the model.

Categories
Advanced & Experimental Advanced Maya

Week 9 & 10: Loop Animation Comp in Nuke

In these two weeks, we could dedicate the lectures to polishing and finishing the loop animation and asking all the questions we had about it.

After I finally got all the layers rendered, I put them together in Nuke. In order to do the shadows, I duplicated the main model and projected it onto a card perpendicular to the main model. Then I colour corrected, desaturated, and blurred this projection to match the shadows of the background. Following on, I thought that the neon ring of the dome looked too flat, so I decided to add a fake neon effect by adding glow, filter erode, and edge blur to create a diffused yellow light, and then duplicated this to create the interior white light of the neon (just desaturating the colour so it looks white). Since this neon half ring is supposed to be reflected in the metallic edge of the model platform, I also added some fake reflections with a ‘Radial’ node that was rotoscoped and graded accordingly (I also added some edge blur and blur to soften the core and edges of the reflection). Moreover, the space texture inside the dome was not very visible, so I rotoscoped the area where this texture is supposed to appear and added the texture manually instead of exporting it again. I also graded it so it did not show too much detail (that was the previous problem I tried to correct in Maya, but for some reason it did not render as it was supposed to). Lastly, I also felt the background looked too flat, so I added some more pronounced shadows on the sides by rotoscoping the areas where I wanted the shadows and adding some edge blur, blur, and a grade to get the shadow colour.
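The fake neon effect described above is essentially additive: a blurred copy of the bright shape is added back over the base image. A rough 1-D sketch in plain Python (illustrative only, not the actual Nuke graph):

```python
# Illustrative 1-D sketch of an additive glow -- not Nuke code.

def box_blur(row, radius=1):
    """A simple box blur standing in for Nuke's 'Blur' node."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

base = [0.0, 0.0, 1.0, 0.0, 0.0]   # a thin bright neon line
halo = box_blur(base)              # diffused copy of the line
# Add the halo back over the base (like a 'Merge (plus)'), clamped to 1.0:
glow = [min(1.0, b + h * 0.8) for b, h in zip(base, halo)]
```

The bright core stays at full intensity while the neighbouring pixels pick up the soft falloff, which is the diffused-light look; grading the halo yellow and a desaturated duplicate white gives the two-layer neon described above.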

Final comp without SFX

Since this was supposed to be an oddly satisfying loop animation, I also decided to include some relaxing background music to improve the experience. The final looping videos were edited in After Effects.

Final Video with SFX

I enjoyed this project very much, and I think I got more confident in my modelling and texturing skills through it. I also learnt the different ways to render a scene in layers to then comp them in Nuke and After Effects, meaning that I do not need to finish everything in Maya and can take advantage of different programmes that specialise in different aspects to achieve better results. I also got more confident with node editing in Nuke, which has always been a bit challenging and intimidating for me, so I feel like this project has helped me expand my creative, technical, and project management skills in many areas.

Categories
Advanced & Experimental Advanced Nuke

Week 9: Nuke Homework Q&A Session

We dedicated this lecture to asking all the questions we had about what we have seen this term and about our weekly homework and projects.

Hero shot green screen removal homework correction

In this comp, I had an issue capturing the finer details of the girl’s hair when removing the green screen and comping it with the forest background. Also, the snowflakes I keyed to add in the foreground of the scene were barely visible. In order to capture the hair details, I had to take an ‘IBK colour’ node and pick the darks and lights of G (the green channel) so it selects as much detail of the hair as possible. I can also use ‘Filter erode’ to remove noise and then add ‘patch black’ (at around 20) to remove the black part. Then, I can add an ‘IBK gizmo’ set to green, and link ‘fg’ to the green screen plate and ‘bg’ to the background plate (so it picks up background features). Then, I can tick ‘use bkg luminance’ in the ‘IBK gizmo’ node so it takes the background luminance, and tick ‘use bkg chroma’ so it takes the background colour too. I can then ‘Merge (over)’ with the ‘A’ input linked to the ‘IBK gizmo’ and the ‘B’ input linked to the background. This takes all the details of the hair from the green screen and adds them over the luminance of the new background.

Regarding the snow problem, I was extracting the luminance from the original plate with a ‘Keyer’ node, and I had to connect it after the ‘Transform’ node instead so it takes the correct aspect ratio. Also, every time I ‘Premult’, I need to follow it with a ‘Merge (over)’, so I changed to this node. I can also strengthen or weaken the effect with a ‘Multiply’ node.
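The premult-then-over rule can be sanity-checked with the underlying pixel math (a plain-Python sketch of a single pixel, not Nuke’s API):

```python
# Illustrative pixel math only -- not Nuke code. Values are
# normalised floats (0.0-1.0) for a single pixel.

def premult(rgb, alpha):
    """Multiply each channel by the alpha (Nuke's 'Premult')."""
    return tuple(c * alpha for c in rgb)

def merge_over(a_rgb, a_alpha, b_rgb):
    """Nuke's 'Merge (over)': A + B * (1 - A.alpha), with A premultiplied."""
    return tuple(a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb))

# A white snowflake keyed by luminance (alpha 0.6) over a dark background:
snow = premult((1.0, 1.0, 1.0), 0.6)
result = merge_over(snow, 0.6, (0.1, 0.1, 0.2))
```

Because ‘over’ expects A already multiplied by its alpha, skipping the ‘Premult’ (or using a different merge operation) double-counts or drops the background contribution, which is why semi-transparent elements like snow end up barely visible.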

Final green screen removal scene
Final green screen removal scene – alpha

Markers clean-up homework correction

I asked the professor about the distortion I was getting from the smart vector, and he confirmed that the problem was the node affecting the whole image. In order to correct it, I had to add a ‘Premult’ to the ‘Roto paint’ and add a ‘Framehold’ again (before the ‘ST map’) so the distortion only affects the alpha created with the ‘Roto’. I also need to improve the ‘Roto paint’ using the techniques for controlling light changes.

Final result

Garage comp homework correction

In this comp, I had the issue of the shadow being cast over the wall hole. To remove the shadow from the hole, I need to take the previous roto made for that wall and ‘Merge (stencil)’ it into the shadow branch (between the ‘Blur’ and ‘Grade’ nodes). Then, before this ‘Merge (stencil)’ node, we add an ‘Invert’ node so the roto alpha selects only the hole instead of the wall. To correct some bits outside the previous roto that now show because of the inversion, we make a quick ‘Roto’ that selects the area we want to keep (the hole in this case), adjust the position of this ‘Roto’ across several frames, and then ‘Merge (mask)’ it to the ‘Invert’ node (‘B’ connection). Lastly, we soften the edge of the roto by adding ‘Edge blur’.
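The stencil/invert logic here can be sketched as pixel math (plain Python, illustrative only, not Nuke code):

```python
# Illustrative pixel math only -- not Nuke code.

def invert_alpha(alpha):
    """Nuke's 'Invert' on the alpha: the wall matte becomes a hole matte."""
    return 1.0 - alpha

def merge_stencil(a_alpha, b_rgb):
    """Nuke's 'Merge (stencil)': B * (1 - A.alpha) -- cuts A's matte out of B."""
    return tuple(b * (1.0 - a_alpha) for b in b_rgb)

shadow = (0.3, 0.3, 0.3)

# Inside the hole the wall roto alpha is 0; inverting it gives 1, so the
# stencil removes the shadow there:
shadow_in_hole = merge_stencil(invert_alpha(0.0), shadow)

# On the wall the roto alpha is 1; inverting it gives 0, so the shadow
# survives untouched:
shadow_on_wall = merge_stencil(invert_alpha(1.0), shadow)
```

This is why the stencil with the uninverted roto cropped the whole wall: without the ‘Invert’, the matte covers the wall rather than the hole, so the stencil cuts the shadow everywhere except the hole.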

I also fixed the objects at the back, as they looked too dark and the smoke effect was not affecting them, so they did not look realistic. I desaturated their colours by adjusting their ‘Grade’ nodes and then added a ‘Merge (over)’ from the smoke card block to these objects.

Final garage comp
Categories
Collaborative

Week 9: Deadline Extension, Retexturing, & Group Showreel

This week, we tried to finish a testing version of the VR experience to show at the exhibition next week. We also discussed and confirmed with the tutors that we would apply for a deadline extension.

Objects Retexturing

I remodelled and retextured a Pikachu toy downloaded from the internet. First, I removed some bits of the mesh to show its destruction. Then I sent the model to Mudbox, linked the Pikachu textures that came with the downloaded model, created a new material, and, using the programme’s stencil textures, painted some dirt and mud splash effects onto the model to make it look more weathered. Then I exported the textures and linked them to the model in Maya, exported the model with textures as FBX, and relinked them in Unity. Once relinked, I exported the model and textures into a Unity package so the VR girls could place the objects directly in the VR scene without having to relink the textures.

The rest of the group and I also found some objects online that could be retextured and destroyed to blend into the dystopian environment, such as spectacles, a Converse shoe, a fire extinguisher, an electric box, barriers, etc.

Individual showreel (Not final)

Since we were not sure if the deadline extension would be approved, we decided to put together a group showreel in which each one of us would show their work on the project. Therefore, I exported 360° turntables of the models I had so far and put them together in After Effects with some titles describing what I did on each asset. I then uploaded this to our shared drive so Alex could put all the individual showreels together into one.

Team meeting

In this week’s meeting, we got confirmation that the extension of our deadline was 90% likely to be approved, so we focused first on the priorities needed to get the testing version done for the exhibition, and then we checked what we would need to complete the beta version during the Easter break.

We reviewed the objects we had and the objects we needed for the finished beta version of this experience (not the testing version for the exhibition). We agreed that the objects found online needed to be more decayed as the original models looked too polished and new.

We also reviewed the textures that needed to be added to the environment such as rubble, the doors of the corridor cells (some closed, others half open or destroyed), the dead fish in the fountain, and the rest of the objects found online to make the place look more credible and like there were once humans there long ago.

Categories
Advanced & Experimental Advanced Nuke

Week 8: Markers Clean-up Techniques & Homework in Nuke, & Final Garage Homework Review

In this lecture, we learnt how to remove markers from a character’s face in a live footage scene, and how to add texture and corrections that follow the movement of the character.

Degrain/Regrain techniques

Before starting with marker removal from a live footage shot, it is important to degrain our footage so Nuke can better read and detect the pixel information when adding different nodes for clean-up or tracking techniques. If we do this, we will then need to regrain the plate once we have finished all our changes, so all added elements have the same grain texture and it looks like everything was filmed in one shot with the same camera and light conditions.

  • Simple degrain. We can degrain plates by subtracting a denoised copy from the original with ‘Merge (minus)’ to isolate the grain, and later add it back with a ‘Merge (plus)’.
  • ‘F_ReGrain’ node. This is an alternative to the ‘Regrain’ node and is only available in NukeX. It is more precise than a simple regrain, since it shows less of the patches added to clean up plates.
  • ‘DasGrain’ gizmo. This gizmo can be downloaded from Nukepedia, where there is also a tutorial on how to use it. We plug ‘DasGrain’ into the original plate and the denoised plate. Then we plug a ‘Common key’ gizmo into the ‘comp’ and ‘mark’ inputs of ‘DasGrain’. In the ‘DasGrain’ node settings, we can set ‘output’ to the desired one (it has different outputs for QC). In the ‘replace’ tab, we can select the area we want to scan (usually the darkest area), then select ‘activate’ and then ‘analyse’. This gizmo is increasingly used across VFX companies due to its efficiency and reliability.
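The simple degrain round trip above comes down to single-pixel arithmetic (an illustrative plain-Python sketch, not Nuke code):

```python
# Illustrative pixel math only -- not Nuke code. A single channel value.

plate = 0.52          # original noisy pixel
denoised = 0.50       # the same pixel after denoising

# 'Merge (minus)': isolate the grain by subtracting the denoised plate.
grain = plate - denoised

# ...clean-up work happens on the denoised plate...
cleaned = denoised    # placeholder for the patched, denoised pixel

# 'Merge (plus)': add the extracted grain back over the cleaned result.
regrained = cleaned + grain
```

Because the grain is extracted from the real plate rather than synthesised, adding it back makes the patched areas match the surrounding footage’s grain exactly.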

Patch changing light techniques

When adding patches to clean up markers in our plate, we need to take care of light changes, as the patch could otherwise be too obvious:

  • First, we can try to correct lighting manually by using an ‘Unpremult’ node, then ‘Grade’ by hand on the needed keyframes, and then ‘Premult’ back. This technique is not recommended, as it is time consuming.
  • Divide/multiply technique. ‘Blur‘ image (add a lot of blur), then clone the ‘Blur‘ node, and add ‘Merge (divide)‘ to merge both ‘Blur’ nodes. Lastly, ‘Merge (multiply)‘ with background.
  • Image frequency separation technique. We use the ‘Slice Tool’ gizmo to analyse a specific area of the plate (a face with markers, for example), and a separate gizmo to analyse all frames. Then we ‘Blur’ to see the low frequency of the image and use a ‘Merge (from)’ node to see the high frequency. With this, when cloning an area with ‘Roto paint’ to clean markers, we paint only the low/high frequencies, so the light is not affected (only gamma). This technique is used so light changes do not affect the patched area. We could get the same result with a ‘Laplacian’ node: we first link a ‘Merge (plus)’ node to bring back the light and colours from the original plate, then ‘Roto paint’ the part we want, followed by a ‘Blur’ to add/remove the quantity of light required. Alternatively, we could ‘Blur’ and ‘Merge (divide)’ to see and correct the different values, and then ‘Merge (multiply)’ to merge back (as mentioned before).
  • Interaction patch technique. Add patch with ‘Roto paint’ with ‘match move‘, then scan original plate with ‘Transform‘, ‘Copy (alpha -> alpha)‘, and ‘Premult‘. Then ‘Merge (multiply)‘ with plate, ‘Regrain‘, and ‘Merge (over)‘ with main plate.
  • ‘Curve tool’ node. This is used to add/remove info from the plate (for example, to correct flickering of the image). We start by capturing the info we want: we add a ‘Curve tool’ node, select an area, set ‘curve type’ to ‘max luma pixel’, and click ‘go’ so it starts to analyse the area. Then, in ‘max or min luma data’, we click on the icon at the end, then right click + copy + copy links. Then we go to a ‘Grade’ node and paste + paste absolute on ‘lift’ (shadows, or min luma data) and ‘gain’ (luminance, or max luma data).
  • ‘Roto’ and ‘Transform’ technique. We start with ‘Transform‘ node, followed by a ‘Roto‘ of the part we want, and a ‘Track‘ of the roto. Then we ‘Blur‘ the roto as alpha, ‘Premult‘, and ‘Merge (over)‘ with main plate.
  • Clone patch technique. First we denoise the plate so we can ‘Track‘ the markers properly (1 track per marker). Then we copy translate x and centre x to ‘Rotopaint‘ node. We do the patch with clone tool and add ‘Roto‘ over cloned area. Finally, we ‘Filter erode‘, ‘Blur‘, ‘Regrain‘, and ‘Merge (over)‘ to main plate.
  • ‘Premult’ and ‘Unpremult’ for paint technique. First, ‘Denoise’ the plate and ‘Track’ the marker. Then copy the ‘Roto’ over the marker. ‘Invert’ the roto/mask (like a hole), and ‘Merge (mask)’ to a ‘Shuffle’. Then ‘Blur’ slightly and link it as a mask to the ‘Edge blur’ node which was previously linked to the ‘Merge (mask)’ node. Then we ‘Unpremult’ and ‘Copy (alpha -> alpha)’ from the ‘Blur’ to the ‘Premult’. Lastly, we ‘Regrain’ (linked to the original plate), ‘Premult’, and ‘Merge (over)’ to the main plate.
  • ‘In Paint’ technique. It is nearly the same as the previous technique but, instead of inverting the roto and blur it, this time we use ‘In paint‘ node, which can be tweaked to make the patch blend in.
  • ‘UV map’ technique. When using an ‘Expression’ node, the R and G channels hold the X and Y coordinates, and the B value is just 1, which has no effect on what ST/UV images do. With the ‘Expression’ node, we can ‘Roto paint’ specific details such as motion blur or a warp of an image, and then we connect an ‘ST map’ node to the plate. We could also use a ‘Grid warp’ node, but since this is a really heavy tool, it is recommended to avoid it if not needed.
  • Vectors technique. As usual, we first ‘Denoise’ the plate, and then use a ‘Smart vector’ node. This node can work fine with the default settings; however, it is better to increase ‘detail’ to achieve a better result and to have fewer problems with image warp later on. Then we can export this with a ‘Write’ node, since smart vectors are really heavy and could slow down the preview. Separately, we remove the markers with ‘Roto paint’, ‘Filter erode’, and ‘Blur’, and we also add a ‘Frame hold’ node on the reference frame where we are doing the clean-up. Then we add a ‘Vector distort’ node that will track the movement of the markers (set ‘output’ to ‘warped src’ in this case) following the smart vector map created previously, and then we add a ‘Copy (motion -> motion)’. Separately, we add a ‘Vector to motion’ node to add motion blur to the movement of the markers, and then we link it to the ‘Copy’ node we added before. Then we add a ‘Vector blur’ node (with ‘output’ set to ‘result’), and we ‘Regrain’, ‘Premult’, and ‘Merge (over)’ to the main plate. We could also use an ‘ST map’ after the ‘Vector distort’ and, in the latter, set ‘output’ to ‘ST map’ instead. This way is better than ‘warped src’, since the ‘ST map’ is lighter. Smart vectors can also be used to add texture.
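The divide/multiply technique in the list above comes down to simple ratio math: dividing a patch by its own heavily blurred copy removes its low-frequency lighting, and multiplying by the background’s blurred copy reapplies the background’s lighting. A single-value sketch in plain Python (illustrative only, not Nuke code):

```python
# Illustrative single-value sketch of the divide/multiply light match.
# In Nuke, heavy 'Blur' nodes supply the low-frequency lighting estimates.

patch = 0.42           # patch pixel under the patch's own lighting
patch_low = 0.40       # heavily blurred patch (its low-frequency light)
bg_low = 0.25          # heavily blurred background (the target lighting)

# 'Merge (divide)': normalise away the patch's own lighting...
flat = patch / patch_low
# ...then 'Merge (multiply)': reapply the background's lighting.
matched = flat * bg_low
```

The fine detail survives as the small ratio above/below 1.0, while the overall brightness now follows the background, so the patch tracks light changes automatically instead of needing per-frame grading.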

Homework – Face markers clean-up

This week’s homework was to remove the markers from a live footage shot of a girl moving her face. I first tried tracking the markers with a regular ‘Tracker’ node and then linking it to the patches made on each marker. This technique is quite straightforward but time consuming, since the ‘Tracker’ was also failing to track properly, so I had to move the tracker point manually to the correct spot in most of the frames. Also, some of the patches are visible when the girl looks to the sides.

I also tried a different technique, using a ‘Smart vector’ node this time. This technique is really quick when it works; however, I am struggling with the distortion of the face when the girl moves her head.

I think I may be doing something wrong, as it is distorting the whole image and not just the patches added. I will have to ask Gonzalo in the next class (final result added in the Advanced Nuke – Week 9 post).

Final Garage Comp

Since I could not go to class in person this week as I was ill, I did not have the chance to ask the questions I had regarding the shadows in my garage comp. Therefore, I emailed Gonzalo with a version of my comp attached and my question about the shadows being too harsh, and he sent me back a solution. It turns out I had to add another ‘Shuffle’ + ‘Blur’ and mask-link it to another ‘Grade’ node connected to the main plate, as shown below:

Garage Comp

However, I still had the issue of the shadow casting over the wall hole. I tried to add a ‘Merge (stencil)’ node using the previous wall roto I had; however, it was not working, as it was cropping the whole wall and not just the hole. I will ask the professor next week about this (final result added in the Advanced Nuke – Week 9 post).

Categories
Collaborative

Week 8: Environment/Memory Objects Modelling, Re-texturing, & Deadline Extension

This week, we focused on the interactive objects we had for the memories, and on the environment texturing and props.

Objects

Throughout this week, I worked on some of the objects needed for the memories so the 3D animation team could start animating them.

I started with the diary, as Veronika already had the bugs model rigged. I made a standard hardcover book model that can be opened in the middle, with one of the pages able to be bent and animated too.

Then I also tried to model a fingerprint scan machine for the lobby memory. However, as I was really undecided on how to make this one, I researched some references like the following:

I liked the simple style of this scanner, and I also liked that it is not just a fingerprint scanner but a whole-hand scan machine. However, since the walls were going to be curved, I figured I could add a stand underneath so it does not need to be attached to the wall. The final model I came up with is the following:

I also found a Pikachu toy online, which I plan to remodel a bit in Maya; then, in Mudbox, I will add dust and mud textures to show decay and the passage of time. Some other models were bought with the budget we had approved for this project, so in order to keep track of this, we also made a list in Miro.

I also researched a bunch of environment materials to fill up the space and give it the dystopian look we are after:

Team meeting

In this week’s team meeting, we discussed the possibility of requesting an extension of our deadline so we would have enough time to finish the beta version of this project. The reason we need an extension is that we got the project brief late, and the budget we needed for the models bought online was also approved late, so we have been falling behind week after week due to these inconveniences.

Regarding the project feedback, in this meeting we mostly spoke about the environment, the objects we have, and what we still needed to make or improve.

My hand scan model was disregarded, as it does not fit the style the professors wanted. They also suggested more objects for the memories and provided a list in Miro of what they wanted.

They also requested that we make an Excel spreadsheet with all the models we use in the VR experience, including both our own modelled objects and the ones downloaded from the internet. This can be found in the following link – https://artslondon-my.sharepoint.com/:x:/r/personal/r_li0920182_arts_ac_uk/_layouts/15/Doc.aspx?sourcedoc=%7B1FC2C2BE-FCE2-4A4C-938A-1A74FDE8E902%7D&file=Departure%20Lounge%20Bid%20Proposal.xlsx&action=default&mobileredirect=true&DefaultItemOpen=1&login_hint=n.gonzalezsanchez0320221%40arts.ac.uk&ct=1680215854179&wdOrigin=OFFICECOM-WEB.MAIN.REC&cid=b65fb2b8-7592-43fd-a689-103d6a2168cb

References

Mainguet, J. (2018). Biometrics movies 2018 (online). Available at: https://biometrics.mainguet.org/movies/ThePredator_hand.jpg [Accessed 4 March 2023]

Categories
Advanced & Experimental Advanced Maya

Week 8: Lighting in Maya, & Rendering in UAL Render Farm

This week, I set up the lighting of the model in Maya to prepare it for rendering on the UAL Render Farm.

I started to play with the lighting of the model in Maya and also added a background. I wanted a simple background, as the model is crowded enough and I did not want the background to take attention away from it. Therefore, I found a mostly black, stone-like texture. Then, following the colour palette of the model, I added blue and purple backlights to separate the background from the model and to highlight the background texture.

Since the space background in the interior of the dome looked a bit crowded, I did some colour correction on it so the stars looked dimmer, and I reduced their number.

Once I got the lighting set up, I started to test render settings that would capture the maximum detail while not taking too long to render.

Also, since I wanted to import it into Nuke to add shadows and some extra illumination, I decided to export it in different layers: the main solar system structure separate from the dome, the dome’s neon half ring, and the background. In order to separate the dome neon ring from the dome for rendering, I added a plane in between so that, when rendering the alpha, the rest of the dome would not show.

After everything was ready to render, I transferred the project folder to the university computer and set up the UAL Render Farm (Deadline).

Main model render without dome and background

These layers are going to be put together in Nuke, where I will add shadows and extra light reflections.