Categories
Advanced & Experimental Advanced Nuke

Week 3: Types of 3D Projections in Nuke

In this lesson, we saw the different techniques that can be used for 3D projection, such as patch projection, coverage projection, and nested projection, and we also analysed how to add texture and lighting to a 3D object, as well as the general problems we can encounter with this.

In 3D tracking, we should try to avoid including the sky, as it would give us problems later on, in the same way that we avoid moving objects or reflections in roto.

When adding a ‘rotopaint’ to a card in 3D space, we first need to freeze the frame with a ‘frame hold’ node at the position in the sequence with the best visibility for tracking a specific point. Then we add the ‘rotopaint’ or the patch we need, and add another ‘frame hold’ to ‘unfreeze’ the frame. We then premultiply it to create an alpha and use a ‘project 3D’ node to project it onto our card (the ‘project 3D’ node must be connected to the projection camera through another ‘frame hold’ node). Lastly, we connect our card to the ‘scanline render’ node, which is then merged with the main plate.
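Conceptually, what the ‘project 3D’ node does is map each point of the card through the projection camera back into the held frame. A minimal pinhole-projection sketch in Python (the focal length and point are made-up illustrative values, not taken from the lesson):

```python
def project(point, focal):
    """Project a 3D point (camera space, z pointing away from the
    camera) onto the 2D image plane of a pinhole camera with the
    given focal length."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal * x / z, focal * y / z)

# A point on the card, 4 units in front of the projection camera:
uv = project((1.0, 2.0, 4.0), focal=2.0)
print(uv)  # -> (0.5, 1.0)
```

The frame holds exist so that this mapping is computed for one fixed camera position, which is why the patch sticks to the card as the real camera keeps moving.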

In order to add texture to a ‘card’ in 3D space, we use the same method as before, but this time we take the texture or picture that we want to add, which we can ‘colour correct’ and ‘grade’ if needed, then ‘roto’ the part we want to use from it, premultiply it, and place it in the desired perspective with ‘corner pin 2D’. We then ‘transform’ it to the dimensions we want and ‘merge’ it with the main plate after adding a ‘frame hold’. Lastly, we need to ‘copy’ the roto and premultiply it so we can project the alpha onto our ‘card’.

If we want to roto something in the scene to change its features (colour correct, grade, etc.), we can do the same as we did with the ‘rotopaint’, but in this case we adjust the roto every 10 or 20 frames. We do not need to adjust the roto every frame, as it will follow the match move previously done, so just a few adjustments should be sufficient.

When we have several 3D projections that we want to put together, we can use a ‘Merge mat’ node; if we use a regular ‘merge’ node instead, the quality of the image can decrease and look different.

After seeing these 3D projection techniques, we were asked to practise them using footage of a street provided by the lecturer. For example, we could add something to the wall or floor, change the windows’ texture, colour correct a specific element of the scene, etc. This is the result of my practice:

When 3D projecting on top of a 3D object or artefact, the types of projections we can use are:

  • Patch projection
  • Coverage projection
  • Nested projection (projection inside another projection)

We can find some issues when doing artefact projections that can be solved with the following techniques:

  • Stretching problem: the texture is stretched and not showing in the correct place. This issue can be fixed by adding a second camera projector on top.
  • Doubling problem: the texture is doubled. We can fix it by doing two separate projections.
  • Resolution problem: the texture looks pixelated. We can use a ‘sharpen’ node to solve it; however, a more efficient solution is to add a ‘reformat’ node with its ‘type’ set to ‘scale’, then link it to the ‘scanline render’, which is in turn connected to a second ‘reformat’ node with the resolution of the original plate.
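The ‘sharpen’ node mentioned above essentially behaves like an unsharp mask: it adds back the difference between the image and a blurred copy of it, which boosts edge contrast. A tiny 1D sketch in Python (the pixel values are made up for illustration):

```python
def box_blur(px):
    """Blur a 1D row of pixels with a simple 3-tap box filter
    (edges are clamped)."""
    n = len(px)
    out = []
    for i in range(n):
        left = px[max(i - 1, 0)]
        right = px[min(i + 1, n - 1)]
        out.append((left + px[i] + right) / 3)
    return out

def sharpen(px, amount=1.0):
    """Unsharp mask: original + amount * (original - blurred)."""
    blurred = box_blur(px)
    return [p + amount * (p - b) for p, b in zip(px, blurred)]

row = [0.0, 0.0, 1.0, 1.0]  # a soft edge
print(sharpen(row))          # values overshoot on both sides of the edge
```

This overshoot is also why over-sharpening a pixelated texture can look worse than rescaling it, which is what the ‘reformat’ approach avoids.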

Lastly, we also saw how to build a 3D model taking a 2D image as reference. Using the ‘model builder’ node, we can create and adjust cards following the perspective of the 2D image, to then ‘bake’ this geometry into 3D space. We can add ‘point light’ nodes to set illumination with different intensities, colours, and cast shadows. Another illumination node is the ‘direct light’, which is used as a fill light aimed at a specific point or direction.

Once we finished reviewing this week’s theory, we were also asked to make the roto of the hole in the scene of the Garage project and to remove the markers with patch projections. I made the roto pretty quickly and had no issues with it, but I struggled with the clean-up of two specific markers: for the two markers positioned by the hole in the wall, when I added the roto, the patch made with rotopaint was showing outside the roto boundaries (right on top of this roto), so it was showing the wrong patch.

After asking the professor for some help, he figured out that I had missed the lens distortion nodes at both the beginning and the end of the clean-up set-up (to undistort the scene and then redistort it back).

Another issue I noticed is that the patches added on the floor marks were showing through the roto of the wall. I asked the professor again and found out that this part needs to be merged differently, as it is outside the roto. So I added a ‘merge (stencil)’ just to this part of the clean-up, then a ‘shuffle (alpha-alpha)’, and connected it to the roto’s ‘scanline render’ node. This creates a stencil of the patches taking the roto as reference, so they do not show through the wall.

Final clean-up + roto

I had a lot of trouble with this homework and spent a lot of time trying to figure out why it was not working, but I feel that this struggle was useful to familiarise myself a bit more with, and feel more confident about, the node system used in Nuke.

Categories
Advanced & Experimental Advanced Maya

Week 3: Rube Goldberg Machine Simulation Bake & Texture in Maya

This week, we focused on baking our Bullet simulation so we could proceed to add textures and set up our camera movement.

After the whole Bullet system is built and set up, we need to bake the simulation so the programme creates the animation keyframes for each active rigid body. In order to do this, I selected all the active rigid bodies, then selected ‘Bake Simulation’ in the ‘Edit->Keys’ menu. Once Maya had created the keyframes for each element, and since we no longer need the Bullet set-up, I selected ‘Delete Entire Bullet System’ in the ‘Bullet’ menu so all the Bullet elements are deleted. I also manually animated the background gears with keyframes, since I struggled a bit trying to animate them with Bullet: every time I added a new hinge, the whole animation I had set up stopped working, so it was really time-consuming to adjust it all over again each time.

After baking the simulation, I proceeded to texture my design. I liked the cyberpunk-mixed-with-steampunk look that my machine was getting and decided to add some metal textures such as copper, gold, chrome, and brushed metal, as well as a glass texture on the helix slide, on the top part of the machine, and on the light bulbs. These reflective materials gave me the opportunity to add glow to the balls and to parts of some elements, such as the ring holders of the helix slide, some neons on the finish line, and the filaments of the light bulbs. The following examples inspired the colours, mood, and composition of the scene.

Before adding the textures, I searched for an HDRI on polyhaven.com and downloaded a wood workshop HDRI with low, warm light conditions. I wanted to give the feeling that this machine was made in this workshop from random materials found in it. I also researched textures and references like wood, old gears, and light bulbs:

Wood workshop

I also found a tutorial on YouTube on how to make a glow effect:

https://www.youtube.com/watch?v=E9iIf95BCQ4

The following sequence of rendered previews shows the textures I used:

The light bulbs were modelled and textured later on, as I thought that the space at the end of the base was looking a bit empty and boring. So I modelled them with the idea that they would turn on when the ball hits the finish line planks and a switch is triggered. I modelled the base and outer side of an old-school light bulb and added the filaments inside, which I textured separately to give the glow effect. Also, the glass of the light bulb is doubled to give a thickness and volume effect.

I could not finish the final design this week, as I added more elements than I initially planned and it took me longer than expected, but overall I am very happy with how this is turning out.

Textures and HDR:

  • Base wood texture and planks wood texture – https://polyhaven.com/a/wood_cabinet_worn_long
  • Metal texture with marks – https://quixel.com/megascans/home?category=imperfection&search=metal&assetId=uh4obghc
  • Untreated wood texture – https://quixel.com/megascans/home?category=surface&category=bark&assetId=wghjcggn
  • Vintage number 1 – https://www.freepik.com/free-vector/ornamental-1-background_1138096.htm#query=no%201&position=31&from_view=search&track=sph
  • Vintage number 2 – https://www.freepik.com/free-vector/ornamental-2-background_1138095.htm#from_view=detail_alsolike
  • Wood workshop – https://hdri-haven.com/hdri/repair-facility
Categories
Collaborative

Week 3: Collaborative unit support & moodboard

This week we settled our group and continued to develop a draft moodboard.

This week we sent our blogs with our work to the VR tutor – Ana Tudor – so she could check our style and skills. After her review, she confirmed that the final group was agreed and that we were all in, so we added our group on Padlet.

Since we could not meet the involved lecturers and the studios’ external partners this week to discuss the brief of the project, we decided to make an initial brainstorm of our idea of a dystopian environment related to global warming. Therefore, we added a moodboard in Miro with pictures and notes of our thoughts. We also made a WhatsApp group chat with all the student team members so we can share ideas and communicate easily. I think everyone in our group is really respectful and involved in this project, so we got along with each other easily despite not knowing each other yet.

Initial Moodboard

Our first idea starts from desolation and crumbling buildings, with some small greenery on the ground depicting the passage of time.

Destroyed interiors with moss and small plants growing back

Also, due to the rise of the sea level, we considered the possibility of an underwater world with the destroyed buildings surrounded by seaweed and just a few fish floating around (as most of them are extinct).

Underwater world due to rising of the sea level

We also researched other artists’ work as references for dystopian scenery design.

Reference artists

Lastly, we thought of ways for the user to interact with the story and environment, for example with objects that could trigger memories from the past, giving details of what happened and how the world ended up like this.

User interaction with the environment ideas
Categories
Advanced & Experimental Advanced Nuke

Week 2: 3D Clean-up and 3D Projections

In this class, we learnt how to use 3D projection in Nuke to clean up scenes or add elements with textured cards, rotopaint, rotoscoping, and UVs.

In Nuke, we can use a ‘Project3D’ node to project anything onto a 3D object through a camera. We can use this node with different techniques:

  • 3D Patch with a textured card. We can use a ‘text’ node, image, or texture projected onto a ‘card’ node, which is linked to the ‘scene’ and ‘premult’ nodes and merged with the main plate.
  • 3D Patch with project on mm geo. First, we need to find a reference frame and add a ‘Framehold’ node to freeze this frame. Then, we clone the area using a ‘Rotopaint’ node followed by ‘Roto’ and ‘Blur’ nodes, which are then premultiplied. Then we add another ‘Framehold’ (so it shows across the whole timeline) or, alternatively, we can set ‘Lifetime’ to ‘all frames’ in the ‘Rotopaint’ node; however, the second ‘Framehold’ is the recommended option. Afterwards, we add the ‘Project3D’ node linked to a ‘Camera’, which will be the projection camera, and we add another ‘Framehold’ node to this camera. Finally, we add a ‘card’ node onto which we project the ‘Rotopaint’ job, and then link this ‘card’ to the ‘scene’ that will be merged with the main plate.
  • 3D Patch with project roto. This time, we start with a ‘Project3D’ node as input to the ‘card’ (linked to the camera projector with a ‘Framehold’, connected to a ‘Scanline render’ node). Afterwards, we add and do the ‘roto’ in one or two frames only (and tick ‘replace’). Then, we add another ‘Project3D’ node as input to a second ‘card’ (it must be the same ‘card’ as the first one), which is linked to a second ‘Scanline render’. We can then add a ‘Grade’ node, connected from the main plate to the second ‘Scanline render’, to grade the roto that we have previously created.
  • 3D Patch with project UV. The starting point is a ‘Project3D’ node (linked to the ‘camera’ and the last ‘Scanline render’) connected to a ‘card’. This ‘card’ is first input into the first ‘Scanline render’, which is at the same time connected to a ‘constant’ node with a 1:1 aspect (this will fix the frame for us). Then we can ‘Rotopaint’ the part we need to patch and ‘Premult’. We ‘Reformat’ again to go back to our video’s original resolution. Then we project this onto a ‘card’ that is connected to the second ‘Scanline render’. We ‘Reformat’ the second ‘Scanline render’ again and merge it with the main plate.

To review our final shot after adding these 3D patches, we use a ‘Merge’ node connected to the final output and the main plate, set to ‘difference’.
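The ‘difference’ check above can be thought of as a per-pixel absolute difference: wherever our patched output matches the main plate exactly, the result is black, so only the patched areas light up. A quick sketch with made-up pixel values:

```python
def difference(a, b):
    """Per-pixel absolute difference of two equally sized images
    (flattened to lists of values); identical areas come out as 0
    (black)."""
    return [abs(x - y) for x, y in zip(a, b)]

plate  = [0.2, 0.5, 0.8, 0.1]
output = [0.2, 0.5, 0.6, 0.1]     # one pixel changed by a patch
print(difference(plate, output))  # non-zero only where they differ
```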

In order to see the point cloud generated by the 3D camera tracker in 3D space, we can use the ‘Point cloud generator‘ node. We just need to connect it to a ‘Camera’ and the main plate (source), then ‘analyse sequence’ in the ‘Point cloud generator’ node, and link it to a ‘Poisson mesh‘ node. Alternatively, in the ‘Point cloud generator’ node, we can select all the vertices of the cloud in 3D space, create a group, and select the ‘Bake selected groups to mesh’ option. Another option is the ‘Model builder’ node, which creates a model taking our point cloud as reference. To do this, we connect the ‘Model builder’ to a ‘Camera’ and the main plate or source, then we enter the node and create a ‘Card’ from there. We can place it and drag its corners wherever we wish. We then readjust it through other frames (usually only 1 or 2 frames of adjustment are needed).

This week’s homework consisted of practising all the techniques we saw today: 3D tracking a provided plate, placing the floor and back wall grids, adding cones on the markers, and placing two 3D geometries (all these elements need to be match-moved with the scene’s camera movement).

The following images and videos show the process I followed and the final outcome of my practice.

Final 3D projections practice
Final 3D tracking and matchmove practice

It has been a bit hard to put this 3D tracking together and understand what I am doing and why, as I needed to think in both 2D and 3D space. Once I had the nodes figured out, the rest was really easy to set up. I guess practice and experience are the key to getting the hang of this.

Categories
Collaborative

Week 2: Time Management Workshop, & Intro to Story Shape & Structure

This week, we had two workshops to help us with the time management of our project, as well as an intro to storytelling. We also had our second social gathering to share and discuss interests between the students from all the masters involved.

We started the week with a social gathering at the LCC canteen so we could exchange our views on the topics between all the students involved in this Collaborative Unit. The aim of this gathering was to find people with ideas that we are interested in, to form a project group. I was really keen to get involved in the Dystopian and Utopian project that had been set up on the Padlet board, so I focused on finding people who were seeking to pursue this project. I found two MA VR students – An and Ria – who were interested, and after a long chat we agreed to find the required group members to get this project moving. I then spoke with two of my classmates, Martyna and Jess, to see if they were interested in joining us, since this project would involve a lot of 3D modelling and it would be better to have 3 modellers instead of just one. Lastly, we thought of adding some animations, so Martyna and Jess found two students from 3D Computer Animation who were also interested – Veronika and Gloria – and in this way we consolidated an initial possible group.

Time management workshop

We saw several tools and methods to improve time management, like scheduling our time, using resources to increase concentration and focus (like binaural beats, white noise, etc.), and tracking our work patterns to get to know ourselves and adjust our scheduling accordingly.

I consider myself a night learner, as I feel more productive and inspired at that time. My creative flow is at its best after around 9pm. I am definitely not a morning person, and it is hard for me to focus on creative work at that time. I also prefer to work alone, as I get easily distracted in an environment with other people. Music or podcasts in the background have always been helpful, but it depends on my mood or the activity I need to do.

Storytelling workshop

In this session, we started with the narrative structure of a basic story. Usually, stories are shaped into a 3-act structure:

  • Beginning. Set up or thesis.
  • Middle. Confrontation or anti-thesis.
  • End. Resolution or synthesis.

It is important to differentiate between plot and theme in storytelling. The plot describes what the story is about, whereas the theme describes the context of the story (time frame, aesthetic, etc.). It is also useful to differentiate between text and subtext. The text is the narrative or the actions that take place in the story, and the subtext is how the audience identifies with the story (it does not need to be shown in the actions of the story, but is what is interpreted from what is shown).

The characters and the story shape must have the following in order to be engaging and interesting:

  • Suspension of disbelief. The audience is willing to accept anything that happens in the story (e.g. a talking fish).
  • Character flaw. This makes the character real and problematic, and gives them the opportunity to grow, which can be part of the story development (character arc).
  • Inciting incident. This is an incident that confronts the protagonist with a major dramatic question and makes the story move forward (plot points).
  • Climax. This is the moment when the second act ends and the audience wonders how this is going to be resolved (the character flaw or problem cannot be sustained any more).

These two workshops were very helpful for our future projects. I consider time management a key aspect of any job in any field, but for VFX artists especially, as we will have to deal with really tight deadlines and last-minute problems that need to be resolved quickly and effectively. Storytelling is also a key aspect of this industry: we could be really good VFX artists in the technical aspect, but we need to keep in mind that a sequence of a movie can be very well produced and still not tell an interesting story that keeps the interest of the audience, so it is important to take care of how we tell or transmit a story.

Categories
Advanced & Experimental Advanced Maya

Week 2: Rube Goldberg Machine Modelling & Animation in Maya

In this class, we learnt how to bake the simulation that we had already set up, to then add texture, refine the design of our Rube Goldberg machine, and animate the camera movement of our scene.

Once all our Bullet actions are adjusted and we are happy with the dynamics of the animation, we proceed to ‘bake simulation’ on all the active rigid bodies, so the whole Bullet set-up is removed and converted into keyframes instead. It is also important to select ‘delete entire bullet system’ to get rid of any Bullet set-up left in our outliner.
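Baking boils down to sampling the solver's result at every frame and storing it as a keyframe, which is why the Bullet rig can be deleted afterwards. A small Python sketch of the idea (the falling-ball function is a made-up stand-in for the Bullet solver, not Maya's actual API):

```python
def bake(sample, start, end):
    """Sample a simulated value at every whole frame and return a
    dict of frame -> keyframed value, like a baked animation curve."""
    return {f: sample(f) for f in range(start, end + 1)}

# A made-up 'simulation': a ball dropping under gravity.
def ball_height(frame, h0=10.0, g=0.1):
    return max(h0 - g * frame * frame, 0.0)

keys = bake(ball_height, 0, 5)
print(keys[0], keys[5])  # -> 10.0 7.5
```

Once the values are stored per frame, playback no longer depends on the solver, which is also why baked scenes stop behaving differently between sessions.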

Once the dynamics of our scene are sorted, we can proceed to animate the camera movement of our scene, creating a new ‘camera and aim’, and setting its position and aim at the same time. We can also add texture to our scene and finish building up the final touches to make it look presentable.

This week, I focused on finalising my Rube Goldberg machine’s design and dynamics. I added a different route for the second ball, with a helix slide, a clock gear, and a second finish line. I also refined some of the elements, adding some edge loops, to then smooth them down by pressing ‘3’.

The next step of this project will be to bake the active rigid bodies, so the programme creates the keyframes of the movements set up with the Bullet tool, and then to continue texturing it and setting up the camera movement of the scene.

Categories
Collaborative

Week 1: Collaborative Unit Brief

This week we were briefed on the collaborative project we have to do along with the students of other master’s degrees, like MA VR, MA 3D Computer Animation, MA 2D Animation, and MA Game Design.

The collaborative unit consists of a group project made in collaboration with other students outside our master’s degree, to build a digital outcome together and explore topics outside visual effects.

The research and development of this project has to be documented in our blogs, as it is a self-guided project. We will need to learn to work as a team, demonstrating our organisational, communication, and creative skills. We will also have to demonstrate our role within the group and what we were responsible for, document the process followed, and summarise our experience in a short critical report at the end (around 500 words).

In a separate lecture this week, we learnt how to organise a group, how we should set some ground rules that apply to everyone and are agreed by everyone, how we need to structure our blogs, how to record the meetings we have with our group, and about the regular crits we will have with our lecturer.

For this week, we were also asked to add our details to a Padlet board created with the different available topics (we could also add our own topic or change the existing ones).

My chosen topic – Utopia and Dystopia

This subject caught my attention because I love steampunk, cyberpunk, and post-apocalyptic scenery, as well as texturing scenes and modelling 3D objects to include in these kinds of environments. I also enjoy imagining how a regular place would look after being affected by the passage of the years, weather conditions, human interaction, etc., and thinking about the details that would make it more ‘realistic’ or convincing to the audience’s eye.

Categories
Advanced & Experimental Advanced Nuke

Week 1: 3D Tracking in Nuke

In this first class, we started to dig into the 3D space in Nuke for the first time. We learnt how to correct the camera lens distortion of the scene and how to use 3D tracking to add geometry or texture to a scene.

In order to change the distortion of an image depending on the type of lens effect desired, we can use a ‘Lens distortion‘ node. One of the options is the automatic one, where the programme analyses the scene, detects its horizontals and verticals, and corrects the distortion accordingly. Alternatively, we can set the horizontals and verticals of the scene manually, and then ask the programme to solve the scene distortion following those lines we have created. Another way to change the distortion of a scene is to use an ‘STMap‘ node instead. This node is based on a 2-colour map of the scene, created by adding a ‘shuffle’ node set to shuffle forward into red and green. After the shuffle, we can add the ‘STMap’ node and set its UV channels from ‘RGB’ to ‘RGBA’; this way we can add distortion to the scene. We can also remove the distortion using the same ‘shuffle’ node, but set to shuffle backwards instead.
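An STMap is essentially a per-pixel lookup table: the red and green channels of each pixel store the (u, v) position in the source image to sample from, so warping the map warps the image. A minimal nearest-neighbour sketch in Python (the 2×2 image and the horizontal-flip map are made-up values, and Nuke's bottom-up v convention is ignored for simplicity):

```python
def apply_stmap(src, stmap):
    """src: 2D list of pixel values; stmap: same-sized 2D list of
    (u, v) pairs in 0..1 telling each output pixel where to sample
    from in src (nearest neighbour, no filtering)."""
    h, w = len(src), len(src[0])
    out = []
    for row in stmap:
        out.append([src[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
                    for (u, v) in row])
    return out

src = [[1, 2],
       [3, 4]]
# An STMap that flips the image horizontally:
flip = [[(0.75, 0.25), (0.25, 0.25)],
        [(0.75, 0.75), (0.25, 0.75)]]
print(apply_stmap(src, flip))  # -> [[2, 1], [4, 3]]
```

A lens-distortion STMap is the same idea, just with the (u, v) values pushed radially instead of flipped.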

After this, we saw how to create geometry in 3D space, such as spheres, cubes, cards, etc. In order to import or export geometry, we can use the ‘ReadGeo’ (import) and ‘WriteGeo’ (export) nodes. We can also transform this geometry using a ‘TransformGeo’ node, or change the texture/surface features, like specular or transparency, with a ‘Basic Material’ node. Once the geometry is set, we can also add illumination to the scene with a ‘Light’ node, setting more or less intensity, direct or indirect light, and the colour of the light. The ‘Sharpen’ node can also be used to improve the image details so Nuke can read it better (for tracking purposes).

Since all these settings make our project heavier and longer to render, we can ‘Precomp’ a part of our node graph that is already finished, so Nuke does not have to recalculate all those features on that side every time we render.

Following on, we also studied the way to jump from a 2D scene to a 3D space using the ‘Scanline Render‘ node. By pressing ‘tab’ on the keyboard, we can jump between 2D and 3D in Nuke. We can also add a ‘Camera‘ node to decide the camera movement and the framing of the scene we want.

Lastly, we saw how to 3D track a live action shot so we can add objects or texture in the 3D space:

  1. Using a ‘Camera Tracker‘ node, we set up the type of camera lens used to film the shot and fill in the rest of the features of the scene (such as range, camera motion, lens distortion, focal length, etc.). We could also leave this information out, so the programme just tracks it automatically.
  2. Once everything is set, we track our scene so the programme detects and creates several tracking points along the scene (we can choose how many tracking points we want the programme to create).
  3. Once the programme has finished creating the tracking marks, we can see the solve error of the track; if it is over 1, it is recommended to do the tracking again, as it will give problems later on. If this number is below 1, we can then delete the unsolved or rejected tracking marks.
  4. Next, we select a specific point in the centre of the scene and set it as the origin point of the shot.
  5. Then we select the track marks that form the ground of the scene and tell the programme that this is our ground plane.
  6. After our scene is tracked and properly set, we can export this ‘scene map‘ keeping the output linked to our 3D tracker node, so every change we make is reflected in the scene map created. We could also export the ‘camera‘ only, but with the output unlinked, so the changes we make in the 3D tracker node are not reflected in this ‘camera’ export.
  7. Finally, we can add geometry, cards, etc. to our scene and place them following the ‘camera cloud‘ created in the exported scene. These elements will now follow the camera movement and 3D space of the scene.
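Step 3 above amounts to filtering out tracks whose error is too high. A small sketch of that culling logic in Python (the track data is made up, and the threshold of 1 follows the lesson's rule of thumb):

```python
def cull_tracks(tracks, max_error=1.0):
    """Keep only solved tracks whose error is under the threshold;
    tracks is a list of dicts like {'solved': bool, 'error': float}."""
    return [t for t in tracks if t["solved"] and t["error"] < max_error]

tracks = [
    {"solved": True,  "error": 0.3},
    {"solved": True,  "error": 1.7},   # high error -> rejected
    {"solved": False, "error": 0.2},   # unsolved   -> rejected
]
good = cull_tracks(tracks)
print(len(good))  # -> 1
```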

As our assignment for the week, we were asked to play around with what we learnt today and to try to add geometry and card planes to the shot provided, using the ‘camera tracker’ node.

3D tracked scene with planes and geometry included

I was a bit intimidated by 3D spaces and Nuke’s node system; however, in the end I found it quite straightforward and easy to set up and control.

Categories
Advanced & Experimental Advanced Maya

Week 1: Rube Goldberg Machine Modelling & Animation in Maya

In the first week of term 2, we had our first contact with animation basics, trying to animate two bouncing balls made of different materials. We also started to design our first project of the term, the Rube Goldberg machine.

Firstly, we made a quick model of a basic staircase as the base of our first animation. Then, we created two spheres to start our bouncing animation. The first ball is supposed to be made of rubber; therefore, the animation needs to show a high bounce on each step of the staircase. The second is made of metal, so it should look heavier and less bouncy. We set the basic keyframes for each jump, and then adjusted the animation with the ‘Graph Editor’. With this last tool, we can see the curves of the animation, so we could tweak each movement to make it look more realistic.
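The rubber/metal difference comes down to how much height each ball keeps per bounce. A quick Python sketch of the decaying peak heights we were trying to match in the Graph Editor (the 0.8 and 0.3 ratios are assumed illustrative values, not measured ones):

```python
def bounce_heights(h0, height_ratio, bounces):
    """Peak height after each bounce: every peak is a fixed fraction
    of the previous one (for a restitution e on velocity, the height
    ratio is e squared)."""
    heights = []
    h = h0
    for _ in range(bounces):
        h *= height_ratio
        heights.append(round(h, 3))
    return heights

print(bounce_heights(1.0, 0.8, 3))  # rubber: -> [0.8, 0.64, 0.512]
print(bounce_heights(1.0, 0.3, 3))  # metal:  -> [0.3, 0.09, 0.027]
```

Matching this geometric decay (plus sharp velocity reversals at each contact) is what makes the curves in the Graph Editor read as rubber or metal.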

Rendered animation

After understanding the principles of manual animation, we dived into ‘Bullet Physics’ in Maya. This plug-in specialises in the interaction between the geometry of a 3D scene. In order to practise with this tool, we were assigned to create a Rube Goldberg machine, using basic shapes that would interact with each other.

Before using this tool, the programme needs to be set up so the ‘Bullet’ tab shows in the programme’s menu. Once everything is set, we can start designing our machine. The first thing that popped into my mind when we were introduced to this project was a creator that I follow on Instagram known as ‘Enbiggen’. He specialises in creating these 3D Rube Goldberg machines to reproduce the music of any known song, movie soundtrack, etc. I have attached two of his creations that inspired me:

https://youtube.com/shorts/KCSvlCAr-CY?feature=share
https://youtube.com/shorts/Wfp3Gfa9MCM?feature=share

As a first idea of my Rube Goldberg’s machine, I came up with the following sketches as possible designs:

Once the machine was planned, I continued to build it up in Maya. After designing and placing the basic geometry in the 3D scene, I set each piece as an ‘Active rigid body’ or a ‘Passive rigid body’, depending on whether I wanted the polygon to act as a dynamic object or as a static object. Once this was set, I needed to rewind the animation to the beginning (so Maya calculates how the objects would interact with each other), and then adjust them as needed. This needs to be precise, as the programme can be very picky with these calculations and can produce errors when setting up each object’s Bullet action. Also, as I wanted to make objects spin, I added constraints to some of the polygons using the ‘hinge’ option, placed in the middle of the object (‘Rigid Body Constraint’ tool).

I am struggling a bit with the set-up and adjustment of the actions, as whenever I closed the programme and reopened the scene later on, the same actions that I had set up previously were reacting and behaving differently. I flushed the playback cache to see if this helped, but it kept doing the same. I had the same problem on both the university computer and my personal computer.

Categories
Showreels

Term 1 Showreel