In this lesson, we learnt how to stabilise a shot using 2D tracks.
With a ‘2D Track’ node, we can track the camera movement of a scene frame by frame to match it with another element. Then, with a ‘Transform’ node, we can change the translation, rotation, and scale of each frame to stabilise it. We can also create several transform nodes from the main ‘2D Track’ node to automatically stabilise the scene, match-move it, or remove or add jitter.
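The stabilisation idea can be sketched in a few lines of Python. This is a simplified illustration with invented tracking data, not how Nuke implements it: the transform simply applies the opposite of the tracked drift, so the feature stays locked to its first-frame position.

```python
# Hypothetical per-frame positions of one tracked feature (invented data).
track = [(100.0, 50.0), (103.0, 48.0), (98.0, 53.0)]  # (x, y) per frame

ref_x, ref_y = track[0]  # the first frame is the stabilisation reference

# A stabilising transform applies the opposite of the tracked drift,
# so every frame's feature lands back on the reference position.
stabilise = [(ref_x - x, ref_y - y) for x, y in track]
```

Applying these offsets as the transform's translate values is what cancels out the camera shake.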
Sometimes the scene has too much noise or grain and the 2D tracker is not able to track it properly. In this case, we can use a ‘Denoise’ node to reduce the image noise or grain so the camera tracker does not struggle to read the pixel sequence between frames. We can also use ‘Laplacian’, ‘Median’, or ‘Grade’ contrast corrections to manage the grain.
As usual, it is important to set up a Quality Control (QC) backdrop so we can check that the tracking, and any rotoscoping added, has been done properly.
This week’s assignment is to stabilise the iPhone shot and to add the phone’s screen animation with ‘Rotoscoping’ and ‘Corner Pin’ nodes.
iPhone comp improved: I tried to improve the fingers roto using the green despill setup that the professor sent to us, and also improved the screen animation using the ‘Curve Editor’ to soften the starting and end points of the movement.
I struggled a bit with the fingers rotoscoping, as when the fingers move faster it is hard to roto the motion blur. The green despill setup we got from the professor helped a bit, but I still do not fully understand how it works, so I am sure that I could improve this comp once I learn how the green despill technique works.
In this session, we revised how to create blend shapes to animate facial expressions and how to create a ‘rig’ or ‘skeleton’ to animate the head and the mouth of our model.
Using the ‘Shape Editor’ tool, we can create a blend shape, or shape variation, in order to set the facial expressions of our model. On each blend shape, we need to add ‘target’ points with which we can create our movements or reshapes, for example the eyes opening and closing, the mouth smiling, or the eyebrows frowning.
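The maths behind a blend shape can be sketched roughly like this. This is a hypothetical Python example with invented vertex data, not Maya’s actual implementation: each target stores a reshaped copy of the mesh, and a weight slider mixes the base mesh towards it.

```python
# Invented 2-D vertex positions for a tiny "mesh" (illustration only).
base = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]          # base vertex positions
smile_target = [(0.0, 0.0), (1.0, 0.2), (1.0, 1.0)]  # reshaped 'smile' pose

def blend(base, target, weight):
    """Linearly interpolate each vertex towards the target by 'weight'."""
    return [
        (bx + weight * (tx - bx), by + weight * (ty - by))
        for (bx, by), (tx, ty) in zip(base, target)
    ]

half_smile = blend(base, smile_target, 0.5)  # the slider at 50%
```

Setting the weight to 0 gives the base mesh back, and 1 gives the full target, which is essentially what the blend shape slider does.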
In order to animate the head and the mouth opening, we created a ‘rig’ or ‘skeleton’ that determines the joints of the neck and jaw. After setting up the rig, we bound the skin of the model to it and painted the skin weights to define the influence areas of our model (the parts that will be most influenced by the rig movement).
Lastly, we created the model’s set of teeth and added them to the rig influence.
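The painted weights idea can be sketched with a simplified linear-blend calculation. This is hypothetical Python with invented joint movements (and ignoring rotations): each vertex follows a weighted mix of its joints’ movements, which is why repainting the weights changes how far the mouth area follows the jaw.

```python
# Invented joint movements: the jaw drops to open the mouth, the neck stays.
jaw_delta = (0.0, -1.0)
neck_delta = (0.0, 0.0)

def skin(vertex, weights):
    """weights = (w_jaw, w_neck); painted weights should sum to 1.0."""
    w_jaw, w_neck = weights
    dx = w_jaw * jaw_delta[0] + w_neck * neck_delta[0]
    dy = w_jaw * jaw_delta[1] + w_neck * neck_delta[1]
    return (vertex[0] + dx, vertex[1] + dy)

chin = skin((0.0, 2.0), (1.0, 0.0))   # fully jaw-weighted: follows the jaw
cheek = skin((1.0, 3.0), (0.3, 0.7))  # mostly neck-weighted: barely moves
```

Painting weights in Maya is essentially adjusting those per-vertex values until the mesh deforms naturally.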
Image gallery: Rigging · Jaw painted weights · Head painted weights · Neck painted weights · Mouth opening with adjustment of painted weights (×3) · Teeth wireframe · Teeth model · Facial expressions sequence · Mouth opening sequence
I struggled a bit with painting the weights to open the model’s mouth. I had to adjust my mesh with the ‘soft brush’ and ‘relax’ tools so that it started to respond appropriately.
In this lecture we learnt how to structure a critical report or thesis, what type of language to use, and how to handle referencing and citations. We also went through the process to follow when developing the investigation of our topic and analysed the different methods used in academic writing, such as paraphrasing.
When researching information for our critical report, we will need to use trusted sources like peer-reviewed texts (books or scholarly articles) or recognised academic material online (like academic journal articles on the UAL Library or Google Scholar).
If we need to use any short passage from these texts, it will need to be referenced. When quoting or paraphrasing, we will use the Harvard referencing system to provide a list of citations and references at the end of our critical report or thesis. Citations can be in-text (with quotation marks around the quote) or, if longer than 40 words, set apart as a block indented around 1 cm from the main body on each side. We will use formal language and avoid personal language (like ‘I’, ‘my opinion’, ‘I think’, etc.).
In order to develop our argument, we can use our own point of view, but it needs to be supported by evidence. The argument will be structured with an introduction, main body, and conclusion, and in longer texts, such as a thesis, we can structure the sections by adding headings.
The steps to follow to develop an academic argument are:
State the main point and argument to prove (the topic) in the introduction.
Analyse the important reasons for your argument (evidence that supports the main point or contention).
Identify the possible objections (evidence against the main point).
Research and gather evidence that supports the main argument.
Structure and connect the paragraphs so they lead logically to the conclusion.
State a clear conclusion, bringing together the statement and supporting points.
When paraphrasing, we need to reword the author’s idea using our own voice. We use paraphrasing to avoid plagiarism, the overuse of quotes, and problematic language, and to shorten long quotes. Summarising is often confused with paraphrasing; however, summarising is used when we want to state the overall or most relevant points of an idea in our own words.
To practise paraphrasing, we were told to paraphrase the following passage in our own words:
The authenticity of a documentary is ‘deeply linked to notions of realism and the idea that documentary images bear evidence of events that actually happened, by virtue of the indexical relationship between image and reality’
Honess Roe, A. (2013) Animated Documentary. Basingstoke: Palgrave Macmillan.
In my own words, this text would sound like this:
According to Honess Roe (2013), the authenticity of a documentary is connected to what we understand as ‘reality’ and to the fact that the images in a documentary are linked to this ‘reality’, since they show events exactly as they happened.
In this lecture, we learnt how to colour correct a sequence, the different colour spaces of a file, and how to import and export footage.
We saw how to use ‘Grade’, ‘ColourCorrect’, ‘Toe’, and ‘Blackmatch’ nodes to correct the colour of a sequence. These nodes can be used to correct specific parts of a sequence using rotos or to colour grade an alpha. Alphas need to be premultiplied to be merged over the background plate; however, some alphas already come premultiplied, so in that case we add an ‘Unpremult’ node first, then the ‘Grade’ and/or ‘ColourCorrect’ nodes, and then a ‘Premult’ node again.
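A tiny numeric sketch (hypothetical values, plain Python) shows why the grade goes between ‘Unpremult’ and ‘Premult’: an additive offset applied straight to premultiplied values also brightens the semi-transparent edge pixels, instead of only grading the underlying colour.

```python
# A premultiplied pixel stores rgb * alpha. Invented example values:
r, a = 0.8, 0.5       # unpremultiplied red value and its alpha
premult_r = r * a     # what a premultiplied plate stores: 0.4

offset = 0.1          # a simple additive 'Grade' offset

# Wrong: grading the premultiplied value directly brightens the edge too.
wrong = premult_r + offset

# Right: Unpremult -> grade -> Premult, so only the colour is offset.
right = ((premult_r / a) + offset) * a
```

The two results differ on any pixel where alpha is below 1.0, which is exactly where the glowing-edge artefact appears.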
It is also important to take into consideration the codec, colour space, and linearisation of the imported file since, depending on what we are going to use the file for, we will need either more information preserved in the file or a smaller file size. The files in a film production can be shared with compositors as LUTs, CDLs, or graded footage. We also discovered the ‘OCIOColorSpace’ node, which is used when the footage provided has already been graded.
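As a rough illustration of what linearisation means, here is the standard sRGB transfer function sketched in Python (this is the generic sRGB formula, not necessarily the exact conversion applied to any particular footage):

```python
def srgb_to_linear(v):
    """Convert an sRGB-encoded value (0..1) to scene-linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

mid_grey = srgb_to_linear(0.5)  # noticeably darker once linearised
```

Compositing maths (merges, grades) behaves predictably on linear values, which is why Nuke linearises footage on read and re-encodes it on write.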
And lastly, we saw proper ways to build up a node map for grading and colour correcting footage, separating the primary and secondary colour corrections, and then correcting the shadows as the last step. This way, if more amendments are requested, we can make the changes more quickly.
The assignments of this week were to colour correct an airplane alpha to match its background and to carry on making colour corrections in the previous mountains video using the roto created last week.
We were also asked to plan our air balloon sequence, which we will keep building up until the end of Term 1. My main idea for my air balloon video is to add a dark style, with neons and glowing lights, and to add mist and thunder around the mountains.
This week, we learnt how to add a UV texture to an organic model, a human face in this case, using both Maya and Mudbox.
In Maya, we imported the skin texture to the project. Then we created a UV map from the model in the ‘UV Editor’ and, using the ‘grab’ tool, we started to adjust the UV map to the imported texture. Since the imported texture was designed for models with open eyes (ours had closed eyes), we exported it to Mudbox and, using the stamp tool (similar to the Photoshop stamp tool), we edited the texture to match the closed eyes of our model. Once finished, we imported the edited texture back into Maya and re-adjusted it. Since the texture looked completely flat, we added a bump map using the ‘Hypershade’ to give the skin its pores, marks, and facial lines. Finally, we also added a UV texture to the eyes and, using the ‘Animation Editor’, we opened the eyelids of the model so we could see the eyes’ texture.
Image gallery: UV map · Skin texture adjustment · Skin texture with open eyes · Eyes skin area adjusted to closed eyes · Eye animation (open/closed) using blend shapes · Eyes texture · Final texture sequence
I had some issues with the UV map, as my model’s mesh needed to be adjusted in the middle part of the nose (I had some triangulated mesh there, so I needed to make it quad-based, following the rows and columns of squares, to make it more symmetrical). Once adjusted, the UV map started to respond better and I could fit the skin texture more accurately.
In this week’s lecture, we discovered how animation can be political and influence or persuade the audience, and we also discussed whether an animated documentary can be considered an actual documentary or not.
It is possible to persuade or influence the audience through social media, broadcast news and events, film and animation, and television. There are many media platforms that can be used for this: broadcast, print media, mainstream film and animation, independent film and animation, games, podcasts, etc. These influential messages in moving image do not necessarily have to be political; they can also be subliminal or masked content, propaganda, persuasive commercials, documentaries, or personal struggle (observation, experience).
Animated documentaries are used to explain, illustrate, or emphasise a story. They can be recorded or created frame by frame, and they are presented as documentaries by their producers and/or received as documentaries by the audience, festivals, or critics. This type of animation offers new, alternative ways to see the world, as it shifts and broadens the limits of what and how we can show reality. Its authenticity depends on how specific the images that compose it are, and it is linked to notions of realism (a story that was told, not an imaginary one). There is some controversy around animated documentaries, as some people argue that they cannot be classed as ‘documentaries’ because they lack objectivity.
I found a good example of this: an animated documentary called Nowhere Line: Voices from Manus Island by Lukas Schrank.
**Award Winning** CGI 2D/3D Documentary: “Nowhere Line: Voices from Manus Island” – by Lukas Schrank (TheCGBros, 2016)
This documentary is based on a phone call made to two asylum-seeking men detained at the Australian Offshore Processing Centre on Manus Island.
Since no images could be recorded in this case, animation can illustrate the story and seek an emotional connection with the audience. It does not need to be considered fake information, as the images are an interpretation of the facts narrated by these two men. Moreover, it can be very informative and helpful in putting the story together and keeping the attention of the audience. I think animated documentaries are a very useful tool to draw the audience’s attention to important matters like this one on Manus Island and to make them engage and empathise with the story.
References
TheCGBros (2016). **Award Winning** CGI 2D/3D Documentary: “Nowhere Line: Voices from Manus Island” – by Lukas Schrank (online). Available at: https://www.youtube.com/watch?v=_D8B0o1aRcs [Accessed 8 November 2022]
In this lecture, we saw the technique used to track the camera movement in a scene and how to combine or premultiply several sequences.
In order to track the movement of a scene, we can add tracking points in Nuke that will detect the camera movement. This is a useful tool for rotoscoping since we will not have to adjust the roto in every single frame because of the camera shake. Sometimes it is important to add several tracking points as the camera movement will be different in the foreground, middleground, and background because of the motion parallax.
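The idea of transferring a tracker’s movement to a roto can be sketched like this (a simplified Python illustration with invented coordinates, not Nuke’s implementation): the roto drawn on the first frame is offset on every later frame by however far the tracked feature has moved since then.

```python
# Invented data: a tracked feature's position on two frames,
# and a roto shape drawn on frame 0.
track = [(200.0, 120.0), (204.0, 118.0)]
roto_frame0 = [(190.0, 110.0), (210.0, 110.0), (200.0, 130.0)]

tx0, ty0 = track[0]
dx, dy = track[1][0] - tx0, track[1][1] - ty0   # tracker translate delta
roto_frame1 = [(x + dx, y + dy) for x, y in roto_frame0]
```

This is why linking the tracker’s translate to the roto saves adjusting the shape on every single frame of a shaky shot.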
On another note, we can also combine several elements, like rotos, together in Nuke with a ‘merge’ node. However, it is important to keep in mind that the alpha channel value always has to be between 0.0 and 1.0. This can be sorted by changing the way the layers interact with each other, with operations like ‘screen’, ‘over’, ‘max’, etc. ‘Channel merge’ nodes can also be used for this, but they are not as reliable as ‘merge’ nodes.
When layering scenes, there is a tool used in most cases called ‘Premult’. This tool premultiplies the RGB values by the alpha so the two layers are visible at the same time. It is also important to combine the ‘Premult’ node with a ‘Copy’ node to add the alpha to the background.
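The ‘over’ operation mentioned above can be sketched numerically (hypothetical pixel values, plain Python): with premultiplied inputs, the result is the foreground plus the background scaled by whatever the foreground’s alpha leaves uncovered.

```python
# Invented single-channel pixel values.
fg_r, fg_a = 0.3, 0.6   # premultiplied foreground red and its alpha
bg_r = 0.9              # background red

# The classic 'over' merge on premultiplied values:
# result = A + B * (1 - alpha_A)
over_r = fg_r + bg_r * (1.0 - fg_a)
```

With alpha at 1.0 the foreground fully replaces the background; with alpha at 0.0 the background shows through untouched, which is why alphas outside the 0.0–1.0 range break the composite.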
The assignment this week was to rotoscope the bridge from the running man’s video and the mountain from the air balloon project using tracking points.
Image gallery: Tracker point · Translate values of tracker transferred to roto · Final roto · Final set up
Image gallery: Running man final roto · Mountain roto · 3 tracker points added to take depth into consideration · QC with grey background · QC with green overlay · Final set up
In this lecture, we discovered the basics of organic modelling, using the Quad Draw modelling workflow to create a head model.
First, we downloaded a head model from wiki.polycount.com to use as reference. Then, in Maya, we imported our head and set it up as a live, ‘sticky’ surface. Once this feature was live, we used the Quad Draw tool to create a mask covering only half of the head. Then we refined the eyes, mouth, nostrils, ears, and the rest of the head’s traits. Once half of the face was fully set up, we used ‘Duplicate Special’ to create the other side of the face. We also created the eyeballs and mirrored one of them, so any changes made on one side are reflected on the other side too (we also used the topology tools to change some of the facial features).
The following sequence of screenshots shows my process to develop my 3D head model:
Image gallery: Face mask made with the Quad Draw tool · Making sure half of the face is well shaped to have good symmetry · Addition of the neck mask · First stage of the face in different perspective views · First stage of the face wireframe · Ear · Cheekbone, eye bags, and expression lines to give more realism · Nostril cavities · Top view to see shadowed details · Perspectives of the retouches previously mentioned
The most challenging part of the model was the ears, as they are pretty irregular; however, I think I achieved a good result in the end. I also refined the face and gave it more angular features so the model had a more characteristic look.
In this class we analysed the types of abstract or experimental work used in visual effects and animation.
Abstraction is not related to objects but to expressing something through colour, forms, light, shadows, movement, sound, etc.
Formative abstraction is focused on the manipulation of basic visual fundamentals (colour, form, light, space, and texture, along with movement, rhythm, and sound) with the aim of experimenting with new methods or techniques to reach different results. It is important to look at experimental work, as it is full of technical advancements discovered through experimentation, and it covers a vast variety of concepts, models, and approaches. In this lesson we saw several video examples of experimental visual effects made directly on 35mm film stock, by exposing it several times or painting directly onto it.
Conceptual abstraction is the abstraction or juxtaposition of narrative structures or storytelling tools to provoke an emotional response. This process can be used in independent or non-dialogue films that do not have a narrative and express themselves in a more metaphoric manner.
There are different ways to interpret abstraction:
Categorisation – genre and sub-genre, setting, mood, theme.
Form and function – meaning, format, presentation.
Process – techniques, material, technologies applied, technique-message relation.
The assignment for this week is to pick a short film that is considered experimental and to analyse it following the contents outlined in the lecture, so I decided to analyse the short film ‘Juniper’, directed by Robert Pereña.
This short film is a conceptual abstraction in which the stop motion was made from scrap paper, trash, and art supplies, mixed with rotoscoping techniques. ‘Juniper’ looks into the effects of pollution on the environment through several artworks, creating an expressive and unique style.
This type of stop motion animation is very crowded and, therefore, very expressive, as there is a lot happening on each frame (a different artwork per frame). The viewer needs to watch the film several times to fully absorb all the information put into it. The film also evokes the feeling of suffocation that we would feel from the pollution in our environment. One limitation of this animation style is that it is not suitable for all audiences: there was a warning at the beginning of the video that it could trigger seizures in people with epilepsy.
The process followed has a clear relation to the message, as they created a stop motion sequence using scrap paper and all sorts of art supplies mixed up to create this ‘polluted’ environment. The use of an unfixed colour palette and unfixed shapes transmits the chaos happening around and inside the girl’s rotoscoped silhouette. Also, the fact that each frame is a different artwork gives the film the twitching movement that makes it so unique.
At the beginning, the pace is a bit slower, as the girl is sitting down and staring at the leaf that suddenly becomes a butterfly. The girl’s face and body features are visible at this stage, and the colours are mainly purple and black, transmitting a calmer environment. However, when the butterfly enters the girl’s head, her features become abstract, and the pace of the film increases as she starts to move through the scenes. The colour palette is indefinite at this point, and an explosion of colour and movement is added to the scene. Afterwards, the girl’s silhouette recovers its traits, and the girl raises her fist as a symbol of rebellion against the noisy and polluted city. Lastly, the background clears to white and the girl’s silhouette starts to stand up and grow like a tree. The music adds tension to the film, as it is a dark, low sound playing in the background along with the sounds of the wind and some minimalistic, high, dissonant sounds.
References
Pereña, R. (2019). Juniper – Experimental Short Film (Stop Motion Animation/Rotoscoping) (online). Available at: https://www.youtube.com/watch?v=WsRkDJYKOwE [Accessed 1 November 2022]
In this class we discovered the basics of rotoscoping in Nuke.
Rotoscoping is used to create alpha channels, or ‘mattes’, that match the footage’s motion. With these, we can change the subject’s background or create different effects with layering.
In Nuke, we learnt basic rotoscoping, using ‘Beziers’ to create the alpha channel and feathering to soften its edges.
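Feathering can be sketched as a small blur on the alpha edge. This is a simplified one-dimensional Python illustration with invented values, not Nuke’s actual falloff: averaging each alpha sample with its neighbours turns a hard 0-to-1 edge into a gradual ramp.

```python
# A hard roto edge along a 1-D slice of the alpha channel (invented values).
alpha = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

def feather(a):
    """Average each sample with its immediate neighbours (edges clamped)."""
    out = []
    for i in range(len(a)):
        lo, hi = max(0, i - 1), min(len(a), i + 2)
        out.append(sum(a[lo:hi]) / (hi - lo))
    return out

soft = feather(alpha)  # the hard step becomes a gradual 0 -> 1 ramp
```

The wider the averaging window, the softer the matte edge, which is roughly what dragging the feather handle outwards does.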
This is the matte I created from the running man video the professor sent to us. It is made in parts, starting from the running man’s head down to the legs.
Image gallery: Head roto · Right arm roto · Left arm roto · Body roto · Right leg roto · Left leg roto · Quality check
Final roto
Rotoscoping can be a tedious job in my opinion, but with practice and experience it could become a quicker and more pleasant task, as well as a rewarding experience given the final result achieved.