
Week 10: Real Scenarios in Production and Balloon Festival Comp Review

In this lesson we analysed the different scenarios we can face in production as VFX compositors and then we reviewed the final composition of our Balloon Festival project.

In production for film, the stages followed are:

  1. Temps/Postviz. Temps are low-quality previews of how the movie is going to look, while postviz is a higher-quality preview (there are even companies that specialise in this).
  2. Trailers. These show the shots of the movie that are already finished to a good level of quality.
  3. Finals. The final product of the film. It is usually exported as EXR sequences plus two different QuickTimes with specific settings, ready for review.
  4. QC. Quality control of the final product is done by the VFX Supervisor, who decides which version is the best one to send to the client.

There is specific project-management software that improves organisation and communication within the team, such as Google Docs and Sheets, Ftrack, and Shotgun. These tools are useful for publishing final scenes that are ready for review, requesting tasks, arranging meetings, etc.

The production roles in a film are:

  • Line Producer. The person below the Producer who keeps in touch with, and checks in on, the VFX supervisor, director, editor, internal producers, producers, and artists. They manage the client, the timing, and the budget.
  • VFX Producer. This person makes sure that the studio completes the project, meets the deadline agreed with the client, and stays within the budget set.

A way to share and review a project's development is to hold VFX dailies. This is an important meeting to make sure everyone is moving in the same direction and to receive feedback from the film director, the client, the producer, and/or the supervisor. It is usually minuted and recorded, and what is agreed there cannot be changed later outside that meeting.

Once we have finished the scene we were assigned, we will publish it so the lead or VFX supervisor can review it. A good habit to develop is to make sure that what we publish is final and does not contain any errors that would make the review difficult, as schedules and deadlines in film tend to be tight. Before publishing a scene, it is good practice to follow this tech-check process (a small scripted example follows the list):

  • Check notes for the shot
  • Compare new version with old one
  • Check editorial (compare the shot the editor sent as a reference with the original)
  • Check if there is any retime in the shot
  • Check that our shot has the latest ‘Match move’
  • Write in the comments if we have any personal notes
  • If we have any alternatives for one shot, inform the line producer before adding this to our published scene.
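
Part of this tech check can be scripted. Below is a minimal sketch, meant to be run from Nuke's Script Editor, that lists every Read node's file path and frame range so an out-of-date or short source stands out before publishing; the expected frame range is a made-up example.

```python
# Minimal pre-publish sanity check, run from Nuke's Script Editor.
# 'Read', 'file', 'first' and 'last' are standard node/knob names;
# the expected range below is a hypothetical shot range.
import nuke

EXPECTED_FIRST, EXPECTED_LAST = 1001, 1100  # hypothetical shot range

for read in nuke.allNodes('Read'):
    first = int(read['first'].value())
    last = int(read['last'].value())
    path = read['file'].value()
    status = 'OK'
    if first > EXPECTED_FIRST or last < EXPECTED_LAST:
        status = 'CHECK RANGE'
    print('{0}: {1} [{2}-{3}] {4}'.format(read.name(), path, first, last, status))
```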

Balloon Festival Comp

Once we had analysed the different scenarios in VFX production, we proceeded to review the final composite of the balloon festival project.

For this project I modelled my air balloon in Maya (as shown in Week 3: Maya Modelling Tools Overview Part 2 – Air Balloon and Week 4: UV Set Up & Texturing in Maya).

During the Maya lectures, we learnt how to model a 3D hot air balloon and animate it with a simple 360° spin. Then, using this and a mountain shot provided by the professor, we were asked to composite a short sequence for a ‘Balloon Festival’. There were no rules; the aim was just to put into practice everything we had learnt and to have fun with the compositing.

Since I really enjoy designs with an 80s neon style, where a dark background contrasts strongly with neon colours, I decided to focus on this theme. I started by colour correcting the scene, as I wanted a night ambience and the plate was shot in plain daylight.

I tried colour correcting the scene following a tip the professor gave us in class about separating the colour correction process into primary colours, secondary colours, and shadows. I rotoscoped some sections of the mountains and colour corrected them separately to create a bit more depth, trying to avoid a ‘flat’ look. Then I checked how it would look with a grey background and refined the roto.
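
A minimal scripted sketch of that idea is shown below: a primary Grade over the whole plate, a secondary Grade limited to a mountain roto through the mask input, and a final Grade for the shadows. The node classes are standard, but the knob values, the wiring, and the ‘maskChannelInput’ knob name are assumptions for illustration; in practice I built this in the node graph by hand.

```python
# Sketch of a primary / secondary / shadows grading chain (illustrative values).
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')        # hypothetical plate path
mountain_roto = nuke.nodes.Roto()                     # shapes drawn by hand in the viewer

# Primary correction: pull the whole plate towards a night look.
primary = nuke.nodes.Grade(multiply=0.35, gamma=0.85)
primary.setInput(0, plate)

# Secondary correction: only inside the mountain roto, via the mask input.
secondary = nuke.nodes.Grade(multiply=0.6)
secondary.setInput(0, primary)
secondary.setInput(1, mountain_roto)                  # mask input
secondary['maskChannelInput'].setValue('rgba.alpha')  # assumed knob name for the mask channel

# Shadows pass last, so later notes can be addressed quickly.
shadows = nuke.nodes.Grade(black=0.01, gamma=0.9)
shadows.setInput(0, secondary)
```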

I reformatted the sky video to fit the size of the main comp and then retimed it because, being originally a time lapse, it was playing far too fast. Then I linked it to one of the trackers created previously so it follows the movement of the main plate. I also colour corrected it slightly to make it look a bit darker, and created a roto so it does not overlap with the mountains (I created an alpha from the roto with a ‘Shuffle’ node and copied it into the sky branch of the node tree).
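
A rough sketch of how that sky branch could be wired in Python is below. It assumes a Tracker node named ‘Tracker1’ whose match-move translate is exposed on its ‘translate’ knob; the file path and retime speed are illustrative, not my actual values.

```python
# Sketch of the sky branch: reformat, slow down, follow the plate's tracker.
import nuke

sky = nuke.nodes.Read(file='sky_timelapse.####.jpg')  # hypothetical sky clip

# Resize the clip to the project format (Reformat defaults to 'to format').
fit = nuke.nodes.Reformat()
fit.setInput(0, sky)

# The time lapse plays far too fast, so slow it down (illustrative speed).
slow = nuke.nodes.Retime(speed=0.25)
slow.setInput(0, fit)

# Follow the main plate by linking a Transform to the tracker's output.
follow = nuke.nodes.Transform()
follow.setInput(0, slow)
follow['translate'].setExpression('Tracker1.translate.x', 0)  # assumes a Tracker node named Tracker1
follow['translate'].setExpression('Tracker1.translate.y', 1)
```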

I did not like the look of the grass in the foreground, as it had a golden colour from the original lighting, so I decided to add some coloured fog in front to disguise this. I found a fog video online and added it to the comp. I also colour corrected it and made it purple so it matched the colour palette I wanted to achieve (black, blue/green, and purple).

Following on, I added my 3D air balloon model to the comp. I added four air balloons with different scales, positions, and movement, and colour corrected them, adding some purple and blue highlights and making them a bit darker. To make the comp a bit more interesting, I also added magical, colourful trails to two of the air balloons, again in purple and blue tones.

Then I wanted to add the text ‘Balloon Festival’, as if this were the promotional video of an actual festival. I created a neon effect by adding a ‘Glow’ node so the middle of the type is white and the edges have a blue glow. I also used the ‘Neon 80’ font to make it look more convincing. Then I added a roto mask to the text to create the transition of the air balloon passing by and the text appearing behind it.
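
The neon text boils down to a text branch run through a glow before it is merged over the comp. The sketch below approximates that setup; the Glow knob names (‘tolerance’, ‘brightness’, ‘size’) and all the values are assumptions, the font is chosen in the node’s properties panel, and ‘Grade_final’ is a stand-in name for the main comp branch.

```python
# Sketch of a neon title: white text core with a glow blooming around it.
import nuke

title = nuke.nodes.Text2(message='Balloon Festival')  # font ('Neon 80') set in the properties panel

# Glow so the centre stays white and the edges bloom
# (knob names and values are assumptions for illustration).
glow = nuke.nodes.Glow2(tolerance=0.5, brightness=1.5, size=20)
glow.setInput(0, title)

# Composite the glowing title over the graded plate (assumed node name 'Grade_final').
over = nuke.nodes.Merge2(operation='over')
over.setInput(0, nuke.toNode('Grade_final'))  # B input: the main comp branch
over.setInput(1, glow)                        # A input: the title
```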

Moreover, I added a frame of blue and purple animated neon lights with a foggy texture that I found online. As I did with the fog and the colour trails, I merged it into the main plate using the ‘screen’ operation in the ‘Merge’ node, so the black background is not visible and only the neon lights show.
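
Screening an element shot over black is just a ‘Merge2’ set to ‘screen’. A minimal sketch, assuming a Read node for the downloaded neon-frame clip and an existing comp branch named ‘main_comp’:

```python
# 'screen' keeps the bright neon pixels and lets the black background drop out.
import nuke

neon_frame = nuke.nodes.Read(file='neon_frame.mov')  # hypothetical downloaded clip

screen = nuke.nodes.Merge2(operation='screen')
screen.setInput(0, nuke.toNode('main_comp'))  # B input: main plate branch (assumed name)
screen.setInput(1, neon_frame)                # A input: element over black
```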

Since Nuke is not very good at working with sound, I exported the final sequence with the Write node and imported it into After Effects to add the audio. I could also have done this in Premiere Pro, but I was having some problems with my version of the programme, so I used After Effects as a quicker solution. I found an 80s-style royalty-free track fittingly called ‘Stranger Things’ (Music Unlimited, 2022), imported it into After Effects, and just added a fade-out at the end.

Final result

The final result has a fun, eye-catching look, and the 80s music sets an ambience that suits the style. It has been a long and hard process for me, as I struggled a bit with the order of the nodes, with when to add certain nodes like ‘Premult’, ‘Shuffle’, and ‘Copy’, and with when to connect nodes through the ‘Roto’ mask link rather than a regular link. At the end of the day, with practice everything started to make sense, and now I can say that I feel comfortable with Nuke’s compositing process and structure.

References

Apisit Suwannaka. Drifting Smoke Motion Design on Black Background [online]. Available at: https://www.vecteezy.com/video/2973097-drifting-smoke-motion-design-on-black-background [Accessed 19 November 2022].

Distill, 2016. Time Lapse Video of Aurora Borealis [online]. Available at: https://www.pexels.com/video/time-lapse-video-of-aurora-borealis-852435/ [Accessed 19 November 2022].

John Studio. Beautiful Colorful Particles or Smoke Abstract Background [online]. Available at: https://www.vecteezy.com/video/3052087-beautiful-colorful-particles-or-smoke-abstract-background [Accessed 19 November 2022].

Mim Boon. Neon Frame Background Animation [online]. Available at: https://www.vecteezy.com/video/12276978-neon-frame-background-animation [Accessed 19 November 2022].

Music Unlimited, 2022. Stranger Things [online]. Available at: https://pixabay.com/music/synthwave-stranger-things-124008/ [Accessed 27 November 2022].


Week 9: Blur, Defocus, and 2D Clean-up in Nuke

In this session, we learnt how to use ‘Blur’ and ‘Defocus’ in a scene and how to do a 2D clean-up using the ‘RotoPaint’, ‘Difference’, ‘ReGrain’, and ‘Grain’ tools.

In order to add realism to a scene, it is a good technique to add some ‘Blur’ or ‘Defocus’ to it; however, depending on the desired effect, we use one or the other. ‘Defocus’ emulates what happens with a real lens when it is out of focus, and since this is a more realistic and natural effect than ‘Blur’, it is the more common choice when a cinematic, clearly visible effect is wanted. ‘Blur’, on the other hand, is used when we need to soften a colour or something minimal that is going to be barely visible (more for correction purposes than as an effect).
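
Both effects are single nodes; the practical difference is the filter they apply (‘Defocus’ gives a lens-like bokeh, ‘Blur’ a gaussian softening). A quick sketch with illustrative sizes, assuming the standard ‘defocus’ and ‘size’ knob names:

```python
# Defocus for a visible, lens-like effect; Blur for subtle softening/corrections.
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')  # hypothetical plate

cinematic = nuke.nodes.Defocus(defocus=15)      # larger, visibly out-of-focus look
cinematic.setInput(0, plate)

subtle = nuke.nodes.Blur(size=2)                # small gaussian blur, barely visible
subtle.setInput(0, plate)
```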

‘ZDefocus’ and ‘ZBlur’ are used to defocus or blur specific areas of the plate, and they can also take the depth of the shot into account when the alpha is converted into a depth channel. With these nodes we can also defocus or blur using a filter shape such as a disc or a bladed iris, or following a roto we made. They can be combined with the ‘Convolve’ node to defocus or blur using a roto shape in different forms.

Nuke is also used for cleaning up a scene. This can be done with the ‘RotoPaint’ node, with which we can paint, clone, blur, dodge, and burn specific areas of the shot. After this, we can add a ‘Difference’ node to extract an alpha from the ‘RotoPaint’ area, followed by ‘Copy’ and ‘Premult’ nodes. We can also add a ‘FrameHold’ node to freeze the reference frame on which we are going to do the roto painting.
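
Wired up in order, that clean-up chain looks roughly like the sketch below. It is a minimal version under a few assumptions: the reference frame number is made up, the FrameHold knob is assumed to be ‘first_frame’, and the actual paint strokes are made by hand in the viewer.

```python
# Sketch of a patch pipeline: freeze a frame, paint it, pull the painted
# difference as an alpha, copy it onto the paint, premult, and merge back over.
import nuke

plate = nuke.nodes.Read(file='school_shot.####.exr')  # hypothetical plate

hold = nuke.nodes.FrameHold()
hold['first_frame'].setValue(1010)                    # assumed knob name / reference frame
hold.setInput(0, plate)

paint = nuke.nodes.RotoPaint()                        # clone/paint strokes done in the viewer
paint.setInput(0, hold)

diff = nuke.nodes.Difference()                        # alpha where the paint differs from the held frame
diff.setInput(0, hold)
diff.setInput(1, paint)

copy_a = nuke.nodes.Copy(from0='rgba.alpha', to0='rgba.alpha')
copy_a.setInput(0, paint)                             # B: painted frame keeps its RGB
copy_a.setInput(1, diff)                              # A: take the difference alpha

premult = nuke.nodes.Premult()
premult.setInput(0, copy_a)

patch_over = nuke.nodes.Merge2(operation='over')
patch_over.setInput(0, plate)                         # B: original plate
patch_over.setInput(1, premult)                       # A: premultiplied patch
```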

Once we have added the patch or correction to our shot, it is good practice to add grain to match the grainy texture of the footage so the patch blends in. We can use the ‘Grain’ node, which can be applied through the alpha so it only affects the alpha area rather than the whole plate, or the ‘ReGrain’ node, which affects the whole plate and would double the grain (so it cannot be applied multiple times).

This week’s task was to practise what we learnt by doing a clean-up of the school shot provided: removing some papers that are on the wall, adding some roto paint on the side of the lockers, adding something to the background door (in my case, some animated text), adding something in perspective on the left-side doors (I added a video of what looks like a circular magic portal), and adding something interesting on the floor (I added another magic portal).

Original Plate

To start, I wanted to remove some papers from the pin board on the right. To do so, I added a ‘RotoPaint’ node and used the ‘clone’ tool to paint over the papers using the texture of the board. Then, with a regular ‘Roto’ node, I created the alpha of the painted area, followed by a ‘FilterErode’ to soften the edges and a ‘Premult’ to premultiply it by that alpha. All of this was done under a ‘FrameHold’ node so it is easier to build up the roto. Then I tracked the area with four tracking points and created a ‘Transform (match-move)’ tracker node to match the movement of the scene. Finally, I added the ‘Grain’ node to match the grain of the painted patch with the scene grain and merged it with the main comp.

Secondly, I added an animated ‘RotoPaint’ to the side of the lockers. I reused the existing tracker node that had been used to remove the poster in the same place where I wanted to add the new paint. I created a ‘Transform (match-move)’ tracker node and attached it to the ‘RotoPaint’ node with the animation. To animate the painting, I played with the colour and opacity, adding keyframes on these parameters. Then I linked this to the main comp.

Thirdly, I added some text to the back doors, tracking the area first and then adding a baked ‘CornerPin2D’ to fix the frame, followed by another one to match-move the scene movement. I also added an animation to the text by keyframing its colour, and then merged it into the main comp.

For both magic portals I used the same technique as last week: a ‘PlanarTracker’ and a ‘CornerPin2D (relative)’ to pin the image to the selected area. I reformatted both clips and corrected their saturation and grade. Then I merged them into the main comp using the ‘screen’ operation so the black background disappears and the colours take on a transparent look.

‘Merge (screen)’ node

Week 8: Planar Tracking in Nuke

In this lesson, we checked further nodes in Nuke and we learnt how to use a planar track to add a flat image to a sequence.

We reviewed nodes such as ‘Reformat’ (to change a sequence's format to match the main plate), ‘Crop’ (to crop an image or video as required), ‘Merge’ (we saw how to use it to set the bounding box of a sequence to the alpha layer or the background layer), and ‘Shuffle’ (to add or remove channels – R, G, B, alpha, and depth).

We also learnt how important concatenation is in a Nuke comp. Concatenation is the way Nuke combines consecutive transform nodes and calculates them as a single operation, so the image is only filtered once. These calculations need to follow a logic, and if the chain is broken, the final result will not hold up. Following on from this, we analysed several ways to organise the nodes in Nuke so they follow an order and we achieve the desired result without errors.

Finally, we also studied how to use the ‘PlanarTracker’ to add a 2D image into a 3D space and make it follow the movement of the sequence. First we add the ‘PlanarTracker’ node, select the area we want with tracking points, and track it as we would with a regular ‘Tracker’ node. Then we turn on the grid and align it to the tracking points to create the desired perspective, and finally we create a ‘CornerPin2D (absolute)’ to generate the tracker node that we will link to the image we want to add. We can track translation, scale, and rotation together or separately if desired. When there is an object in front of the area we want to track, we can track the object separately with another bezier in the same ‘PlanarTracker’ node, so Nuke treats that object as an exclusion area (it is not taken into consideration when tracking the area we want).
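
The planar track itself is done in the viewer, but the baked ‘CornerPin2D (absolute)’ it exports is essentially four animated corner positions. A minimal sketch of what that node amounts to, with made-up corner values at a single frame (in a real bake the ‘to’ knobs are keyframed per frame):

```python
# The CornerPin2D maps the image corners ('from') onto the tracked plane corners ('to').
import nuke

poster = nuke.nodes.Read(file='poster.jpg')  # hypothetical poster image

pin = nuke.nodes.CornerPin2D()
pin.setInput(0, poster)

# Image corners (assuming a 2048x1556 poster for illustration).
pin['from1'].setValue([0, 0])
pin['from2'].setValue([2048, 0])
pin['from3'].setValue([2048, 1556])
pin['from4'].setValue([0, 1556])

# Tracked plane corners at this frame (made-up values).
pin['to1'].setValue([850, 400])
pin['to2'].setValue([1300, 420])
pin['to3'].setValue([1290, 760])
pin['to4'].setValue([860, 740])
```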

As homework this week, we were asked to add an image to the following sequence using what we learnt in class.

First poster planar tracker showing bezier and grid lines adjustment

I added both posters using a ‘PlanarTracker’ node to track the plane where I wanted to add each poster. For the left poster I just tracked it, adjusted the grid lines to the perspective plane I wanted, and then created a ‘CornerPin2D (relative)’ to be linked to the poster. This node lets the added poster or image follow the movement of the shot we have tracked.

For the second poster, it was necessary to add a second bezier tracking the pole that passes in front of the poster, so the programme understands that the area of the second bezier should not be taken into consideration when tracking the first bezier area (it is excluded). The roto of the pole had already been added to the comp by the professor, so I just had to ‘Merge’ the second poster’s ‘CornerPin2D’ into the main comp. I also adjusted the ‘Grade’ and ‘Saturation’ of the posters, skewed them slightly with a ‘Transform’ node to fit the perspective exactly, and added some ‘Blur’ to soften the sharp edges of the posters and blend them into the comp.

My Nuke comp with both posters added
Posters in the street added using the ‘Planar tracking’ technique

This practice seemed pretty easy to me compared with other assignments, as ‘Planar tracking’ is a straightforward tool. However, at the beginning I had a problem with the middle poster, which has the pole obstructing part of the view in front of it. The ‘PlanarTracker’ was not reading the area properly: the tracking points were jumping from the selected area to a completely different one and were not keeping the perspective I wanted. I solved this by making the tracking area bigger so the programme had more information to build the track across the frames. I also colour corrected the posters to blend them with the scene and make it more realistic. Overall, I am very happy with the result.


Week 7: Match Moving – Point Tracking in Nuke

In this lesson, we learnt how to stabilise a shot using 2D tracks.

With a ‘2D Track’ node, we can track the camera movement of a scene frame by frame to match it with another element. Then, with a ‘Transform’ node, we can change the translation, rotation, and scale of the frame to stabilise it. We can also create several tracking nodes from the main ‘2D Track’ node to automatically stabilise the scene, to match-move the scene, and to remove or add jitter.
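
As a rough illustration of the stabilise idea: a ‘Transform’ that applies the inverse of the tracked movement. This sketch assumes a Tracker node named ‘Tracker1’ whose solved match-move translate is available on its ‘translate’ knob; the stabilise export generated by the Tracker node does this (plus rotation and scale) for you.

```python
# Sketch: cancel the tracked camera movement by applying its negative.
import nuke

plate = nuke.nodes.Read(file='iphone_shot.####.mov')  # hypothetical plate

stabilise = nuke.nodes.Transform()
stabilise.setInput(0, plate)
stabilise['translate'].setExpression('-Tracker1.translate.x', 0)  # assumed tracker name/knob
stabilise['translate'].setExpression('-Tracker1.translate.y', 1)
```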

Sometimes the scene has too much noise or grain and the 2D tracker is not able to track it properly. In this case, we can use a ‘Denoise’ node to reduce the image noise or grain, so the camera tracker does not struggle to read the pixel sequence between frames. We can also use ‘Laplacian’, ‘Median’, or ‘Grade Contrast’ to correct the grain.

As usual, it is important to set a Quality Control (QC) backdrop so we can check that the tracking or any rotoscoping added is properly done.

The assignment for this week is to stabilise the iPhone shot and to add the phone’s screen animation with rotoscoping and ‘CornerPin’ nodes.

Full comp
Final result

iPhone comp improved: I tried to improve the finger roto using the green despill setup that the professor sent to us, and I also improved the screen animation using the ‘Curve Editor’ to soften the starting and end points of the movement.

Improved comp with green despill
Improved comp

I struggled a bit with the finger rotoscoping: when the fingers are moving faster, it is hard to roto the motion blur. The green despill setup we got from the professor helped a bit, but I still do not fully understand how it works, so I am sure I could improve this comp once I learn how the green despill technique works.


Week 6: Merging and Colour Matching in Nuke

In this lecture, we learnt how to colour correct a sequence, the different colour spaces of a file, and how to import and export it.

We saw how to use the ‘Grade’, ‘ColourCorrect’, ‘Toe’, and ‘Blackmatch’ nodes to correct the colour of a sequence. These nodes can be used to correct specific parts of a sequence using rotos or to colour grade an alpha. Alphas need to be premultiplied to be added to the background plate; however, some alphas already come premultiplied, so in this case we add an ‘Unpremult’ node, then the ‘Grade’ and/or ‘ColourCorrect’ nodes, and then a ‘Premult’ node again.
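
That order can be summarised in three nodes. A minimal sketch of grading an already-premultiplied element (the file path and grade values are illustrative):

```python
# Grade a premultiplied element correctly: divide out the alpha, grade, re-premultiply.
import nuke

element = nuke.nodes.Read(file='airplane_premult.exr')  # hypothetical premultiplied element

unpremult = nuke.nodes.Unpremult()
unpremult.setInput(0, element)

grade = nuke.nodes.Grade(multiply=0.8, gamma=1.1)       # illustrative correction
grade.setInput(0, unpremult)

premult = nuke.nodes.Premult()
premult.setInput(0, grade)
```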

It is also important to take into consideration the codec or colour space and the linearisation of the imported file: depending on what we are going to use the file for, we will need either more information preserved in the file or a smaller file size. In a film production, the files can be shared with compositors as LUTs, CDLs, or graded footage. We also discovered the ‘OCIOColorSpace’ node, which is used when the footage provided has already been graded.

Lastly, we saw proper ways to build up a node map to grade and colour correct footage, separating the primary and secondary colour corrections and then correcting the shadows as a final step. This way, if more amendments are requested, we can make the changes more quickly.

The assignments for this week were to colour correct an airplane alpha to match its background and to carry on making some colour corrections to the previous mountains video using the roto created last week.

We were also asked to plan our air balloon sequence, which we will be building up until the end of term 1. My main idea for my air balloon video is a dark style, with neon and glowing lights, and mist and lightning around the mountains.


Week 5: Tracking and Premultiplication in Nuke

In this lecture, we saw the technique used to track the camera movement in a scene and how to combine or premultiply several sequences.

In order to track the movement of a scene, we can add tracking points in Nuke that will detect the camera movement. This is a useful tool for rotoscoping since we will not have to adjust the roto in every single frame because of the camera shake. Sometimes it is important to add several tracking points as the camera movement will be different in the foreground, middleground, and background because of the motion parallax.

On another note, we can also combine several elements, such as rotos, in Nuke with a ‘Merge’ node. However, it is important to keep in mind that the alpha channel value always has to stay between 0.0 and 1.0. This can be handled by changing the way the layers interact with each other, with operations like ‘screen’, ‘over’, ‘max’, etc. ‘ChannelMerge’ nodes can also be used for this, but they are not as reliable as ‘Merge’ nodes.

When layering scenes, there is a tool called ‘Premult’ that is used in most cases. This node premultiplies the RGB values by the alpha so the two layers are visible at the same time. It is also important to combine the ‘Premult’ node with ‘Copy’ to add the alpha to the background.
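
As a small sketch of that ‘Copy’ and ‘Premult’ combination (assuming a hand-drawn Roto supplies the alpha and the file paths are placeholders):

```python
# Copy the roto's alpha into the foreground stream, premultiply, then merge over the background.
import nuke

fg = nuke.nodes.Read(file='foreground.####.exr')  # hypothetical foreground plate
bg = nuke.nodes.Read(file='background.####.exr')  # hypothetical background plate
roto = nuke.nodes.Roto()                          # shapes drawn in the viewer

copy_alpha = nuke.nodes.Copy(from0='rgba.alpha', to0='rgba.alpha')
copy_alpha.setInput(0, fg)    # B: keep the foreground RGB
copy_alpha.setInput(1, roto)  # A: take the roto's alpha

premult = nuke.nodes.Premult()
premult.setInput(0, copy_alpha)

comp = nuke.nodes.Merge2(operation='over')
comp.setInput(0, bg)          # B: background
comp.setInput(1, premult)     # A: premultiplied foreground
```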

The assignment this week was to rotoscope the bridge from the running man’s video and the mountain from the air balloon project using tracking points.

Running man final roto
Mountain roto

Week 4: Rotoscoping in Nuke

In this class we discovered the basics of rotoscoping in Nuke.

Rotoscoping is used to create alpha channels (mattes) that match the motion of the footage. With this, we can change the subject’s background or create different effects with layering.

In Nuke, we learnt basic rotoscoping, using beziers to create the alpha channel and feathering to soften its edges.

This is the matte I created from the running man video the professor sent to us. It was made in parts, starting from the head of the running man down to the legs.

Quality Check
Final roto

Rotoscoping can be a tedious job in my opinion, but with practice and experience it could become a quicker and more pleasant task, as well as a rewarding experience given the final result achieved.


Week 3: Intro to Digital Compositing and Nuke Software Interface

In this session, we discovered the different roles of a digital compositor within a production or VFX company, along with the production stages followed to create a film, video game, commercial, etc. We also saw the different compositing programmes available nowadays and had our first Nuke overview.

A Digital Compositor’s role is to create the final composite of a frame, shot, or sequence (including animation, background, graphics, and SFX). The roles or stages that a digital compositor can progress through are the following:

  • Roto Artist – focused on rotoscoping (beginner position)
  • Prep Artist – rotoscoping and patching
  • Junior/Mid/Senior Digital Compositor – they usually put the parts of a scene together and support the Roto and Prep Artists
  • Sequence Lead or Lead Compositor – in charge of a sequence
  • 2D Supervisor – organises sequences, meets with final clients, etc
  • VFX Supervisor – organises artists, clients and production

On another note, we also learnt that there are three stages to produce a film:

  • Pre-production – starting from the initial idea, which is shaped into a story and organised in a storyboard, with animatics and design.
  • Production – once the film is organised, this stage covers layout, R&D, modelling, texturing, rigging/setup, animation, VFX, lighting, and rendering.
  • Post-production – this third stage is when the compositing, 2D VFX and/or motion graphics, and colour correction take place, resulting in the final output.

Lastly, we saw the different compositing programmes available, such as After Effects, DaVinci Resolve ‘Fusion’, and Nuke 13. In this class we will be focusing on Nuke, so we had an overview of the very basic tools of the programme and created our first composition. As this programme is based on nodes and layers, which I have seen before in programmes such as Photoshop, After Effects, and Blender, it was easy for me to learn how nodes are connected to work together. I played around a bit with the programme at home later on and created a little animation based on the instructions the professor gave us. I could not export it, as I’m still not sure how to do this, but I took some screenshots of how it looked at the end.

My first comp in Nuke

My first impression of Nuke is that it is a complex programme and I will need to dedicate a good amount of practice time to get used to the node workspace, as I am more used to layer-based programmes like Photoshop and After Effects.


Week 2: Cinematography Foundation II

This week we continued with cinematography basics but this time more focused on video (moving picture).

A movie is a story told in pictures (moving images), which consists of capturing the light reflected or produced by a subject (exposure). In order to capture the desired amount of light, we need to take care of the camera's lens/optics (focus), the lens aperture (the amount of light let through), the camera shutter (how quickly it opens and closes), and the camera's digital sensor, which captures and processes the light to store it in a digital file.

Depending on the exposure set, the image can be brighter or darker. Generally, the image should be balanced around the mid-tones. If there are too many highlights or whites, the image will be overexposed, and if there are too many shadows or blacks, the image will be underexposed.

In order to reach the desired exposure value (EV), we need to take care of the ISO, shutter speed, and aperture. With the ISO we can change the sensitivity to light, typically measured from 25 to 6400 or more; the more sensitivity added, the more noise in the picture, so it should only be increased if really necessary. With the aperture of the lens (f-stop) we can let in more or less light, and it also controls the depth of field (DOF – the range of the image that is in focus). If the lens aperture is wide, the DOF decreases (shallow) and more light comes through, whereas if the lens opening is small, the DOF increases (deep) but less light comes through. With the shutter speed (how long the shutter stays open, measured in fractions of a second) we can expose the sensor to light for more or less time: a fast shutter speed freezes motion in the scene, while a slow shutter speed captures motion blur instead (and lets more light through). In cinema, playback is measured in frames per second (fps – frame rate), and depending on how smooth or how realistic we want the movement to look, we can make it higher or lower. Shutter angle is used to describe the relationship between shutter speed and frame rate in cinema (cinematic motion blur); the golden rule says that the shutter speed should be set to double the frame rate (a 180° shutter).
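
The numbers behind that paragraph are easy to check. Below is a small plain-Python sketch computing the exposure value from aperture and shutter speed, and the shutter speed suggested by the ‘double the frame rate’ (180°) rule; the example values are just illustrations.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t): a higher EV setting lets less light reach the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

def golden_rule_shutter(frame_rate):
    """180-degree shutter rule: shutter speed is double the frame rate, i.e. 1 / (2 * fps)."""
    return 1.0 / (2 * frame_rate)

print(exposure_value(11, 1 / 125))  # ~13.9 EV for f/11 at 1/125 s
print(golden_rule_shutter(24))      # 1/48 s for 24 fps
print(golden_rule_shutter(25))      # 1/50 s for 25 fps
```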

Another important aspect to take into consideration is the focal length of the lens used (the distance between the optical centre of the lens and the image sensor, measured in mm). The shorter the focal length, the wider the angle of view, and vice versa. Also, the digital file format to which we export our video (MP4, MOV, AVI, ProRes, etc.) will determine the quality, depending on where we want to play the final video (social media, TV, cinema, etc.).

The assignment for this week is to shoot 2- to 3-second videos around the city (using our personal phone cameras) and put them together into a short video (music can be added too). As I live in Cambridge, I decided to show life in this city, which has a slower pace and is not as big and noisy as London.

City life in Cambridge – Week 2 video assignment

Most of the scenery was filmed in long shots, since my phone’s zoom has really low quality and does not look as neat. However, I tried to add a zoom-in, and I also used some pans following moving subjects to make the video more dynamic. I tried to use diagonals and leading lines in the composition to add some depth to the frame. Finally, I put it all together in Adobe Premiere Pro and colour corrected it to have a consistent style.


Week 1: Cinematography Foundation

In the first session with Gonzalo, we learnt the basics of cinematography, such as what visual storytelling is and the components needed to simulate an emotional experience.

The professor made clear from the beginning that every shot or take in a movie counts. Also, the director’s requirements need to be met regardless of our own style or preferences, and each frame needs to be cohesive with the theme and style of the movie.

The composition of a scene has to be carefully crafted, taking into consideration:

  • Aspect ratio – proportions of the frames depending on what type of screen the film is going to be on.
  • Positioning – where the subjects are placed in the scene, taking in consideration rule of thirds, diagonals, focal points, hierarchy, etc.
  • Lighting – this is possibly one of the most important points to take care of in a scene, since if there is no light, there is no scene. We need to consider the intensity (brightness), the quality (soft or hard light), the size of the source (for example, small sources give harder light and larger sources softer light), the distance from the subject (the further away from the subject, the harder the light obtained), filtering (through a diffuser or by bouncing), and the angle and positioning of the light (key light, fill light, and back light are the three basic lights). We can use natural light (from the sun, for example) or artificial light (with spotlights, for example).
  • Colour – it needs to follow the style of the movie with warmer colours or colder colours (colour temperature measured in Kelvin units). Cameras in general are usually set for tungsten (3200K) or daylight (5600K).
  • Angle of the camera – this can define the mood of the scene and give the subject more or less importance. When a high angle is used (camera placed above the eye line), we can convey weakness or a lack of importance in the subject; when a low angle is used (camera placed below the eye line), the subject becomes more important and powerful.
  • Camera shots – there are several types of framing that can be used to show the subject on camera: extreme close up, close up, loose close up, tight medium shot, medium shot, medium full shot, and full shot. A closer shot is used to show more detail or intimacy. A wider shot is usually used to situate the audience in the time and place.

In order to put all these concepts into practice, the task for this week is to take a maximum of nine pictures that show or transmit the concept of ‘time’.

The speed of time

In these pictures I thought about how time passes throughout the day depending on what we are doing and where we are. For example, when we go to work in a rush or when we are doing an activity we enjoy, time usually seems to go faster. Therefore, with my DSLR camera, I took several photographs using a closed aperture (f/11 in most of the photos), a low ISO (to capture the night contrast and avoid too much noise in the dark areas), and a long exposure (slow shutter speed) in order to catch the light trails of the cars and bicycles passing down the road. I think the light trails show the rush of time, and they also give movement to a still picture, directing the viewer’s eye from one point to another.

In contrast with the rush of time, I also took photos of the stillness of time. For example, when we are at home relaxing at night after a long day, time goes slower. I also used a long exposure for the building and lamp pictures, with a low ISO and a closed aperture, to keep the contrast between light and shadows. However, for the picture of the moon between the plant leaves, I used a much faster shutter speed, since the moon is quite bright and a long exposure would result in a badly overexposed picture.

Thirdly, I took several photographs of the moon rising in the sky and put these pictures together in Photoshop to show the passing of time during the night.

Lastly, I also took several pictures of the view in front of my house throughout the day (from sunrise to sunset). In Photoshop, I put them together, showing only a triangular portion of each picture, creating the effect of time passing during the day within the same scene.