
Toolkit 2 - Lighting and

Rendering 1: Arnold
Pt2.
• Fog (Legacy) & Atmosphere.
• Displacement Maps (Standard and Vector).
• Stand-ins (proxies).
• Motion Blur.
Fog (Legacy) & Atmosphere • Atmosphere can be added using Arnold in two
different ways:
• aiFog, which is a legacy and quite old
system, and the other being volumetric fog,
which is a newer system for adding
atmosphere.
• Here we have a preset scene with a basic
directional light with an exposure of 1, an
angle of 5 and samples of 5.
• To bring in the atmospheric effect, go to the render
settings and down to the Environment tab in the Arnold
Renderer section. You will see an Atmosphere
channel where you can set the atmosphere for
the scene; to do this, click on the black and
white checker and select Create aiFog.
• The fog node will pop up in the Attribute
Editor. Here you have the Colour, Distance,
Height, Ground Normal and Ground Point. Hit
render.
• So in the scene you now have a default
fog. This is more of a fake effect – there's not
really a volume in the scene, it's purely from
the camera's point of view. You can also
see that there is more ‘fog’ on the left
than on the right, so it will need adjusting to
make it more natural.
• So to adjust, go down to the Ground
Normal; you will see three channels named X, Y
and Z. At the moment the fog is coming
from the Z axis, as it moves from the left to
the right of the render. To fix this, set Z to
zero, change Y to 1 and hit render.
• You should see the fog direction change to
the bottom of the render view. The fog is
quite strong with little decay, so to adjust
this you can change the Distance of the fog,
which determines how ‘far’ we can see
into the fog. It is set to a default of 0.02,
so I am going to drop it down to a distance
of 0.005; now much more of the house
can be seen.
• Brought the height up a bit to 15 to get a
more even spread of fog.
• Distance: how far we can see into the fog.
• Height: how much fog we are getting from the ground point up.
• Ground Normal: the direction of the fog.
• Ground Point: the coordinates of the fog itself, so you can move the position of the fog
around in the scene.
• Colour: the colour of the fog and the colour of the background in the scene. It can also help
create some of the effects you would need in a scene.
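The Distance behaviour can be sketched as an exponential falloff. This is an illustrative guess at the shape of the curve, not Arnold's documented formula; the 0.02 and 0.005 values are the ones used in these notes:

```python
import math

def fog_visibility(depth, distance=0.02):
    """Rough sketch of how much of the scene is still visible at a given
    depth into the fog, using a Beer-Lambert style exponential falloff.
    A lower Distance value means we can see further into the fog."""
    return math.exp(-depth * distance)

# At the same depth, dropping Distance from 0.02 to 0.005 reveals more:
default_vis = fog_visibility(100.0, 0.02)
tweaked_vis = fog_visibility(100.0, 0.005)
```

So at a depth of 100 units the house is far more visible at 0.005 than at the default 0.02, matching what the render showed.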
• In the colour channel, select a colour and apply.
From there, go to the settings in the render view,
where you can temporarily change the background
colour. So pick a darker colour for the fog and a
lighter colour for the temp background.
• I then changed the height until I was
happy with the fog effect – around 30, after I
found 50 to be too much and 20 to be too little.
This created a very polluted effect with lots of
particulate matter in the atmosphere, causing the
sun to be blocked and the ground to be
reflected back up into the atmosphere.
AI fog (legacy) Outcome:
Ai Volumetric Fog. • Moving on from faking fog with Arnold aiFog to
creating true volumetric effects.
• Preset scene has a spotlight already set up and
positioned up behind the trees.
• Did a quick render to preview the scene.
• To add fog, bring up the render settings and go
down to the Environment tab, and instead of
selecting aiFog, select the aiAtmosphereVolume.
To edit the effect, the node will open
up in the Attribute Editor. Initially the fog won't
appear in the render view, as there is no density
within the scene at the moment, so set Density to 0.1
and take a quick render; the effect is immediate in the
scene.
• We are also seeing the light source now, whereas
before we could only see the spotlight on the
floor. A key thing to remember is that any time you are
using volumetrics you are always
going to see the light source, whilst you don't
normally see it in any other render.
• Density: the thickness of the fog.
• Colour: the colour of the fog.
• Attenuation: adds decay into the scene – typically
reducing the colour in the scene.

• Attenuation Colour: works with the Attenuation by reversing its effect. So where
Attenuation makes the scene darker, Attenuation Colour will make the scene lighter to counteract its
partner.
• Anisotropy: a bias in how the fog scatters light relative to the light source. Basically it allows you
to determine how much light you want to let into the scene.
• Samples: helps clean up the scene in terms of noise and particulate matter.
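Anisotropy in volume rendering is typically the g parameter of the Henyey-Greenstein phase function. As a sketch (assuming, though these notes don't confirm it, that Arnold's slider maps to this g), a positive value biases scattering towards the light direction:

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function. g = 0 scatters light evenly in
    all directions; g > 0 favours forward scattering, letting more of the
    light through towards the camera."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

With the 0.184 value used later in these notes, forward scattering (cos_theta = 1) returns a larger value than back scattering (cos_theta = -1).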

Contribution attributes:
• Camera: if you lower the camera contribution, you remove the atmosphere from the scene.
• Diffuse: by default the diffuse is deactivated, but if you were to increase it you can allow the
atmosphere to affect the colour in the scene.
• Specular: activated by default.
• Set Anisotropy to 0.076, density to 0.01, Attenuation
0.241, colour to a mint green or a colour of your
choosing.
• If you were to change the colour on the Attenuation
Colour, it works as a subtractive effect. So whatever
colour is brought in, it will create a strange effect,
meaning you would have to use the colour opposite to
the one you are using in the main colour node. So say, for
example, you are using green in the main colour node:
you would have to pick a red/pink colour to get the same
sort of green colour in the render view.
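The opposite-colour rule can be sketched as a simple RGB inversion (`complement` is a hypothetical helper for illustration, not a Maya function):

```python
def complement(rgb):
    """RGB complement: the Attenuation Colour needed to counteract a
    given main fog colour (each channel mirrored around 1)."""
    return tuple(1.0 - channel for channel in rgb)

# A green main colour calls for a pink/magenta attenuation colour:
green = (0.0, 1.0, 0.0)
pink = complement(green)
```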

• Lighting is a bit generic at the moment, so if you
select the spot light you can go down to light
filters, click Add and select gobo. Find the node and
then, under the slide map, click the checkers and
bring in a 2D texture, fractal.
• Doing so breaks up the light source to
make it a little bit more natural, and you will see
some light streaks once the settings have been
tweaked.
• To go back to the atmosphere, go to the render
settings and click the black square with the
arrow next to the atmosphere node.
• Increase the exposure on the spot light to 13. In the
atmosphere, change Attenuation to 0.289,
lower the Density to 0.078 and set Anisotropy
to 0.184.
Clean up
• Render a region over the front leaves of the tree
on the right.
• Increase Samples in the atmosphere to
12.
• In the render settings: Camera AA set to
4, SSS 0 / Transmission 1 / Diffuse
2 / Specular 2 / Volume Indirect 2.
• Now ready for the final render.


Ai Volumetric Fog Outcome:
Points to note with Ai Volumetric
Fog.
• Can take a long time to render.
• A common mistake when cleaning up the image is to
increase the render settings in the Arnold renderer,
when it can easily be tackled by going to the sample
nodes to help reduce the noise. This way you aren't
adding unnecessary render time to your scene.
Displacement.

• Using Mudbox to generate texture maps to then use in Maya. There are two types of displacement maps in
Mudbox, one being a standard displacement map and the other being a vector displacement map.
• Standard Displacement Map: Is one of the oldest maps. It is a white to black gradient with shades of grey
which pushes and pulls the surface directly up and down based upon any given shape. The mid grey (0.5
grey) in that gradient represents no change to geometry. Black represents a push down and white
represents a push up in surface texture. A limitation of this map is that it only pushes up and down and
has no angle to its geometry, meaning we have a limitation to the geometry we actually make using this
displacement map.
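The push/pull described above amounts to moving each point along its surface normal by an amount proportional to (grey − 0.5). A minimal sketch of that rule (the height multiplier mirrors the Height attribute set later in Maya):

```python
def displace(point, normal, grey, height=1.0):
    """Standard displacement: mid grey (0.5) leaves the point unchanged,
    white (1.0) pushes it out along the normal, black (0.0) pulls it in."""
    offset = (grey - 0.5) * height
    return tuple(p + offset * n for p, n in zip(point, normal))

unchanged = displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
pushed_up = displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0, height=10.0)
```

Because the offset is always along the one normal direction, there is no way to push a point sideways: that is exactly the limitation the vector map removes.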
• Vector Displacement Map: Is a newer version of a displacement map. In this version, ‘vector’ means three
channels: X, Y and Z. Here we have a colour displacement map, which means we can not only go up
and down, but we can change the angle and the direction. Meaning we have no limitations to the
geometry we make.
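By contrast, a vector map's three channels are read as a full XYZ offset, so the direction is no longer tied to up/down. A minimal sketch:

```python
def vector_displace(point, xyz_offset, height=1.0):
    """Vector displacement: the map's R, G, B channels form an X, Y, Z
    offset, so the surface can move at any angle, not just up and down."""
    return tuple(p + height * o for p, o in zip(point, xyz_offset))

# The same point can be pushed sideways, up and back all at once:
moved = vector_displace((0.0, 0.0, 0.0), (0.5, 1.0, -0.25))
```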
In Mudbox: Dinosaur Head.

• In Mudbox we have a dinosaur head modelled. The model has 7 levels of subdivision, the
lowest being zero and the highest being level 6. There are also 2 spheres in the scene
representing the dinosaur's eyes.
• For this you will need to bring in the low and high res mesh. To do this, select the head
and hit Add Selected in low res – this brings in level zero – then, under source models,
hit Add Selected again. This then brings in the highest level (level 6).
• To generate a standard displacement map, go to Maps >
extract texture map > new operation. Select displacement
map in the pop-up window and bring up the options.
• Look at the method next; you have 2 methods. Ray casting is where
rays are cast onto the surface, or away from the surface, to figure out the
changes between the high and low res. This is done using a search distance. If
you click Best Guess, this updates to take into account our levels and gives
us the most economical and most suitable search distance for our 2
different models.
• From there choose samples and change the sample method, which then
changes the resolution for the map based upon the furthest outside/inside
or the closest to the low res mesh. Leave this on the furthest outside.
• Image properties > size of our image > make 4096 x 4096.
• Base file name > click file > name dino_head_displ_0.6 > set file type to
OpenEXR (32 bit floating point, RGBA) – this will give us an HDR map which
will allow for lots of different values between white and black, so we can
get the most out of the map as possible > hit save.
• Once set, hit extract > you will have to wait for it to generate before taking the
low res model into Maya and then plugging the map in.
• The map has been created and has been plugged in as a bump on to the dino head surface.
We will need to turn this off, so go to layers > paint > turn off visibility (eye symbol).
• Then take the low res model into Maya. To do this, select the head > hit Page Down on the keyboard
> keep going down till you hit level zero.
• Then select all 3 objects (head and eyes) > go to file > export selection > export as
Dino_head.obj > hit save.
In Maya:
• Open Maya > go to file > import > find dino_head.obj. At
first it may not seem like it has imported, but it has. This
is because the Maya universe is 10x bigger than Mudbox's,
so the model comes in at a different scale relative to the grid.
Set up scene to render:
• Create > lights > directional light > scale up and direct to model > set exposure to 1 > angle to
3 > samples to 5 > duplicate light and rotate to give a rim light to model.
• Set up resolution gate in render settings > image size HD_1080 > renderable camera set to
perspective > turn on res gate in viewport and frame up dino in viewport and take an initial
render.
• Select an eye > go to the shape node in the attributes
> Arnold > subdivisions > set type to
catclark > iterations to 2. Doing this creates a
smooth surface for the eye.
• Repeat the process for the other eye and the
head.
• From there, assign materials to the eyes and
head. To do so, right click with the mouse, go
down to assign new material > Arnold
shaders > standard shader > name it eyes
> repeat for the head and name it head. Then
bring the specular weight down to 0.370,
roughness to 0.214, and bring the
colour down slightly.
• From there, open the Hypershade > middle mouse drag the head shader to the editing
box > select it and graph the network (box with 2 arrows pointing to the right) >
select the standard surface node > in the attributes, click the displacement channel
checker > select the file node > go to the file node in the Hypershade > attributes > file
input > find the exported map from Mudbox and click open > take a render to see
what we have. Maya should convert the EXR to a .tx file (texture file) when we hit
render in Arnold.
• Adjustments to the map surface > select the head
and go to the displacement attributes and set
the height to 10 and hit render. From there,
go to subdivisions in the attributes, change
the iterations from 2 to 6 and do the final
render.
Displacement Outcome:
Vector Displacement Map:
In Mudbox: Mushroom.
• The model has 7 levels (0-6) > the model changes quite drastically when you step down the levels
(Page Down) > and there are a lot of curves, meaning a standard displacement map won't work:
it would not be able to cope with the changing angles, so a
vector map will have to be used.
• Go to Maps > extract texture maps > new operation > vector displacement > low level zero
and high level six > set image size to 4096 x 4096 > vector space > object > file >
Toadstoal_vector_displ_0.6 and set file type to OpenEXR > hit save, add the VDM in the pop-up window
and hit okay > then hit the extract button.
• From there select model > hit page down till you get to level zero > reselect level zero model
> file > export selection > Toadstool.obj
In Maya:
• Open a new scene and go to file > import > toadstool.obj > zoom out and create a directional
light > set exposure to 1 > angle to 3 > samples to 3 > duplicate and rotate for a rim light
effect.
• Then go to render settings > presets set to HD_1080 > renderable camera set to perspective
> open res gate in viewport and frame model.
• From there create a new
surface shader by assigning
a standard surface and
name Toadstool_ss > in
specular node set weight to
0.409 > roughness to 0.253
and bring colour down.
• Then open the Hypershade and middle mouse drag the toadstool
shader to the work area and graph the network. After this, select
the shader and input the displacement map in the attributes through
the file node.
• Maya typically places displacement maps in the displacement node channel > but we want it
in the vector displacement channel, so we will have to delete the 2d texture and file1 in the
Hypershade > then create a new displacement in vector displacement with the checker,
selecting file > this then allows us to bring in the vector displacement map image.
• Preview in the render - there
are a few problems, these
being that we need to
smooth the models and
correct the scale.
• To do this, select the model, go to subdivisions in the
attributes, set the type to catclark and set
iterations to 6.
Vector Displacement Map Outcome:

For some reason there was no texture map included in the file download to go with this tutorial,
so I did a full render of the untextured model.
Stand-ins (Proxies).
• To lower the strain on the viewport memory, we
can turn complex models into proxies using the
Arnold stand-in function. This allows us to work
with a larger number of objects within a scene
without slowing down the run time in the viewport. For
example, if we wanted to create a whole city or
landscape of some sort, we would be working with
lots of high quality polygon objects duplicated
numerous times. This will
slow things down in the viewport and create lag,
which then makes it hard to work quickly and
effectively in Maya.
• To solve this problem we can use Arnold stand-ins,
also known as proxy objects. To do this, you will
need to export the chosen object – in this case the
robot – out of Maya to then import back in, which
lowers the stress on the viewport at any one
time.
• To create Stand-ins you will need to select the
entirety of the chosen object -the robot
(Robot_Move_Grp)- all at once.
• From there you will need to go to the Arnold menu
tab, go down to stand-in and then the export
options.
There are a number of setting options in the export window:
• At the top, under general options, you have the file type
we are exporting. Here we have an .ass file that will be
used to export the stand-ins.
• Below that we have the file type specific options and the
export options. These relate to the look or preference for
our object. This includes the Maya light
links; for example, you might have linked up a light to the
object and then broken that link; in doing so we create a rule,
and those rules are then saved in our file type. This is
important to note, as once set up, rules can
be retained in the file type.
• Below that, in the sequence tab, we have the ability to
bring out a sequence of objects as well, meaning a
sequence of models, e.g. a character moving across the
scene. This would be slightly different to
bringing out an animated object: in this case a sequence
means a sequence of models, so for every one of the
frames our character is moving we are going to get a
different model exported out.
• Here I left everything at its default, as I just want to export
the character out as it is, to then bring back in as a proxy
later on. With this, I went to export selection, saved the
file as Robot_Proxy.ass and then hit save. In doing so I
have now created a proxy object and have sent the selected
object out as a proxy file.
• I then deleted the high polygon object and then did a quick render to ensure that there is nothing in the
scene going forward.
• I then went to the Arnold menu tab and stand-ins again, but this time clicked on the create options and
selected the proxy file to then load into the Maya viewport.
• Here you are met with a bounding box that represents the volume of the object – the robot – I
had exported out. On the right, in the attributes, you will see the stand-in node with the path
directory for the proxy object; this is important, as we need to make sure we keep these in
the same place when we open our scene, so that Maya knows where to find our objects –
much like when we attach a texture map to a polygon mesh. So try not to move said file, as
you would then need to relink the file back into the directory.
Shot_cam. Viewport.

• From there I then duplicated the selected proxy bounding box to fill the render view full of
robots; with this I created a 9 by 11 grid to fill the entirety of the shot cam.
• In the render view I then rendered the image. There were no long render times and
no added stress on the viewport, as there are no high polygon meshes to slow it down.
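The 9 by 11 layout is 99 translate values on the ground plane; the placement itself is just a grid calculation. A sketch of the positions (the spacing value is an arbitrary assumption):

```python
def grid_positions(cols=9, rows=11, spacing=10.0):
    """Translate values for a cols x rows grid of stand-ins,
    keeping every proxy on the ground plane (Y = 0)."""
    return [(col * spacing, 0.0, row * spacing)
            for row in range(rows) for col in range(cols)]

positions = grid_positions()  # 99 robots for the shot cam
```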
Few Points to Note #1:
• A common problem when working with proxies is that you are left with a bounding box, which
doesn't really indicate where our object is in relation to the direction it's facing, making it
hard to place said object. But we can change the mode of the proxy in the Viewport Draw
Mode, which allows us to see versions of the proxy, depending on what setting we choose,
instead of an empty cube – here I used the shaded option to quickly see the object.
• It is important to remember that this is only a representation of the object, as you don't see the
entirety of the object until you render it.
• You can also leave it as shaded in the viewport while you position everything, but then
change it back to bounding box to reduce stress on the viewport.
Arnold Stand-Ins (Proxies) Outcome:
Points to Note #2.
• Stand-ins can be very powerful, as they mean you can
create complex scenes but also manage and
navigate scenes quickly and effectively at the same
time.
• Another thing to note is that if you are using an
animated sequence – a sequence of models – you
are able to change the frame start and frame offset
in the attributes as well.
Motion Blur.

• Motion blur can be applied in a number of ways, the first being within the Maya render
settings.
• To do this I found the frame with the animated car in motion (frame 17) and then went to the
Arnold render settings, making sure the Arnold render view was open to check the results.
• In the render settings I then went to the Arnold Renderer tab and then to the motion blur menu and clicked
enable. This will activate the motion blur on that particular frame.
• Hit render; you should see the motion blur effect now applied.

What the functions do: in the motion blur tab you have a range of settings, these are:
• Deformation: relates to objects that are deforming, e.g. skinned characters/anything that has a surface
being pulled around by a deformer.
• Camera: is camera motion blur, meaning the camera is actually moving and will create blur based upon
the set key frames.
• Below that we have the actual motion blur function which is set for our scene.
• Position: is the position of the blur, which in this case is in the centre of the frame.
• Length: is the duration of that blur. It is based upon the key frames either side of the main frame. In
this case frame 17 is the main key frame, and frames 16 and 18 have a portion of that motion blur applied to
help create the blur.
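Position and Length together pick the shutter interval the renderer samples. A sketch of that idea, assuming a centred position and an illustrative length of 0.5 (check your Arnold version for the actual defaults):

```python
def shutter_window(frame, length=0.5, position="center"):
    """Frame interval sampled for motion blur. 'center' straddles the
    frame, which is why the neighbouring frames (16 and 18 around
    frame 17) contribute a portion of the blur."""
    if position == "center":
        return (frame - length / 2.0, frame + length / 2.0)
    if position == "start":
        return (frame, frame + length)
    if position == "end":
        return (frame - length, frame)
    raise ValueError("unknown position: " + position)

window = shutter_window(17)
```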
• Here I left the motion blur settings at their default and looked at
setting the samples for the render; for this I created a render
region and set Volume Indirect to 0, SSS to 0, Transmission to 1,
Specular to 2, Diffuse to 2 and Camera AA to 4, and then hit render.
NOTE: you can turn off the motion blur in the camera.
NOTE 2: Motion blur can be destructive, as you can
no longer go back to a sharper image once
exported, since the file now contains the motion
blur.
• From there I then created a batch render for
the new motion blur scene. Using the settings
in the common tab (see on the left) and by
going into the rendering menu, then the
render tab and going down to batch render to
then be able to render out with the motion
blur.
Pros and Cons to Motion Blur
in Maya.
PROS:
• Can help clean up animation.
• Can help the animation look much smoother than it was before.
• Can help even everything out and soften things.
CONS:
• Working destructively, as the motion blur is built into each one of those frames.
• Can cause problems with compositing in post-production.
Motion Blur in After Effects.
However, you can create the same effect in After Effects using a filter instead.
• To do this I had to use the car skid
sequence that didn’t have any
motion blur applied from Maya.
• This was then brought into After
Effects as an image sequence and
then dragged down onto the
timeline. From there I then moved
through the frames to frame 16,
went to the effects menu,
searched for ‘Pixel Motion Blur’
and dragged this down onto the
image sequence in the timeline.
• Motion blur has now been added to the sequence. This will look slightly different to the
effect in Maya/Arnold due to how it is calculated, as we are actually looking at the
position of pixels over a few frames.
• Meaning it looks at the position of one pixel on frames 15 and 17, which then gives us
this duplicated effect in our render.
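The doubled-up look can be crudely illustrated by averaging corresponding pixels across neighbouring frames. Pixel Motion Blur actually estimates per-pixel motion vectors, so this is only the naive version of the idea:

```python
def frame_blend(frames):
    """Naive temporal blur: average each pixel across the supplied
    frames (e.g. 15, 16 and 17), smearing anything that moved."""
    n = len(frames)
    return [sum(samples) / n for samples in zip(*frames)]

# Three 1D 'frames' of grey values with a bright pixel moving right:
blurred = frame_blend([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
```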
• To export sequence I went to composition and then add to Adobe Media Encoder Queue.
• Used Media Encoder instead – much faster, as you can use the h.264 setting
rather than a QuickTime like in After Effects.
There are 3 different ways to add motion blur to a scene: 1. within the Maya render settings; 2. within
post-production, e.g. After Effects; and 3. through a compositing package (motion vectors).
Points to Note about Motion
Blur:
• Pixel Motion Blur is the only effect in After Effects to achieve motion
blur.
• Is technically a cheat.
• It does give motion blur, but it doesn't give us the
accuracy of Maya.
• Once played, you don't notice this as much, since it's
moving at such a high speed.
