
Graphics programming-- let's begin. Maybe we should start with math.

Keep in mind, many graphics programmers are not good at math.


Wait, let's step back further-- you don't really need to know any of this, actually, to do a lot of cool things with graphics. But all of this information is a set of tools in your toolkit. When you approach a problem, instead of needing to search through Google, you'll be a little more informed about it (and, well, can maybe make better Google searches :) ).
Also remember graphics is still a young field compared to other disciplines you can learn about in school. In fact, graphics programming specifically changes drastically as GPUs change. This information is helpful, but the current techniques can be improved upon. Never hesitate to wonder how things could be better, and get creative.
Vectors and scalars
Scalar is just another word for a single number, like an integer.
Vectors hold multiple values-- three is common. They're very useful for things like positions or colors.
Dot products
If two vectors point in the same direction, the dot product is 1. If they are perpendicular (at a 90 degree angle), the dot product is 0. (Note: this holds for vectors that only represent a direction and have length 1; you can think about varying vector lengths later.)
Lighting is a great way to think about a potential use for dot products. If the light direction is perpendicular to the normal of a surface (the direction the surface is facing), the dot product between the light and the normal is 0-- the surface isn't lit, the light isn't shining on it. If the light is shining directly on the surface, the dot product between the light and the normal is 1-- full brightness. If the light is at an angle, the dot product will be somewhere between 0 and 1, depending on what the angle is.
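Here's that idea as a minimal sketch in plain Python (no graphics API involved; `dot` and `normalize` are little helpers written for illustration, and the directions are made up):

```python
import math

def dot(a, b):
    # Sum of component-wise products: a.b = ax*bx + ay*by + az*bz
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    # Scale a vector to length 1 so the dot product maps cleanly to [-1, 1]
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# A surface facing straight up, and three light directions
normal = (0.0, 1.0, 0.0)

light_overhead = normalize((0.0, 1.0, 0.0))   # shining straight down onto it
light_side     = normalize((1.0, 0.0, 0.0))   # perpendicular to the normal
light_angled   = normalize((1.0, 1.0, 0.0))   # 45 degrees off

print(dot(normal, light_overhead))  # 1.0  -> fully lit
print(dot(normal, light_side))      # 0.0  -> not lit
print(dot(normal, light_angled))    # ~0.707 -> partially lit
```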
Scalars multiplying with vectors
A scalar, when multiplied with a vector, just multiplies each value of the vector by the scalar amount.
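In code that's just one multiply per component-- a tiny sketch (`scale` here is a made-up helper, not a standard function):

```python
def scale(s, v):
    # Multiply each component of the vector v by the scalar s
    return tuple(s * x for x in v)

velocity = (1.0, 2.0, 3.0)
print(scale(2.0, velocity))  # (2.0, 4.0, 6.0)
```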
Matrices
Matrices are very useful!
A large use of matrices is transforming a set of positions from one "space" into another.
UI is my favorite way to start thinking about this transformation. If I'm positioning units in a user interface that exists within the world, I don't want to think about where they actually lie in world-space coordinates. It's nicer to think of, say, the top corner of a grid as the origin (0,0). It's simpler. I can use a matrix to transform all the positions in my UI grid to world space when needed.
In graphics, you have a few important spaces. The specifics of the ones you end up using depend a bit on the graphics engine and the graphics API you use, but the basics remain the same. You have object/model space, world space, view space, and screen space. Matrices help you transform between each of these "spaces".
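The UI-grid example above can be sketched in plain Python (no engine API; a 3x3 matrix moves 2D points written in homogeneous form, and the grid's world position is made up for illustration):

```python
def mat_mul_point(m, p):
    # Multiply a 3x3 matrix by a 2D point in homogeneous form (x, y, 1)
    x, y = p
    v = (x, y, 1.0)
    return tuple(sum(m[row][col] * v[col] for col in range(3)) for row in range(2))

# Suppose the top corner of our UI grid sits at (100, 50) in world space.
# This matrix translates grid-local coordinates into world coordinates.
ui_to_world = [
    [1.0, 0.0, 100.0],
    [0.0, 1.0,  50.0],
    [0.0, 0.0,   1.0],
]

# A unit placed two cells right and one cell down, in grid-local space...
local = (2.0, 1.0)
print(mat_mul_point(ui_to_world, local))  # (102.0, 51.0) in world space
```

Rotation and scaling slot into the same matrix, which is why this is nicer than hand-adding offsets everywhere.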
Wait, what's a graphics API anyway?
A graphics API is a set of instructions your CPU sends to your GPU. Think things like "go draw this", "these are the vertices of the mesh you're drawing", or "this is where my camera is".
OpenGL is a graphics API, DirectX is a graphics API.
But aren't there many kinds of GPUs? How does one instruction work for all of them? Good question! This is where graphics drivers come into the picture. The graphics driver will take the instructions you've sent to the GPU and turn them into instructions the particular GPU on your computer understands, along with specifics having to do with execution.
New graphics APIs like Vulkan or DirectX 12 claim to be "lower level" graphics APIs in the sense that instead of relying so much on lots of driver code, you provide more of the details the driver normally handles yourself, through the graphics API. This allows you to customize your code more, as well as bypass less-than-awesome drivers, but the downside is these APIs are more complicated.
Cool. So back to math...
Mathematics is a big deal in graphics!
A simple lighting equation could dot the normal with the direction of the light, then add an ambient value. Why the ambient value? In real life, light bounces around everywhere. We don't have the time, energy, or compute power to simulate that in real-time graphics-- we have to fake it (we wish our computers were powerful enough not to need to). Adding the ambient value simulates the light bouncing everywhere and making things brighter.
Fog could use a similar concept, factoring in a distance value too.
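Both ideas fit in a few lines-- here's a minimal sketch in plain Python, where the ambient amount, fog color, and fog density are made-up constants for illustration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_with_ambient(normal, light_dir, ambient=0.1):
    # Clamp the dot product at 0 so surfaces facing away aren't "negatively" lit,
    # then add a constant ambient term to fake light bouncing around the scene.
    return min(1.0, max(0.0, dot(normal, light_dir)) + ambient)

def apply_fog(brightness, distance, fog_color=0.8, density=0.05):
    # Blend toward the fog color as distance grows (simple exponential falloff)
    fog_amount = 1.0 - math.exp(-density * distance)
    return brightness * (1.0 - fog_amount) + fog_color * fog_amount

lit = lambert_with_ambient((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
print(lit)                  # 1.0 -> fully lit (clamped)
print(apply_fog(lit, 0.0))  # 1.0 -> no fog right at the camera
```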
A fancier solution to the "wait guys, light actually bounces everywhere, stop faking your lighting so much" problem would be something like global illumination with light probes-- sampling an image of the surrounding scene at points. For instance, if you're near a red wall, this sampled image will have a big red blob on it. That sample will influence lighting, and make the side of your object facing the wall more red.
How would we do specular? We could take the vector halfway between the light direction and the view direction, dot it with the normal, and then raise that to a power (squaring it is the simplest version). If you think about the math behind this, you can see how that would produce a more specular (shiny) highlight on an object. Just one solution!
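A sketch of that half-vector idea in plain Python (the directions and the exponent here are made up; real shaders would also multiply by a material color):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def specular(normal, light_dir, view_dir, power=2):
    # Half vector: the direction exactly between the light and the viewer
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    # Dot with the normal, clamp at 0, raise to a power to tighten the highlight
    return max(0.0, dot(normal, half)) ** power

n = (0.0, 1.0, 0.0)
l = normalize((1.0, 1.0, 0.0))   # light off to one side
v = normalize((-1.0, 1.0, 0.0))  # camera mirrored on the other side
print(specular(n, l, v))  # 1.0 -> highlight is brightest exactly between them
```

A bigger power means a smaller, sharper highlight-- that's the knob artists usually call "shininess".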
Getting a good feel for mathematics is about much more than being able to solve equations without Google. If you have a good mental model for what different math operations do, if you can picture it, you can dream up artistic new effects and make them a reality.
Memory
Understanding memory is also so important.
A computer spends a significant amount of time modifying and moving around memory.
If we just had all this memory in one big block, it would take too long to access elements. We can usually figure out what we'll need to access soon, and keep that close.
"Keeping it close" brings us to the concept of caches. Thinking just in terms of the CPU, we have the ALU, registers, and the L1, L2, and L3 caches-- each level is a little farther out (slower to access) but stores more memory. Beyond that, we have main memory, disk, and actually, the network can be seen as an outer level in this mental model (streaming data from the internet, for instance).
GPUs are different.
Wait, how is GPU memory different?
GPUs process data in parallel. Very much in parallel.
Remember the ALU? GPUs have tons of those. The GPU is great at running lots of very simple processes at the same time. The processes can't talk to each other at all, remember.
GPUs typically do also have an L1 and L2 cache, but just remember that memory is more limited here, and remember the parallel nature of the ALUs.
Also remember that the data sent to the GPU comes from the CPU, over a bus. That's a bottleneck. The bus can move a surprisingly large amount of data, but it's still something to remember.
What kind of data is sent over? Well, remember the part about graphics APIs. Typically the biggest data sets will be texture data, and things like vertex buffers and index buffers.
What's a vertex buffer and an index buffer?
A vertex buffer contains a list of positions (and often other attributes, like normals or texture coordinates) for all vertices in a mesh.
An index buffer lists which of these vertices form triangles, in order.
You could just have a vertex buffer, but you'd have a lot of repeating data. An index buffer allows us to be smart about our memory usage so you can use the memory on more interesting things.
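Here's the savings for a single quad (two triangles), sketched in plain Python with made-up coordinates:

```python
# Four unique corner positions of a quad
vertex_buffer = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]

# Six indices describe two triangles, reusing the shared corners
index_buffer = [0, 1, 2,   # first triangle
                0, 2, 3]   # second triangle

# Expand the indices back into triangles to see what the GPU will draw.
# Without indices we'd store 6 full vertices (two repeated) instead of 4.
triangles = [tuple(vertex_buffer[i] for i in index_buffer[t:t + 3])
             for t in range(0, len(index_buffer), 3)]
print(len(vertex_buffer), len(index_buffer), len(triangles))  # 4 6 2
```

On real meshes most vertices are shared by several triangles, so the savings grow fast-- and indices are small integers, much cheaper than full vertices.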
Keep in mind, you can get creative with how you use buffer data sent to the GPU. Have you heard of voxels? You can send voxel data to the GPU in buffers, and read that in your shaders.
If I can get creative with how I send buffers, can I use the GPU for processes other than graphics?
Yes, absolutely! Check out compute shaders.
AI is using GPUs quite heavily, too. Any field that tends to do a lot of simple operations at the same time can make use of a GPU.
What other things do we keep in mind for optimization?
Always think about caches. Avoid cache misses. People tend to like arrays, for instance, because their data is contiguous in memory. When you load one element of an array, you're likely to need other elements too, and since the array is contiguous, that whole block of memory gets pulled in together-- so other elements of the array end up in your cache as well. This is opposed to structures like linked lists, where each element is just a value and a pointer to another element somewhere farther away.
Hardware's smart. It does try to fill the cache with relevant data-- for instance, it can detect whether you're accessing an array backwards or forwards, or grabbing every fourth element, and so on. Pretty amazing, actually. But knowing the basics of how hardware is structured allows us to help it out.
GPU hardware is also smart. But the thing about examining GPU hardware/compilers/drivers too closely is that they change more frequently than the CPU side of things. Still, we benefit from keeping up to date with how GPU hardware is laid out. AMD and Intel are both pretty good about being open about the specs for their GPUs-- check out those resources.
We can also know things like: if you have a lot of conditional statements in a shader, the GPU is likely to just execute both sides of each branch. Remember, it has all those parallel units, and it's trying to put them to use! So maybe avoid that if you know it's not necessary. By keeping up to date on GPU specs, we can also determine things like whether we should use more scalar or vector operations.
Remember that often, usability trumps optimization. Be educated about hardware and optimization, but don't sacrifice readable and usable code for optimized and unreadable code unless you have to.
Other notes
Spend some time with assembly, if you can, or microcontrollers. It gives interesting insight into hardware and the way things work under the hood. It's also really not that hard!
And I'll echo what I said in the beginning. So many graphics techniques are so new compared to techniques in other disciplines. You know, computer science is new in general! Experiment. Play. Don't be afraid to try a new idea.
