Planet Tech Art
Last update: April 30, 2017 11:59 PM
April 27, 2017

Vectors Primer

Understanding vectors is a crucial skill for any technical artist. In this article, we will look at the very basics, both as a refresher for those who already know the concepts and an introduction for the uninitiated.

What is a Vector?

You might recall those arrows of a certain length from high-school algebra, or you might have already tinkered with vectors in a 3D package, where they seem to be used to denote points in space. We will only be dealing with 3D vectors in the form [x, y, z], where x, y and z are single-precision floating-point numbers (basically decimals with limited precision – something it's good to always keep in mind) that describe a position offset in space along the corresponding axes. In MAXScript, they are represented by the Point3 value.

However, it's important not to treat vectors merely as lists of numbers or coordinates – especially since they are not really point coordinates per se. And since showing beats telling every time, let's have a look at this first:


What can we say about it? Well, for one, it is defined by two points. And yet it's not actually one vector, even though the point labels stay the same – at each frame we see a different vector. It's not the labelling that defines a vector, it's the direction and the length. The vectors on the following images are all identical:

[Diagram: vectors v = [A, B], u = [C, A], u₁ = [G, D] and w = [F, E], with A = (2, 1), B = (1, 2), C = (3, 0), D = (4, 1), E = (5, 2), F = (6, 1), G = (5, 0)]

You might have seen the concept described as a directed magnitude and while that's definitely more exact, I want you to think about the 3D vectors we will be using primarily as position offsets.

What does it mean? Different things in different contexts:
  • when used for point positions in world space, a vector describes the position offset from the scene origin,
  • in local space it's a position offset from the center of the object's coordinate system,
  • for a pair of points it's a direction and distance to get from one to the other,
  • and sometimes, like for example with normal vectors, we care mostly only about the direction part.

And as vectors are position-less, i.e. you can move a vector all around and it's still the same vector, let's respect the convention of placing the starting point at the origin:

[Diagram: vectors u = [O, B] and v = [O, A] placed at the origin, with O = (0, 0), A = (1, 2), B = (5, 1)]

Vector Addition

You might remember that adding vectors graphically amounts to sticking the starting point of one vector to the end point of the other, optionally adding further vectors iteratively, and finally connecting the starting point of the first one with the end point of the last one. The order doesn't matter.


That's true and lends itself nicely to the notion of vectors as position offsets in space, but let's look at it from another angle. As I've said before, I will stick to the convention of placing vector starting points at the origin, and I do that for a reason:


The difference from the previous image is that here all the vectors are placed so that they start at the [0,0] point. Instead of attaching the second vector to the end of the first one, let's draw a thin dashed line. As you can see, for both vectors the end point of the dashed line ends up in the same place: at the end of the diagonal of the parallelogram created by the vectors. This diagonal is the resulting vector.

Try moving the vertex handles around, and you will see that when both vectors have the same magnitude (length), their sum is a vector that bisects their angle. If one of them is bigger, it drags the resulting vector more towards itself.

When expressed numerically, matching pairs of coordinates are added together:
[1, 0, 5] + [-10, 2, 0] = [-9, 2, 5]
What can you use that for?
  • With more vectors of the same length, the resulting vector will point in their average direction (unless they cancel each other out).
  • When their lengths are different, the result is a weighted average (used in weighted normals where small faces contribute a smaller amount).
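To make the arithmetic concrete, here's a quick sketch in plain Python (the article's native scripting language is MAXScript, so treat this as an illustration rather than the tool's API):

```python
# Component-wise 3D vector addition, matching the [1, 0, 5] + [-10, 2, 0] example.
def vec_add(a, b):
    return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]

print(vec_add([1, 0, 5], [-10, 2, 0]))  # → [-9, 2, 5]
```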
You can visualize any 3D vector by creating axis vectors from its components. This is also a nice demonstration of vector addition in 3D space – note that in this case the resulting vector is also a diagonal of the imaginary bounding box of the three vectors:


Since the axis vectors come in handy pretty often, there are predefined MAXScript globals x_axis, y_axis and z_axis that contain the values [1,0,0], [0,1,0] and [0,0,1] respectively.
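In the same spirit, here are hypothetical plain-Python stand-ins for those globals, showing how any vector decomposes into scaled axis vectors that sum back to the original:

```python
# Plain-Python stand-ins for MAXScript's x_axis, y_axis and z_axis globals.
x_axis, y_axis, z_axis = [1, 0, 0], [0, 1, 0], [0, 0, 1]

def scale(v, s):
    return [c * s for c in v]

def vec_sum(*vectors):
    return [sum(components) for components in zip(*vectors)]

v = [3, -2, 7]
rebuilt = vec_sum(scale(x_axis, v[0]), scale(y_axis, v[1]), scale(z_axis, v[2]))
print(rebuilt)  # → [3, -2, 7]
```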

Vector Subtraction

Vector subtraction follows exactly the same rules, only with the negated vector flipped. For a pair of vectors, it helps to think of it as the other diagonal of the parallelogram: the one that connects the end points of the two vectors when they share the same starting point (which is true, for example, for all positions in space measured from the origin). The direction of the resulting vector depends on whether we are subtracting A from B or B from A.

In the next example, A is the vector that is subtracted from B, and as such becomes the new 'origin' point for the resulting vector. Whenever you find yourself deciding what to subtract from what, ask yourself what you want to treat as the origin – the subtracted position effectively becomes one. Grab the vector handles and see how the result changes:


You can also think of it this way: subtracting [0,0,0] from any position gives you the position difference that needs to be added to the origin [0,0,0] to get to that given position. The line AB is described by points A and B, and if you subtract A from both of them, the first one is suddenly [0,0,0] – and the other one is (B - A) – our new vector.
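A sketch of that last point (again plain Python, not MAXScript): subtracting A from B yields exactly the offset that carries A to B.

```python
# B - A is the position offset that moves A to B: A + (B - A) == B.
def vec_sub(b, a):
    return [b[0] - a[0], b[1] - a[1], b[2] - a[2]]

A = [2.0, 1.0, 0.0]
B = [1.0, 2.0, 0.0]
offset = vec_sub(B, A)                       # [-1.0, 1.0, 0.0]
print([a + o for a, o in zip(A, offset)])    # → [1.0, 2.0, 0.0], i.e. B
```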

DISCLAIMER: All scripts and snippets are provided as is under Creative Commons Zero (public domain, no restrictions) license. All the diagrams and presentations on the page are made using GeoGebra and are made available under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike license. The author and this blog cannot be held liable for any loss caused as a result of inaccuracy or error within these web pages.

by Vojtěch Čada (noreply@blogger.com) at April 27, 2017 08:37 AM


A Quick Checkin- Clion, Unreal and Mac

Super quick check-in. Hello world! For the last year or so, I've been working with Echtra Inc. on a great project as a Tools Engineer/Technical Artist guy.

The project is using the Unreal Engine, which I like more and more every day. On Windows, C++ plus Visual Studio Pro and Visual Assist is a great combo, and I happily churn through my daily tasks without fighting the tools too much.

Not so on my Mac at home. Programming in Unity on a Mac is great! Mono Develop isn't amazing, but it isn't terrible. But Unreal on a Mac. I want it to be fun, I want it to be possible, but I just can't get myself to like, let alone enjoy, XCode.

On that, for anyone thinking "well, you could just use blueprints..." etc, I feel it's too much of a shackle to not be able to just dive into the guts of it. C++ or bust.

So anyway, I recently adopted PyCharm at work and really enjoyed using it for my Python tools. I noticed that JetBrains also made an IDE called CLion, and they also had recently got it running with Unreal, so I thought, what the hell, why not.

Turns out their documentation is missing a couple of important notes that maybe they take for granted, but after a couple of forum dives I managed to get my little test project compiling and running.

So what was missing?

Something isn't set...

Once I ran through their setup scripts, these were the things I needed to double-check.

  • Make sure Mono is installed and up to date, and that the mono command is available in the terminal.
  • Once you generate your CLion project via the editor (make sure to follow the instructions here) you need to update the generated configs with paths to the editor executable. 
  • eg: /Users/Shared/Epic Games/UE_4.15/Engine/Binaries/Mac/UE4Editor.app/Contents/MacOS/UE4Editor
  • Finally, now that it's pointing to the editor correctly, add an absolute path to your project's .uproject file in the project arguments. 
If everything went well, you should now be able to build and run your project from within CLion. Bye bye, XCode. I'll probably post some time in the future about how I'm finding CLion. Well, it compiles, and that's a start...

by Peter Hanshaw (noreply@blogger.com) at April 27, 2017 06:06 AM


April 25, 2017

Real Time Global Illumination

 
In keeping with a lot of the older posts on this blog, I thought I'd write about the realtime GI system I'm using in a project I'm working on. It's a completely fresh start from the GI stuff I've written about previously. Previous efforts were based on volume textures, but dealing with the sampling issues is a pain in the ass, so I've switched to good old-fashioned lightmaps. This is all a lot of effort to go to, so why bother? The short answer is I love the way it looks. As a bonus it simplifies the lighting process, and there's a subtlety to the end results that is very hard to achieve without some sort of physically based light transport. A single light can illuminate an entire scene, and the bounce light helps ground and bind all the elements together.
 
The process can be divided into five stages: lightmap UVs, surfel creation, surfel clustering, visibility sampling and realtime update. The clustering method was inspired by this JCGT article; however, I'm not using spherical harmonics, and I generate surfels and form-factor weights differently. The JCGT article is fantastic and well worth a read.

Before you run off, here it is in action.



 Lightmap UVs

The lighting result is stored in a lightmap so the first step is a good set of UVs. These lightmaps are small and every pixel counts so you have to be pretty fussy about how the UVs are laid out. UV verts are snapped to pixel centers and there needs to be at least one pixel between all charts in order to prevent bilinear sampling from sampling incorrect charts. The meshes are unwrapped in Blender then packed via a custom command line tool. This uses a brute force method that simply tests each potential chart position in turn, for simple scenes and pack regions up to 256x256 the performance is acceptable.
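A toy occupancy-grid version of such a brute-force packer might look like this (the names and the grid representation are my own illustration; the real tool also keeps a one-pixel gutter between charts, which is omitted here for brevity):

```python
# Brute-force chart packing: for each chart (a w×h pixel rectangle), scan
# candidate positions in the target region and take the first free spot.
def pack(charts, size):
    """charts: list of (w, h); returns a list of (x, y) placements, or
    None for charts that don't fit."""
    grid = [[False] * size for _ in range(size)]
    positions = []
    for w, h in charts:
        placed = None
        for y in range(size - h + 1):
            for x in range(size - w + 1):
                # Accept the first position where every covered pixel is free.
                if all(not grid[y + j][x + i] for j in range(h) for i in range(w)):
                    placed = (x, y)
                    for j in range(h):
                        for i in range(w):
                            grid[y + j][x + i] = True
                    break
            if placed:
                break
        positions.append(placed)
    return positions

print(pack([(2, 2), (2, 1)], 4))  # → [(0, 0), (2, 0)]
```

Testing every position like this is quadratic in the region size, which matches the post's observation that it's only acceptable for simple scenes and pack regions up to 256x256.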



  

Surfels and Clustering

Next up we have to divide the scene into surfels (surface elements) and then cluster those surfels into a hierarchy. At runtime these surfels are lit and the lighting results are propagated up the hierarchy. This lighting information is then used to update the lightmap.
 

Surfel placement plays a big part in the quality of the illumination, and I've been through a few iterations. Initially I tried random placement with rejection if a surfel was too close to its neighbours, but this was hellishly slow. I also tried a 3D version of this, which was much faster, but looking at the results I felt the coverage could be better. Particularly around edges and on thin objects, the neighbour-rejection techniques would often leave gaps that I felt could be filled. This seemed like it could be addressed by relaxing the points, but I wanted to try something else.

I decided to try working in 2D using the UVs, which in this case are stretch-free, uniformly scaled and much easier to work with. The technique I settled on first generates a high-density, evenly distributed set of points on each UV chart. N points are selected from this set and used as initial surfel locations, and these locations are then refined via k-means clustering.

This results in a set of well-spaced surfels that accurately approximate the scene geometry, and it makes it easy to specify the desired number of surfels. For each chart, N is simply

(chart_area / total_area) * total_surfel_count
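In other words, each chart's surfel budget is proportional to its share of the total UV area. A quick sketch with made-up chart areas:

```python
# Hypothetical UV-space chart areas; the 1008-surfel total matches the
# timing figures quoted later in the post.
chart_areas = [4.0, 1.0, 3.0]
total_surfel_count = 1008
total_area = sum(chart_areas)
counts = [round(area / total_area * total_surfel_count) for area in chart_areas]
print(counts)  # → [504, 126, 378]
```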


The initial high density point distribution.
Surfel creation via k-means clustering of the high density point distribution.

These surfels are then clustered via hierarchical agglomerative clustering which repeatedly pairs nearby surfels until the entire surfel set is contained in a binary tree. Distance, normal, UV chart and tree balancing metrics help tune how the hierarchy is constructed. I'm still experimenting with these factors.

Hierarchical agglomerative clustering in action.
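A minimal sketch of the agglomerative idea, merging by distance only (the post's actual metric also weighs normals, UV charts and tree balance, all omitted here):

```python
import itertools
import math

def cluster(surfels):
    """Repeatedly merge the closest pair of nodes until one binary tree
    remains. A leaf is (position, None, None); an inner node stores the
    midpoint of its two children."""
    nodes = [(p, None, None) for p in surfels]
    while len(nodes) > 1:
        # Find the pair of nodes with the smallest positional distance.
        i, j = min(itertools.combinations(range(len(nodes)), 2),
                   key=lambda ij: math.dist(nodes[ij[0]][0], nodes[ij[1]][0]))
        a, b = nodes[i], nodes[j]
        mid = tuple((x + y) / 2 for x, y in zip(a[0], b[0]))
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)]
        nodes.append((mid, a, b))
    return nodes[0]

root = cluster([(0, 0), (1, 0), (10, 0), (11, 0)])
print(root[0])  # → (5.5, 0.0), the midpoint of the two pair midpoints
```

The naive closest-pair search is O(n³) overall; a real implementation would use a spatial structure or a priority queue of candidate pairs.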

Lightmap visibility sampling

Influencing clusters for the highlighted lightmap texel.
Once the surfel hierarchy has been constructed, each lightmap texel needs to locate the surfels that contribute most to its illumination. Initially I used an analytic form factor, but this would sometimes cause lighting flare-outs if a texel and surfel were too close. Clamping the distance worked but felt like a bit of a hack, so I switched to simply casting a bunch of cosine-weighted rays about the hemisphere. Each ray hit locates the nearest surfel, and the final form-factor weight for each surfel is simply

 num_hits / total_rays

Once all rays have been cast, the form-factor weights are propagated up the hierarchy. The hierarchy is then refined by successively selecting the children of the highest-weighted cluster. At each iteration the highest-weighted cluster is removed and its two children are selected in its place. This process repeats until a maximum number of clusters is selected or no further subdivision can take place. The texel then has a set of clusters and weights that best approximate its lighting environment.
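A sketch of that refinement step, with clusters as illustrative (weight, children) tuples, where children is None for a leaf surfel or a (left, right) pair (simplified: it stops at the first leaf that reaches the top of the heap):

```python
import heapq

def refine(root, max_clusters):
    # Max-heap via negated weights; id() breaks ties so nodes never compare.
    heap = [(-root[0], id(root), root)]
    while len(heap) < max_clusters:
        w, _, node = heapq.heappop(heap)
        if node[1] is None:                    # leaf: no further subdivision
            heapq.heappush(heap, (w, id(node), node))
            break
        for child in node[1]:                  # replace cluster with its children
            heapq.heappush(heap, (-child[0], id(child), child))
    return [node for _, _, node in heap]

tree = (0.8, ((0.5, None), (0.3, None)))
print(sorted(n[0] for n in refine(tree, 2)))   # → [0.3, 0.5]
```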

Lighting update

The realtime lighting phase consists of several stages. First, each surfel's direct lighting is evaluated for each direct light source; visibility is accounted for by tracing a single ray from the surfel's position to the light source. The lighting result from the previous frame is also added to the current frame's direct lighting to simulate multiple bounces. There's a bit of a lag here, but it's barely noticeable. Lighting values for each cluster are then updated by summing the lighting of its two children.

Each active texel in the lightmap is then updated by accumulating the lighting from its set of influencing clusters. The lightmap is then ready to be used.
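That per-texel update is just a weighted sum; as a sketch (all names and values are illustrative):

```python
# Accumulate the influencing clusters' lighting, scaled by the
# precomputed form-factor weights (num_hits / total_rays).
def shade_texel(cluster_light, weights):
    r = g = b = 0.0
    for (lr, lg, lb), w in zip(cluster_light, weights):
        r += lr * w
        g += lg * w
        b += lb * w
    return (r, g, b)

light = [(1.0, 0.9, 0.8), (0.2, 0.2, 0.2)]
weights = [0.75, 0.25]  # e.g. 6/8 and 2/8 ray hits
print(shade_texel(light, weights))  # ≈ (0.8, 0.725, 0.65)
```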

Direct light only.
Direct light with one light bounce.

Direct light with multiple light bounces.
Timings for each stage (i7-6700K @ 4.0GHz)


  Surfel illumination (1008 surfels):               0.36ms
  Sum Clusters (2015 clusters):                     0.08ms
  Sum Lightmap texels (6453 texels * 90 clusters):  0.64ms

Environmental Lighting

Environment lighting is provided by surfels positioned in a sphere around the scene. These are treated identically to geometry surfels except for the lighting update where a separate illumination function is used. Currently it's a simple two colour blend but could just as easily be a fancy sky illumination technique or an environment map.




To finish up here are some more examples without any debug overlay. These were taken with an accumulation technique that allows for soft shadows and nice anti-aliasing.





by Stefan Kamoda (noreply@blogger.com) at April 25, 2017 04:58 PM


Rust animation test

Haven't had much free time the last few months but I did manage to let this rendered animation test grow slightly out of control. Big thanks to Stephan Schutze (www.stephanschutze.com, Twitter: @stephanschutze) for the awesome audio work.




Concept and design work


This little guy started out as a bunch of thumbnail sketches (below left) well over a year ago, but the design also shares some similarities with an even older concept (below right).


Eventually I got around to modelling, and although the concepts don't really show it, I drew a lot of inspiration from the Apple IIe and Amiga 500 computers of my misspent youth. The 3D paint-over below shows an early version with only one antenna. The final version has a second antenna, which was an accident; I kept it when I realised they could work almost like ears and add a bit more personality.


And finally, a snippet from an old mock comic book panel, just for the hell of it :)

by Stefan Kamoda (noreply@blogger.com) at April 25, 2017 01:19 PM


April 22, 2017

Ditching comments

Hi folks,

I just wanted to let you know that I'm ditching Disqus (the service powering comments) from this website in an effort to eliminate trackers. I silently removed Google Analytics some time ago for the same reason, but this time hurts a bit more, because comments are the way we have to interact with each other, and I felt it deserved an explanation.

First things first: I love receiving your feedback. Every time I get a message/email from someone because of an article or one of my open source projects it totally makes my day; even "harsh" comments push me to do better by correcting some misconception or learning something new. As a self-taught developer I owe a lot to the community, and the whole purpose of having a website is, in some way, to pay it back by sharing/helping newcomers and pushing myself by learning from your feedback. Big thanks to all of you for your support through the years.

That said, I also have strong concerns about online privacy and the state of the web. I surely take measures to stay away from ads/trackers by using all sorts of privacy-oriented plugins, extensions, VPNs and whatnot; but I feel it is totally unfair on my part to push trackers onto anyone in exchange for the ability to leave a comment on this website, or to feed my ego by checking stats.

There might be some of you thinking: what's wrong with ads/trackers? I have nothing to hide!

Well, most cloud-based services like Google Analytics or social widgets (Facebook's likes and whatnot) require the inclusion of a little script to provide the service. Said script is also used to track the visitor, building a unique profile (which is a key piece of targeted ads... and who knows what else – you have no say in what that data is used for). This alone is horrifying, but consider that around 60% of the web uses Google Analytics, and social widgets are rapidly becoming omnipresent, allowing these companies to literally follow you from website to website, reconstructing your whole browser history without your knowledge or agreement.

This is wrong, we are not just talking about some script slowing down the website by adding some extra network requests, it's about respecting your freedom! Websites are literally trading your digital persona without your knowledge.

I know this website alone makes no difference to the Googles and Facebooks of the world, but it's about integrity and acting according to my beliefs... even if it means giving up on some convenient services along the way.

Cheers!


by Cesar Saez at April 22, 2017 02:00 PM


Blogger blues

Wherein our author uses blogger to post a blog post blogging about how much he dislikes blogger.
It's late on a Sunday night and I need to get this off my chest.

I really have come to loathe Blogger. The sluggish, overly complicated, JS-heavy theme; the sluggish, too-complex-for-speed-but-too-simple-for-interesting-stuff editor; and the way it stuffs stylesheet info into the RSS feed all come to mind. But overall... it's just gotten on my nerves.

So, I'm probably going to transition the blog over to something else.  My current leading candidate for a site generator is Pelican, a Python based static html site generator which seems to be powerful enough for my not-too-complex needs.  Jekyll is another candidate but all things being equal I'd rather stick with a Python-based setup and the final output will be pretty much the same.

I'm a tad nervous about what happens to old links and traffic so I assume that I'll probably transition over gradually with duplicate postings for a while. If any of you have done something similar in the past I'd be curious to hear about how it went.

In the meantime, I've been dealing with the transition in typical TA fashion. I hacked up a script to download all of the existing blog posts as XML, then used the html2text module from the cheese shop to convert the HTML from the posts into markdown text. I'm still going to have to hand-finish every piece, cleaning up dead links and missing images and so on: I'm sure it'll be a TA-style spit'n'bailing-wire party for a while yet.

And I'm all ears: if anybody has more suggestions for site generators, or a reason to go with something other than a static site on github.io, please let me know in the comments!

update:  the new site is here


by Steve Theodore (noreply@blogger.com) at April 22, 2017 12:53 AM


The New Hotness


I've finally completed rolling over to a new, self-hosted blog platform!

The process took a bit longer than I wanted, mostly because web development remains a messy, iterative process – at least for me. Since I ended up unifying the blog, the old Character Rigger's Cookbook and an old markdown wiki, I had to write a lot of little scripts to groom the old content into a consistent format and linking strategy. Add in more than a modicum of CSS noodling and whatnot, and my 'couple of weekends' project turned into a couple of months.

However, all that is behind me now, and all my future updates are going to be coming out at theodox.github.io (you can also use www.theodox.com). If you're subscribed to the current feed, you should switch over to either http://theodox.github.io/feeds/atom.xml or http://theodox.github.io/feeds/rss.xml, depending on your reader. One of the nice side effects of the switch is that the new feeds are much cleaner than Blogger's – no more CSS gibberish instead of article summaries, thank you very much!

I'm going to leave this site intact, and I'll try to keep monitoring comments and so on here, but the new site is going to be where all the new stuff comes out. I'm also going to change the blog.theodox.com redirect so it goes to the new site, so if you're using that in a blogroll you won't have to update it.

PS. I had to touch a lot of content during the migration: there are about 150 blog posts, several dozen wiki pages, and a bunch of articles that all had to be repointed, and I'd be pretty surprised if nothing odd slipped through. Please let me know using the comments at the new site so I can fix up anything that's confusing or misleading.

So, here one more link to the new site, just in case.  Hope to see you there!

by Steve Theodore (noreply@blogger.com) at April 22, 2017 12:42 AM