Your online Softimage Educational Resource

The blog has been online for more than 4 years and there’s still not a single post even remotely related to the delicious brew called coffee… Perhaps there will be one someday, but in the meantime you can read the articles about Softimage. Most of the material consists of tutorials and Q&As I’ve written for 3D World Magazine between 2003 and today. If you have any questions, please don’t hesitate to send me an email.


Thanks to Letterbox Animation Studios for hosting all the scene files.

Make sure you visit their site Redi-Vivus.com for hundreds of hours of free XSI video tutorials.

Monday, July 29, 2013

Automatic weightmaps for the left and right side of a head using ICE



Start by downloading and opening the scene ICE_Weightmaps.scn from https://dl.dropboxusercontent.com/u/3834689/CaffeineAbuse/LeftAndRight_Weightmap.zip. Before you can assign any weights using ICE you need an actual weightmap to assign them to. Select the Head object and from the Get > Property menu choose Weight Map. In the PPG, click the Rename button and rename it to Weight_Map_Left. Create a second weightmap and rename it Weight_Map_Right.

With the head still selected, press [Alt] + 9 to open an ICE Tree view and from the Create menu choose ICE Tree. Get a Get Data node and enter self.PointPosition as the reference. To assign the proper weights to the weightmap you need to find where along the x-axis each of the points is located. If it’s higher than a certain value (zero in this case) it’s part of the left side of the head, and if it’s lower (a negative number in this case) it’s part of the right side. While the Get PointPosition node does get the position of the points on the geometry, it outputs them as a 3D vector (the X, Y and Z components per point). So you’ll need to convert the 3D vector to output the X, Y and Z components separately.
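If it helps to see the idea outside of ICE, here is a minimal plain-Python sketch (not Softimage code) of what the position-to-scalar split amounts to, using made-up point positions:

# Example per-point positions (made up); the last point sits on the centre line.
point_positions = [(-3.23, 1.2, 0.4), (3.23, 1.1, -0.2), (0.0, 1.5, 0.1)]

# "3D Vector to Scalar": expose the X component of each position separately.
x_components = [p[0] for p in point_positions]

# X above zero belongs to the left side of the head, below zero to the right.
is_left_side = [x > 0.0 for x in x_components]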

Since you’re calculating the distance between the points at the furthest left and right sides, it doesn’t matter whether the object is placed at the center of the scene or not. The distance between the minimum and maximum is always the same either way.

Get a 3D Vector to Scalar node and connect the Get PointPosition node to its input. By getting the position at the furthest left (maximum) and furthest right (minimum) and interpolating between the two you can assign a value to each point based on where it is located within that range. Sort of a percentage or gradient from furthest right to furthest left, if you will.

Get a Get Array Maximum and a Get Array Minimum node and connect the X output of the 3D Vector to Scalar node to their respective Array inputs. Get a Linear Interpolate node and connect the Array Minimum to the First input and the Array Maximum to the Second input. The weights in the weightmap range from 0 to 1, so you need to rescale the current values, which span from -3.230 to 3.230 (their global x positions), to fit this range. Get a Rescale node and connect the Linear Interpolate node to its input. While you could enter the current minimum and maximum positions manually as the Source Start and End, your ICE tree would then only work on this specific model, which really isn’t that useful. However, you already know the position of all the points, so you can automatically get the highest and lowest number and plug that into the Rescale node. To do so, get a Get Maximum in Set and a Get Minimum in Set node and connect the X output of the 3D Vector to Scalar node to their respective Value inputs. Then connect the Maximum in Set to the Source End of the Rescale node and the Minimum in Set to the Source Start.
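In plain Python the Rescale step boils down to the remap below. This is only a sketch of the math, with x_min and x_max standing in for the Get Minimum in Set and Get Maximum in Set outputs:

def rescale(value, src_start, src_end, tgt_start=0.0, tgt_end=1.0):
    # Remap value from [src_start, src_end] into [tgt_start, tgt_end].
    t = (value - src_start) / (src_end - src_start)
    return tgt_start + t * (tgt_end - tgt_start)

x_components = [-3.23, -1.0, 0.0, 1.0, 3.23]          # example X positions
x_min, x_max = min(x_components), max(x_components)   # Minimum/Maximum in Set
gradient = [rescale(x, x_min, x_max) for x in x_components]
# -> [0.0, 0.345, 0.5, 0.655, 1.0]: a right-to-left gradient across the head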

The FCurve node not only enables you to create a smooth transition between the left and right weightmaps, it also defines where each side starts and ends. If your head is asymmetrical, simply nudge the keyframes to the left or right to align them with the geometry.

Get an FCurve node and connect the output of the Rescale node to its In input. Open the FCurve PPG and select the key on the left. Right-click on the key and from the menu choose Key Properties. Set the Frame value to 0.49. Click the Next Key button and change the Frame value to 0.51. This will create a narrow, smooth transition between the points on the left and right sides of the head, with one side ending up with a value of 0 and the other with 1. Get a Set Data node, enter self.MyWeights as the reference and connect the output of the FCurve to its input. Then connect the Set Data node to Port1 of the ICETree node.
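The effect of moving the two keys to 0.49 and 0.51 can be sketched as a clamped ramp around the middle of that 0-to-1 gradient (plain Python again, with a linear blend standing in for the actual curve shape):

def centre_split(t, lo=0.49, hi=0.51):
    # Below the first key -> 0, above the second key -> 1,
    # with a narrow blend for points sitting right on the centre line.
    if t <= lo:
        return 0.0
    if t >= hi:
        return 1.0
    return (t - lo) / (hi - lo)

gradient = [0.0, 0.345, 0.5, 0.655, 1.0]          # values from the Rescale step
my_weights = [centre_split(g) for g in gradient]  # stored as self.MyWeights
# -> [0.0, 0.0, 0.5, 1.0, 1.0]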

Get a Get Data node and enter self.MyWeights as the reference. Get a Set Data node, open its PPG and click the Explore button. In the explorer, expand the tree Head > Polygon Mesh > Clusters > WeightMap Cls > Weight_Map_Left and choose Weights. Close the PPG and connect the Get self.MyWeights node to the Weights input of the Set Data node. Connect the Set Data node to the New (Port2)… input of the ICETree. To assign the weights for the right-side weightmap, all you have to do is reverse the values. Get a Rescale node and connect the output of the Get self.MyWeights node to its Value input. Open the PPG and change the Target Start to 1 and the Target End to 0. Get a Set Data node and repeat the previous step, but select the Weights of Weight_Map_Right and connect the output of the Rescale node to it. Connect the Set Data node to the New (Port3)… input of the ICETree.
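In sketch form, the second Rescale node (Target Start 1, Target End 0) simply flips the buffered values, so the right-hand map is one minus the left-hand map:

left_weights = [0.0, 0.0, 0.5, 1.0, 1.0]         # self.MyWeights, fed to Weight_Map_Left
right_weights = [1.0 - w for w in left_weights]  # the flipped values, fed to Weight_Map_Right
# -> [1.0, 1.0, 0.5, 0.0, 0.0]: the two maps always sum to 1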

Quick tip
Whenever you modify or assign weights to weightmaps using ICE, you should always use a custom “buffer attribute” to do all your calculations. Then get the custom attribute and set the actual weights of the weightmap at the bottom of your ICE tree.

Once you’re happy with the weightmaps, it’s a good idea to freeze the geometry to avoid accidentally changing the weights. Please note that this will delete your ICE tree, so you might want to save the scene under a separate name for future reference.



Friday, April 12, 2013

Creating HDR environment maps from a 3D scene

Generating an HDR environment from your 3D scene is in fact not that much different from how you would go about it in real life. The two most common options are to either shoot a chrome ball or to use multiple images to construct a spherical panorama. While there are 3rd-party plugins that enable you to render the scene through a spherical lens, you can just as easily create a virtual chrome ball to serve your needs.

The project files used in this tutorial can be found here: https://dl.dropboxusercontent.com/u/3834689/CaffeineAbuse/HDR_ProjectFiles.zip



Create the Chrome Ball 
Open the HDR_Environment.scn scene from the project files. Select the ChromeBall and from the Get > Material menu choose Constant. In the Constant Material PPG, switch to the Transparency/Reflection tab and set the Reflection Mix Color to pure white (R:1, G:1, B:1). This will turn the sphere into a perfect, 100% reflective chrome ball. However, as the chrome ball is part of the scene, it will also interfere with the lights, shadows and so on, which probably is not what you want.


Exclude the Chrome Ball
With the chrome ball still selected, press [F3] to open a mini browser and click on the Visibility icon to open the PPG. Uncheck the Shadow Caster and Shadow Receiver checkboxes. While this will exclude it from casting and receiving shadows, it still affects the final gathering in the scene. To avoid this, uncheck the Caster and Visible in Sampling attributes in the Final Gathering section of the PPG.

Generate the environment map
With the ball still selected, from the Get > Property menu choose Render Map. In the Format section of the PPG, uncheck the Square checkbox and set the resolution to 1024 x 512. Click the New button next to the UV title and select Spherical to create a texture projection. Since you’re generating an HDR image you obviously need to use an image format that supports it, so change the output format to OpenEXR. By default, reflections are disabled in the render map generation, and while this is normally what you want, it defeats the whole purpose of the chrome ball. In the Disable Surface Properties section, make sure you uncheck the Reflection checkbox. Click the Regenerate Maps… button and you’re done.
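For reference, the 2:1 resolution matches a spherical (latitude/longitude) layout, which spreads 360 degrees of longitude and 180 degrees of latitude across the map. Below is a small Python sketch of one common direction-to-UV convention for such a map; it is only illustrative, not Softimage code, and the exact orientation may differ from the Spherical projection in XSI:

import math

def direction_to_latlong_uv(x, y, z):
    # Normalize the direction so asin stays within its valid range.
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude spread over the full 360 degrees
    v = 0.5 - math.asin(y) / math.pi               # latitude spread over 180 degrees
    return u, v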




Sunday, January 20, 2013

Creating strands between two different objects

Once you’ve completed the following steps you’ll effectively have generated strands between the objects, though they won’t show up in a rendered image as you haven’t defined any size or shape yet. To do this, simply get a Set Data node, right-click on the Value port of the node and choose Add Port After twice so you have three values. Open the PPG, enter self.Size as the Reference and set the size to 0.1 or so. Enter self.Shape as Reference1 and choose Cylinder. To loft the shape along the strand rather than using individual shapes you’ll need to add one last attribute. Enter self.StrandDeform as Reference2, press enter and check the self.StrandDeform checkbox. Finally, connect the Execute output of the Set Data node to Port2 of the ICETree node.

The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/Strands_between_points.zip


Set the starting points
Open the scene Strands_between_points_Start.scn. From the Get > Primitive > Point Cloud menu choose Empty and press [Alt] + [9] to open an ICE Tree view. From the Create menu choose ICE Tree. Press [8] to open an explorer and drag and drop the StartPosition grid into the ICE tree. Get a Get Data node, open its PPG and enter .PointPosition as the Reference. Get another Get Data node and enter .kine.global as the Reference. Connect the Out Name output of the Get StartPosition node to the In Name input of the two Get Data nodes. The point positions are stored in local space, meaning that you have to apply the object’s transformation in order to retrieve the actual position of each point. Get a Multiply Vector by Matrix node and connect the Value output of the Get .PointPosition node to the Vector input and the Value output of the Get .kine.global node to the Matrix input.
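The local-to-global conversion the Multiply Vector by Matrix node performs can be sketched in plain Python like this, treating the position as a point (implicit w = 1) and using a row-vector convention; in the actual tree the matrix comes from .kine.global:

def multiply_vector_by_matrix(p, m):
    # p is a local-space (x, y, z) position, m a 4x4 transform as nested lists.
    x, y, z = p
    return (
        x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0],
        x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1],
        x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2],
    )

# A grid translated by (2, 0, -1): a local point at (1, 0, 0) ends up at (3, 0, -1).
translated = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 0, -1, 1]]
print(multiply_vector_by_matrix((1.0, 0.0, 0.0), translated))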


The end point
Get an Add Point node and connect the Result output of the Multiply Vector by Matrix node to the Positions1 input. Then connect the Add output of the Add Point node to Port1 of the ICETree node. This will create a point at each of the grid’s vertices, and these points will be used to create the strands. Get a Build Linearly Interpolated Array node and a Get Data node. Enter self.PointPosition as the Reference and connect the Value output to the Start input of the Build Array node. Open an explorer and drag and drop the EndPosition grid into the ICE tree. Repeat the previous step and connect the PointPosition, kine.global and Multiply Vector by Matrix nodes.
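What the Build Linearly Interpolated Array node produces for each point can be sketched as an evenly spaced blend from the start position to the end position (plain Python; the count argument here is just illustrative):

def build_linearly_interpolated_array(start, end, count=10):
    # Evenly blend from start to end; each entry becomes one strand position.
    positions = []
    for i in range(count):
        t = i / float(count - 1)
        positions.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return positions

strand = build_linearly_interpolated_array((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), count=5)
# -> [(0, 0, 0), (0, 1.25, 0), (0, 2.5, 0), (0, 3.75, 0), (0, 5, 0)]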



Create the strands
While you’ve just defined the end position for the strands, you can’t feed this information directly into the Build Array node because the point positions derive from different component types (points vs. vertices). To fix this, get a Switch Context node and connect the Multiply Vector by Matrix node to its Value input. Then connect the Result of the Switch Context node to the End input of the Build Array node. Get a Set Data node, enter self.StrandPosition as the Reference and connect the Result of the Build Array node to its input. Connect the Execute output of the Set Data node to the On Creation1 input of the Add Point node.

