Your online Softimage Educational Resource

The blog has been online for more than 4 years and there’s still not a single post even remotely related to the delicious brew called coffee… Perhaps there will be one someday, but in the meantime you can read the articles about Softimage. Most of the material is tutorials and Q&As I’ve written for 3D World Magazine sometime between 2003 and today. If you have any questions, please don’t hesitate to send me an email.


Thanks to Letterbox Animation Studios for hosting all the scene files.

Make sure you visit their site Redi-Vivus.com for 100s of hours of free XSI video tutorials.

Monday, July 29, 2013

Automatic weightmaps for the left and right side of a head using ICE



Start by downloading and opening the scene ICE_Weightmaps.scn from https://dl.dropboxusercontent.com/u/3834689/CaffeineAbuse/LeftAndRight_Weightmap.zip. Before you can assign any weights using ICE you need an actual weightmap to assign them to. Select the Head object and from the Get > Property menu choose Weight Map. In the PPG, click the Rename button and rename it to Weight_Map_Left. Create a second weightmap and rename it Weight_Map_Right. With the head still selected, press [Alt] + [9] to open an ICE Tree view and from the Create menu choose ICE Tree. Get a Get Data node and enter self.PointPosition as the reference. To assign the proper weights to the weightmap you need to find where along the x-axis each of the points is located. If it’s higher than a certain value (zero in this case) it’s part of the left side of the head, and if it’s lower (a negative number in this case) it’s part of the right side. While the Get PointPosition node does get the position of the points on the geometry, it outputs them as a 3D vector (the X, Y and Z position per point). So you’ll need to convert the 3D vector to output the X, Y and Z components separately.

Since you’re calculating the distance between the points at the furthest left and right side, it doesn’t matter whether the object is placed at the center of the scene or not. The distance between the minimum and maximum is the same either way.

Get a 3D Vector to Scalar node and connect the Get PointPosition node to its input. By getting the position at the furthest right (maximum) and furthest left (minimum) and interpolating between the two, you can assign a value to each point based on where it is located within that range. Sort of a percentage or gradient from furthest left to right, if you will.

Get a Get Array Maximum and a Get Array Minimum node and connect the X output of the 3D Vector to Scalar node to their respective Array inputs. Get a Linear Interpolate node and connect the Array Minimum to the First input and the Array Maximum to the Second input. The weights in the weightmap range from 0 to 1, so you need to rescale the current values, which span from -3.230 to 3.230 (their global x positions), to fit this range. Get a Rescale node and connect the Linear Interpolate node to its input. While you could enter the current minimum and maximum positions manually as the Source Start and End, your ICE tree would then only work on this specific model, which really isn’t that useful. However, you already know the position of all the points, so you can automatically get the highest and lowest number and plug those into the Rescale node. To do so, get a Get Maximum in Set and a Get Minimum in Set node and connect the X output of the 3D Vector to Scalar node to their respective Value inputs. Then connect the Maximum in Set to the Source End of the Rescale node and the Minimum in Set to the Source Start.
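If you want to sanity-check what the Min/Max and Rescale nodes are doing, here’s a minimal Python sketch of the same remap. The sample X positions are hypothetical; in the actual ICE tree the Minimum/Maximum in Set values feed the Source Start/End automatically:

```python
def rescale(value, src_start, src_end, tgt_start=0.0, tgt_end=1.0):
    # Linearly remap value from [src_start, src_end] to [tgt_start, tgt_end],
    # mirroring the ICE Rescale node.
    t = (value - src_start) / (src_end - src_start)
    return tgt_start + t * (tgt_end - tgt_start)

# Hypothetical X positions of the head's points.
xs = [-3.230, -1.5, 0.0, 2.1, 3.230]
weights = [rescale(x, min(xs), max(xs)) for x in xs]
# One extreme maps to 0, the other to 1, and a point
# at x = 0 lands exactly halfway (0.5).
```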

The FCurve node not only enables you to create a smooth transition between the left and right weightmap, it also defines where each side starts and ends. If your head is asymmetrical, simply nudge the keyframes to the left or right to align them with the geometry.

Get an FCurve node and connect the output of the Rescale node to its In input. Open the FCurve PPG and select the key at the left. Right-click on the key and from the menu choose Key Properties. Set the Frame value to 0.49. Click the Next Key button and change the Frame value to 0.51. This will create a smooth transition between the points of the left side (with a value of 0) and the right side (with a value of 1) of the head. Get a Set Data node, enter self.MyWeights as the reference and connect the output of the FCurve node to its input. Then connect the Set Data node to Port1 of the ICETree.
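The effect of those two keys can be sketched like this (a linear ramp is used here for simplicity; the actual FCurve interpolation depends on the key settings):

```python
def side_profile(t, lo=0.49, hi=0.51):
    # 0 on one side, 1 on the other, with a narrow blend zone
    # between the two keys so the seam doesn't show.
    if t <= lo:
        return 0.0
    if t >= hi:
        return 1.0
    return (t - lo) / (hi - lo)
```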

Get a Get Data node and enter self.MyWeights as the reference. Get a Set Data node, open its PPG and click the Explore button. In the explorer, expand the tree Head > Polygon Mesh > Clusters > WeightMap Cls > Weight_Map_Left and choose Weights. Close the PPG and connect the Get self.MyWeights node to the Weights input of the Set Data node. Connect the Set Data node to the New (Port2)… input of the ICETree. To assign the weights for the right side weightmap, all you have to do is reverse the values. Get a Rescale node and connect the output of the Get self.MyWeights node to its Value input. Open the PPG and change the Target Start to 1 and the Target End to 0. Get a Set Data node and repeat the previous step, but select the Weights for the Weight_Map_Right. Connect the Set Data node to the New (Port3)… input of the ICETree.
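Reversing the values with a Rescale node whose Target Start is 1 and Target End is 0 amounts to a simple per-weight inversion (sample weights below are made up):

```python
left = [0.0, 0.25, 0.5, 1.0]     # hypothetical self.MyWeights values
right = [1.0 - w for w in left]  # what Rescale with targets 1 -> 0 produces
```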

Quick tip
Whenever you’re modifying or assigning weights to weightmaps using ICE, you should use a custom “buffer attribute” to do all your calculations. Then get the custom attribute and set the actual weights of the weightmap at the bottom of your ICETree.

Once you’re happy with the weightmaps it’s a good idea to freeze the geometry to avoid accidentally changing the weights. Please note that this will delete your ICE Tree so you might want to save the scene under a separate name for future reference.



Friday, April 12, 2013

Creating HDR environment maps from a 3D scene

Generating an HDR environment from your 3D scene is in fact not that much different from how you would go about it in real life. The two most common options are to either shoot a chrome ball or to use multiple images to construct a spherical panorama. While there are 3rd party plugins which enable you to render the scene through a spherical lens, you can just as easily create a virtual chrome ball to serve your needs.

The project files used in this tutorial can be found here: https://dl.dropboxusercontent.com/u/3834689/CaffeineAbuse/HDR_ProjectFiles.zip



Create the Chrome Ball 
Open the HDR_Environment.scn scene from this issue’s CD. Select the ChromeBall and from the Get > Material menu choose Constant. In the Constant Material PPG, switch to the Transparency/Reflection tab and set the Reflection Mix Color to pure white (R:1, G:1, B:1). This will turn the sphere into a perfect, 100% reflective chrome ball. However, as the chrome ball is part of the scene it will also interfere with the lights, shadows, and so on, which probably is not what you want.


Exclude the Chrome ball
With the chrome ball still selected, press [F3] to open a mini browser and click on the Visibility icon to open the PPG. Uncheck the Shadow Caster and Shadow Receiver checkboxes. While this will exclude it from casting and receiving shadows, it still affects the final gathering in the scene. To avoid this, uncheck the Caster and Visible in Sampling attributes in the Final Gathering section of the PPG.

Generate the environment map
With the ball still selected, from the Get > Property menu choose Render Map. In the Format section of the PPG, uncheck the Square checkbox and set the resolution to 1024 x 512. Click the New button next to the UV title and select Spherical to create a texture projection. Since you’re generating an HDR image you obviously need to use an image format that supports it, so change the output format to OpenEXR. By default, reflections are disabled from the render map generation, and while this is something you normally want, here it counteracts the sole purpose of the chrome ball. In the Disable Surface Properties section, make sure to uncheck the Reflection checkbox. Click the Regenerate Maps… button and you’re done.




Sunday, January 20, 2013

Creating strands between two different objects

Once you’ve completed the following steps you’ll have effectively generated strands between the objects, though they won’t show up in a rendered image as you haven’t defined any size or shape yet. To do this, simply get a Set Data node, right-click on the Value port of the node and choose Add Port After twice so you have three values. Open the PPG, enter self.Size as the Reference and set the size to 0.1 or so. Enter self.Shape as Reference1 and choose Cylinder. To loft the shape along the strand rather than using individual shapes you’ll need to add one last attribute. Enter self.StrandDeform as Reference2, press enter and check the self.StrandDeform checkbox. Finally, connect the Execute output of the Set Data node to Port2 of the ICETree node.

The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/Strands_between_points.zip


Set the starting points
Open the scene Strands_between_points_Start.scn. From the Get > Primitive > Point Cloud menu choose Empty and press [Alt] + [9] to open an ICE Tree view. From the Create menu choose ICE Tree. Press [8] to open an Explorer and drag and drop the StartPosition grid into the ICE Tree. Get a Get Data node, open its PPG and enter .PointPosition as the Reference. Get another Get Data node and enter .kine.global as the Reference. Connect the Out Name output of the Get StartPosition node to the In Name input of the two Get Data nodes. The point positions are stored in local space, meaning that you have to add the object’s transformation in order to retrieve the actual position of each point. Get a Multiply Vector by Matrix node and connect the Value output of the Get .PointPosition node to the Vector input and the Value output of the Get .kine.global node to the Matrix input.
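In case you’re curious what the Multiply Vector by Matrix node actually computes, here’s a rough Python equivalent, assuming Softimage’s row-vector convention (position times 4x4 matrix, with an implicit w of 1):

```python
def transform_point(p, m):
    # p: (x, y, z) local position; m: 4x4 transform as nested lists.
    # Row-vector convention: result = [x y z 1] * m, keeping x, y, z.
    x, y, z = p
    return tuple(x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i]
                 for i in range(3))

# A matrix that only translates by (2, 3, 4) moves the point accordingly.
identity_translate = [[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [2, 3, 4, 1]]
```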


The end point
Get an Add Point node and connect the Result output of the Multiply Vector by Matrix node to the Positions1 input. Then connect the Add output of the Add Point node to Port1 of the ICETree node. This will create a point at each of the grid’s vertices, which will be used to create the strands. Get a Build Linearly Interpolated Array node and a Get Data node. Enter self.PointPosition as the Reference and connect the Value output to the Start input of the Build Array node. Open an Explorer and drag and drop the EndPosition grid into the ICE Tree. Repeat the previous step and connect the PointPosition, kine.global and Multiply Vector by Matrix nodes.



Create the strands
While you’ve just defined the end position for the strands, you can’t feed this information directly into the Build Array node because the point positions derive from different component types (points vs. vertices). To fix this, get a Switch Context node and connect the Multiply Vector by Matrix node to its Value input. Then connect the Result of the Switch Context node to the End Value of the Build Array node. Get a Set Data node, enter self.StrandPosition as the Reference and connect the Result of the Build Array node to its input. Connect the Execute output of the Set Data node to the On Creation1 input of the Add Point node.
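For reference, the Build Linearly Interpolated Array node fills each strand with evenly spaced positions between the start and end point. A small sketch of that behaviour (the point count and positions below are made up):

```python
def strand_positions(start, end, count):
    # count evenly spaced positions from start to end (inclusive),
    # mirroring Build Linearly Interpolated Array for one strand.
    return [tuple(s + (e - s) * i / (count - 1) for s, e in zip(start, end))
            for i in range(count)]
```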



Thursday, December 6, 2012

Using an animated map to define particle goals in Softimage

There’s a slight difference depending on which map you’re using. A texture map is essentially an image file connected to an object, and as such you need to tell ICE what to do with this information before you can make use of it. First you need to convert the RGB color of the image to a scalar value and store this information as a custom attribute. Weightmaps, on the other hand, are already stored as scalar values, and as such you can skip the first step and use the Weights attribute directly in step 2. Start by opening the scene Goal_using_TextureMap.scn.

The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/GoalLocation_using_AnimatedMaps.zip
Create a custom goal attribute
Select the Goal_Object and press [Alt] + [9] to open an ICE Tree. From the Create menu choose ICE Tree. Get a Get Data node, open its PPG and enter Self.NodeLocation as the reference. Get another Get Data node and enter Texture_Map as the reference. Connect the Value output of the Get Self.NodeLocation node to the Source input. Get a Color to Brightness node and connect the Value of the Get Texture_Map node to its Color input. Get a Set Data node, enter Self.GoalTextureMap as the reference and connect the Brightness output of the Color to Brightness node to its input. Connect the Set Data node to Port1 of the ICE Tree.
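The Color to Brightness node boils each RGB sample down to a single scalar. A common way to do that is a weighted luminance sum; the weights below are the Rec. 601 luma coefficients, used here as an assumption since the exact weighting the node uses isn’t covered by this tutorial:

```python
def brightness(r, g, b):
    # Weighted luminance (Rec. 601 coefficients, assumed):
    # green contributes most, blue least.
    return 0.299 * r + 0.587 * g + 0.114 * b
```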
Set the GoalLocation
Select the pointcloud and update the ICE Tree view. Get a Get Data node and enter Goal_Object as the reference. Get a Get Geometry Sample node, connect the Value output of the Get Goal_Object node to the Geometry input and then open its PPG. This node enables you to filter which part of the surface to use as the goal by extracting the weights from either the weightmap or the custom attribute created for the texture map. Click the Explore button in the Filter section of the PPG, expand the Input > Polygon Mesh tree and select the GoalTextureMap attribute.
Move towards goal
Get a Set Data node and enter Self.GoalLocation as the reference. Connect the Samples output of the Get Geometry Sample node to the Self.GoalLocation input of the Set Data node. Connect the Set Data node to the Execute on Emit1 input of the Emit from Geometry node. This will define the goal for each particle as it’s born. Once defined, you’ll use another node to actually move them there. Get a Move Towards Goal node and connect it to the Execute1 input of the Simulation Root node. Play back the animation and you’re done.




Wednesday, November 7, 2012

Wall of lights - How to set up a wall of animated light bulbs



Perhaps the most intuitive way to create different patterns for the lights switching on and off is to use a texture map, which in turn controls the lights based on the lightness of the images.
Nine times out of ten you’re overegging the pudding by adding actual light sources to the setup, as you could most likely get away with using a really bright material on the object. But there is that one time you do need it, and this apparently is it.

Start by opening the scene Light_Wall.scn from this issue’s CD. There are several ways you can animate the lights switching on and off, but perhaps the most intuitive is by using an image sequence. Select the Wall object and from the Get > Property menu choose Texture Map. In the Clip section of the PPG, click the New button and choose New From File. In the browser, select the LightSwitch.pic sequence from the Pictures folder and click OK. Select the Texture_Projection in the UV Property section and then close the PPG.

From the Get > Primitive > Point Cloud menu choose Empty and press [Alt] + [9] to open an ICE Tree. From the Create menu choose ICE Tree. Press [8] to open an Explorer and drag and drop the Wall object into the ICE Tree. Get a Get Data node and connect the Out Name output of the Wall node to its In Name input. Open its Property Page (PPG) and enter PolygonPosition as the reference. This will get the centre of each of the polygons of the Wall object, but as the object itself is rotated you’ll also need to add the global rotation. This is done by multiplying the polygon positions with the object’s global matrix. Get a Get Data node, enter kine.global as the reference and connect the Out Name output of the Get Wall node to the In Name input. Then get a Multiply Vector by Matrix node and connect the Value output of the Get PolygonPosition node to the Vector input and the Value output of the kine.global node to the Matrix input. Get an Add Point node and connect the Result output of the Multiply Vector by Matrix node to the Positions1 input. Connect the Add output of the Add Point node to Port1 of the ICE tree. Get a Set Data node, enter Self.Size as the reference and then enter 1 as the Size. Connect the Execute output of the Set Data node to the Port2 input of the ICE Tree.

Get a Get Closest Location node and connect the Value output of the Get Wall node to its Geometry input. The location you want is the location closest to each point, so get a Get Data node, enter Self.PointPosition as the reference and connect it to the Position input of the Get Closest Location node. Get a Get Data node, enter Texture_Map as the reference and connect the Location output of the Get Closest Location node to its Source input. Get a Color to Brightness node and connect the Color output of the Get Texture_Map node to its Color input.

Get an Instance Shape node and open its PPG. Click the Explore button and select the Light_Bulbs group. Change the Hierarchy Mode to Object and Children. The Index value controls which object/hierarchy is used. If the lightness value of the texture map is less than 0.5, the light should be off and the light bulb hierarchy with index 0 should be used. If the lightness is higher than 0.5, the light should be switched on and the hierarchy with the added point light (index 1) should be used instead. Get a Round node and connect the Brightness output of the Color to Brightness node to its Value input. Then connect the Integer output of the Round node to the Index input of the Instance Shape node. Connect the Shape output of the Instance Shape node to the New (Value) input of the Set Data node. Open the PPG and enter Self.Shape as Reference1.
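The Round node is what turns the brightness into a usable index. Sketched in Python (assuming halves round up, which matches the 0.5 threshold described above):

```python
def bulb_index(brightness):
    # Round to the nearest integer, halves rounding up:
    # < 0.5 -> 0 (light off), >= 0.5 -> 1 (light on).
    return int(brightness + 0.5)
```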

While the points are in the right locations, they are facing the wrong direction. To fix this, you can get the orientation of each of the polygons and then use that data to set the orientation of the points. Get a Get Data node and enter PolygonRefFrame as the reference. Get a Matrix to SRT node and connect the Value output of the Get PolygonRefFrame node to the Matrix input. Connect the Rotation output of the Matrix to SRT node to the New (Value) input of the Set Data node. Open the PPG and enter Self.Orientation as Reference2.

Using actual light sources will cost you when it’s time to render, and in most scenarios you won’t be able to tell the difference from using a really bright material on the object instead.
The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/Wall_of_Lights.zip

Quick tip
To add color to the lights you can add a second texture map using a different image sequence. Use the same approach as for the first map, but connect the Value output of the Texture_Map directly to a Self.Color input on the Set Data node. Then add a Color_Attribute node (with the Attribute set to Color) to the light’s Render Tree and connect it to the color input of the soft_light shader.  





Thursday, August 16, 2012

Rigging an accordion lamp

The distinctive design of the accordion lamp may look simple to rig, but don’t be fooled. As the lamp expands or contracts, the joints at each end of the arms move in a circular motion, rendering the standard constraints useless.

The accordion lamp consists of a series of individual arms which are mounted in pairs, creating an X shape. Rotating any of the arms will cause all arms to rotate, which either expands or contracts the lamp. Start by opening the scene Accordion_Lamp.scn. The scene consists of a number of null objects and the arms, which are parented under the nulls representing their respective centre joints.
Select the Center1 null and press [Ctrl] + [K] to open its Local Transform PPG. Right-click on the animation icon (the green divot) for the X Axis Position and choose Set Expression… The arms are distributed linearly between the base (the part attached to the wall mount) and the lamp, and since there are 4 pairs we know that the first joint should be located at 1/8 of the distance to the End null. In the editing pane of the Expression Editor, enter 1/8*End.kine.local.posx and click the Apply button. Open the Local Transform PPG for the Center2 null, right-click on the animation icon for the X Axis and enter 3/8*End.kine.local.posx as the expression. Repeat the procedure for the Center3 (5/8*End.kine.local.posx) and Center4 (7/8*End.kine.local.posx) nulls.
To determine the Y position of the top joint of the arm (the arm’s rotation) you need another null object, but foremost you need the Pythagorean theorem. The theorem states that if you know the length of two sides of a right-angled triangle you can calculate the length of the third side (a² + b² = c²). The length of side a is the distance between the wall mount and the Center1 null. The length of side b is the distance between the centre joint and the top joint of the arm, in this case 3.5 units. Select the Top_Joint1 null and open its Local Transform PPG. Right-click on the animation icon for the Y Position and choose Set Expression… Enter sqrt( pow( 3.5, 2 ) - pow( Center1.kine.global.posx, 2 ) ) in the editing pane and click the Apply button to apply the expression.
For the Top_Joint2 you must not only calculate the Y position but also the X position, as this will change as the lamp expands or contracts. Open the Top_Joint2’s Local Transform PPG and apply an expression to the X Position. The joint will always be located at the middle of the Center1 and Center2 nulls, which you can calculate by adding their X positions and dividing by 2. Enter ( Center1.kine.local.posx + Center2.kine.local.posx ) / 2 and click the Apply button. Close the Expression Editor and apply an expression to the Y Position. The length of side b is still 3.5 in this triangle, but side a equals half the distance between Center1 and Center2. Enter sqrt( pow( 3.5, 2 ) - pow( ( ctr_dist( Center1.kine.global.pos, Center2.kine.global.pos ) / 2 ), 2 ) ) and click the Apply button.
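To convince yourself the expressions do what they should, here’s the same math in plain Python, with a hypothetical End null position of 8 units:

```python
from math import sqrt

ARM_HALF = 3.5   # distance from the centre joint to the top joint (side b)
end_x = 8.0      # hypothetical X position of the End null

# The centre nulls sit at fixed fractions of the End null's X position.
centers = [f * end_x for f in (1/8, 3/8, 5/8, 7/8)]

# Top_Joint1: side a is Center1's X position, so Y is the remaining side.
top1_y = sqrt(ARM_HALF**2 - centers[0]**2)

# Top_Joint2: X is midway between Center1 and Center2; for Y, side a is
# half the distance between them.
top2_x = (centers[0] + centers[1]) / 2
top2_y = sqrt(ARM_HALF**2 - ((centers[1] - centers[0]) / 2)**2)
```

Because the arms are rigid and evenly spaced, side a is the same (1 unit here) for every triangle, so all the top joints end up at the same height, which is exactly what keeps the lamp straight as it expands.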
Select Arm01 and from the Main Command Panel > Constrain menu choose Direction and pick the Top_Joint1 null. Select Arm02 and apply a Direction constraint, but this time pick the Top_Joint2 null. Repeat the last two steps for Top_Joint3 and 4 and their respective arms.
The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/AccordionLampRig.zip


Looking at the accordion lamp you’ll see that it’s in fact made up of multiple triangles. The Pythagorean theorem states that if you know the length of two sides of a right-angled triangle you can calculate the length of the third side.

Quick tip
Once you’ve calculated the rotation for the first arm, you’ve essentially calculated the rotation for all arms. Rather than using the theorem and constraints, you can simply add an expression to the subsequent arms’ Z rotation: Arm01.kine.local.rotz for the odd arms and -Arm01.kine.local.rotz for the even.



Friday, March 23, 2012

Using render channels in Softimage

The typical use of render channels is to render the scene’s components, such as ambience/diffuse, reflection or motion vectors, into individual images. As most of these components are calculated individually by mental ray anyway, they’re not going to affect the time needed to render the image. In addition, channels can be used to render partial or multiple render trees, adding ambient occlusion, outputting mattes or any other type of information within a single pass. In this case, however, you won’t get them for free.

The project files used in this tutorial can be found at:  http://dl.dropbox.com/u/3834689/CaffeineAbuse/Render_Channels.zip

01 Adding the channel
Open the scene Render_Channels.scn from this issue’s CD. Select the Jigsaw_Piece_01 object and press [7] to open a Render Tree. Get a Store Color in Channel node and open its PPG. The Store Color in Channel node can be inserted anywhere in your render tree to store a specific part of the tree to a custom render channel, but it can also be used to store information that is not part of the actual material. In the Render Channel section of the PPG, expand the drop-down menu and choose the AmbOcc channel. Get an Ambient Occlusion node and connect it to the Input of the Store Color in Channel node.
02 Store the information
Get another Store Color in Channel node. Open the PPG, click the Add button, enter RGB_Matte as the Render Channel Name and click OK. Set the Input color to pure red. Get a Color4_Passthrough node and connect the Blinn node to its Input. Connect the Result output of the Color4_Passthrough node to the Surface input of the Material node. The passthrough node acts as a hub and allows you to store as many channels as you like. Open its PPG and click the Add button twice to add 2 channels. Close the PPG and connect each of the Store Color in Channel nodes to the Channels > Item inputs.
03 Render the Channels
Repeat the procedure for the other jigsaw pieces, but set the color of the RGB_Matte channel to pure blue for the second piece, pure green for the third and pure black for the fourth. Close the Render Tree. From the Render > Render menu, choose Render Manager… In the Render Channels Output section, click the Add button. Select the AmbOcc channel in the Render Channel drop-down menu and click OK. Click the Add button again, select the RGB_Matte channel and click OK. Your pass is now ready for rendering, so click the Render button and choose Render Current Frame.

Quick tip
It's important to note that if you need to re-render one of the channels, you will need to re-render the entire pass, which may take considerably longer than if you were using separate passes. So contemplate which is the most beneficial in any given scenario.

