Your online Softimage Educational Resource

The blog has been online for more than 4 years and there’s still not a single post even remotely related to the delicious brew called coffee… Perhaps there will be one someday, but in the meantime you can read the articles about Softimage. Most of the material is tutorials and Q&As I’ve written for 3D World Magazine between 2003 and today. If you have any questions, please don’t hesitate to send me an email.


Thanks to Letterbox Animation Studios for hosting all the scene files.

Make sure you visit Redi-Vivus.com for hundreds of hours of free XSI video tutorials.

Thursday, December 6, 2012

Using an animated map to define particle goals in Softimage

There’s a slight difference depending on which map you’re using. A texture map is essentially an image file connected to an object, and as such you need to tell ICE what to do with this information before you can make use of it. First you need to convert the RGB color of the image to a scalar value and store this information as a custom attribute. Weightmaps, on the other hand, are already stored as scalar values, so you can skip the first step and use their Weights attribute directly in step 2. Start by opening the scene Goal_using_TextureMap.scn.
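That first step, converting RGB to a scalar, amounts to collapsing each color sample into a single weight per point. A rough Python sketch of the idea (the exact weighting used by ICE’s Color to Brightness node may differ; the Rec. 601 luma weights below are an assumption for illustration):

```python
# Rough sketch of the RGB-to-scalar conversion described above. The exact
# weighting of ICE's Color to Brightness node may differ; the Rec. 601
# luma weights used here are an assumption.
def brightness(r, g, b):
    """Collapse an RGB color in [0, 1] into a single scalar weight."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# A black texel contributes no goal weight at all:
print(brightness(0.0, 0.0, 0.0))  # 0.0
```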

The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/GoalLocation_using_AnimatedMaps.zip
Create a custom goal attribute
Select the Goal_Object and press [Alt] + [9] to open an ICE Tree view. From the Create menu choose ICE Tree. Get a Get Data node, open its PPG and enter Self.NodeLocation as the reference. Get another Get Data node and enter Texture_Map as the reference. Connect the Value output of the Get Self.NodeLocation node to the Source input of the Get Texture_Map node. Get a Color to Brightness node and connect the Value output of the Get Texture_Map node to its Color input. Get a Set Data node, enter Self.GoalTextureMap as the reference and connect the Brightness output of the Color to Brightness node to its input. Connect the Set Data node to Port1 of the ICE Tree.
Set the GoalLocation
Select the pointcloud and update the ICE Tree view. Get a Get Data node and enter Goal_Object as the reference. Get a Get Geometry Sample node, connect the Value output of the Get Goal_Object node to its Geometry input and then open its PPG. This node enables you to filter which part of the surface to use as the goal by extracting the weights from either the weightmap or the custom attribute created for the texture map. Click the Explore button in the Filter section of the PPG, expand the Input > Polygon Mesh tree and select the GoalTextureMap attribute.
Move towards goal
Get a Set Data node and enter Self.GoalLocation as the reference. Connect the Samples output of the Get Geometry Sample node to the Self.GoalLocation input of the Set Data node. Connect the Set Data node to the Execute on Emit1 input of the Emit from Geometry node. This will define the goal for each particle as it’s born. Once defined, you’ll use another node to actually move them there. Get a Move Towards Goal node and connect it to the Execute1 input of the Simulation Root node. Play back the animation and you’re done.




Wednesday, November 7, 2012

Wall of lights - How to set up a wall of animated light bulbs



Perhaps the most intuitive way to create different patterns for the lights switching on and off is to use a texture map, which in turn controls the lights based on the lightness of the images.
Nine times out of ten you’re overegging the pudding by adding actual light sources to the setup, as you could most likely get away with using a really bright material on the object. But there is that one time you do need them, and this apparently is it.

Start by opening the scene Light_Wall.scn from this issue’s CD. There are several ways you can animate the lights switching on and off, but perhaps the most intuitive is to use an image sequence. Select the Wall object and from the Get > Property > Texture Map menu choose Texture Map. In the Clip section of the PPG, click the New button and choose New From File. In the browser, select the LightSwitch.pic sequence from the Pictures folder and click OK. Select the Texture_Projection in the UV Property section and then close the PPG.

From the Get > Primitive > Point Cloud menu choose Empty and press [Alt] + [9] to open an ICE Tree view. From the Create menu choose ICE Tree. Press [8] to open an Explorer and drag and drop the Wall object into the ICE Tree. Get a Get Data node and connect the Out Name output of the Wall node to its In Name input. Open its Property Page (PPG) and enter PolygonPosition as the reference. This will get the centre of each of the polygons of the Wall object, but as the object itself is rotated you’ll also need to add the global rotation. This is done by multiplying the polygons’ positions with the object’s global matrix. Get a Get Data node, enter kine.global as the reference and connect the Out Name output of the Get Wall node to its In Name input. Then get a Multiply Vector by Matrix node and connect the Value output of the Get PolygonPosition node to its Vector input and the Value output of the kine.global node to its Matrix input. Get an Add Point node and connect the Result output of the Multiply Vector by Matrix node to its Positions1 input. Connect the Add output of the Add Point node to Port1 of the ICE tree. Get a Set Data node, enter Self.Size as the reference and then enter 1 as the Size. Connect the Execute output of the Set Data node to the Port2 input of the ICE Tree.
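The matrix multiplication in this step is just a point transform from local to global space. Here is an illustrative sketch in Python (not the Softimage API), assuming a row-vector convention with the translation stored in the bottom row of the 4x4 matrix:

```python
# Illustrative sketch (not the Softimage API) of what the Multiply Vector
# by Matrix node does: move a local polygon centre into global space.
# Assumes row vectors, with the translation in the bottom row.
def transform_point(p, m):
    """Apply a 4x4 transform matrix m to the 3D point p (w assumed 1)."""
    x, y, z = p
    return tuple(x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i]
                 for i in range(3))

# A 90-degree rotation about Z combined with a translation of (5, 0, 0):
m = [[0, 1, 0, 0],
     [-1, 0, 0, 0],
     [0, 0, 1, 0],
     [5, 0, 0, 1]]
print(transform_point((1.0, 0.0, 0.0), m))  # (5.0, 1.0, 0.0)
```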

Get a Get Closest Location node and connect the Value output of the Get Wall node to its Geometry input. The location you want is the one closest to each point, so get a Get Data node, enter Self.PointPosition as the reference and connect it to the Position input of the Get Closest Location node. Get a Get Data node, enter Texture_Map as the reference and connect the Location output of the Get Closest Location node to its Source input. Get a Color to Brightness node and connect the Value output of the Get Texture_Map node to its Color input.

Get an Instance Shape node and open its PPG. Click the Explore button and select the Light_Bulbs group. Change the Hierarchy Mode to Object and Children. The Index value controls which object/hierarchy is used. If the lightness value of the texture map is less than 0.5, the light should be off and the light bulb hierarchy with index 0 should be used. If the lightness is higher than 0.5, the light should be switched on and the hierarchy with the added point light (index 1) should be used instead. Get a Round node and connect the Brightness output of the Color to Brightness node to its Value input. Then connect the Integer output of the Round node to the Index input of the Instance Shape node. Connect the Shape output of the Instance Shape node to the New(Value) input of the Set Data node. Open the PPG and enter Self.Shape as Reference1.
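The Round node effectively turns the brightness into an on/off switch between the two hierarchies. A minimal Python sketch of that mapping (shape_index is a hypothetical helper, not an ICE node):

```python
# Minimal sketch of the brightness-to-index mapping described above:
# dark texels select the unlit hierarchy (index 0), bright texels the
# lit one (index 1). shape_index is a hypothetical helper.
def shape_index(lightness):
    return int(round(lightness))

print(shape_index(0.2))  # 0 -> light bulb off
print(shape_index(0.9))  # 1 -> light bulb on, point light included
```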

While the points are in the right locations, they are facing the wrong direction. To fix this, you can get the orientation of each of the polygons and then use that data to set the orientation of the points. Get a Get Data node and enter PolygonRefFrame as the reference. Get a Matrix to SRT node and connect the Value output of the Get PolygonRefFrame node to its Matrix input. Connect the Rotation output of the Matrix to SRT node to the New(Value) input of the Set Data node. Open the PPG and enter Self.Orientation as Reference2.

Using actual light sources will cost you when it’s time to render and in most scenarios you won’t be able to tell the difference from using a really bright material on the object instead.
The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/Wall_of_Lights.zip

Quick tip
To add color to the lights you can add a second texture map using a different image sequence. Use the same approach as for the first map, but connect the Value output of the Texture_Map directly to a Self.Color input on the Set Data node. Then add a Color_Attribute node (with the Attribute set to Color) to the light’s Render Tree and connect it to the color input of the soft_light shader.  





Thursday, August 16, 2012

Rigging an accordion lamp

The distinctive design of the accordion lamp may look simple to rig, but don’t be fooled. As the lamp expands or contracts, the joints at each end of the arms move in a circular motion, rendering the standard constraints useless.

The accordion lamp consists of a series of individual arms which are mounted in pairs, creating an X shape. Rotating any of the arms will cause all arms to rotate, which either expands or contracts the lamp. Start by opening the scene Accordion_Lamp.scn. The scene consists of a number of null objects and the arms, which are parented under the nulls representing their respective centre joints.
Select the Center1 null and press [Ctrl] + [K] to open its Local Transform PPG. Right-click on the animation icon (the green divot) for the X Axis Position and choose Set Expression... The arms are distributed linearly between the base (the part attached to the wall mount) and the lamp, and since there are 4 pairs we know that the first joint should be located at 1/8 of the distance to the End null. In the editing pane of the Expression Editor, enter 1/8*End.kine.local.posx and click the Apply button. Open the Local Transform PPG for Center2, right-click on the animation icon for the X Axis and enter 3/8*End.kine.local.posx as the expression. Repeat the procedure for the Center3 (5/8*End.kine.local.posx) and Center4 (7/8*End.kine.local.posx) nulls.
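The four fractions follow a simple pattern: the centre joints sit at the odd eighths of the distance to the End null. In Python form (centre_fraction is a hypothetical helper illustrating the arithmetic, not part of the scene):

```python
# With 4 arm pairs, centre joint n sits at the n-th odd eighth of the
# distance to the End null. centre_fraction is a hypothetical helper.
def centre_fraction(joint, pairs=4):
    """Fraction of End.kine.local.posx for centre joint `joint` (1-based)."""
    return (2 * joint - 1) / (2.0 * pairs)

print([centre_fraction(n) for n in (1, 2, 3, 4)])  # [0.125, 0.375, 0.625, 0.875]
```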
To determine the Y position of the top joint of the arm (the arm’s rotation) you need another null object, but foremost you need the Pythagorean theorem. The theorem states that if you know the lengths of two sides of a right-angled triangle you can calculate the length of the third side (a² + b² = c²). The length of side a is the distance between the wall mount and the Center1 null. The hypotenuse c is the distance between the centre joint and the top joint of the arm, in this case 3.5 units. Select the Top_Joint1 null and open its Local Transform PPG. Right-click on the animation icon for the Y Position and choose Set Expression… Enter sqrt( pow( 3.5, 2 ) - pow( Center1.kine.global.posx, 2 ) ) in the editing pane and click the Apply button to apply the expression.
For the Top_Joint2 you must not only calculate the Y position but also the X position, as this will change as the lamp expands or contracts. Open the Top_Joint2’s Local Transform PPG and apply an expression to the X position. The joint will always be located at the middle of the Center1 and Center2 nulls, which you can calculate by adding their X positions and dividing by 2. Enter ( Center1.kine.local.posx + Center2.kine.local.posx ) / 2 and click the Apply button. Close the Expression Editor and apply an expression to the Y Position. The hypotenuse is still 3.5 in this triangle, but side a equals half the distance between Center1 and Center2. Enter sqrt( pow( 3.5, 2 ) - pow( ( ctr_dist( Center1.kine.global.pos, Center2.kine.global.pos ) / 2 ), 2 ) ) and click the Apply button.
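The Y expressions can be sanity-checked in plain Python. Treating the 3.5-unit arm segment as the hypotenuse c and the joint’s X distance as side a, the height is b = sqrt(c² - a²). Here top_joint_y is a hypothetical helper, not part of the scene:

```python
import math

# Python check of the Y-position expressions: b = sqrt(c^2 - a^2), where
# c is the 3.5-unit arm segment and a is the distance along X.
# top_joint_y is a hypothetical helper.
def top_joint_y(center_x, arm_length=3.5):
    return math.sqrt(arm_length ** 2 - center_x ** 2)

# Fully contracted (centre joint at x = 0) the arm stands straight up:
print(top_joint_y(0.0))  # 3.5
```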
Select Arm01 and from the Main Command Panel > Constrain menu choose Direction and pick the Top_Joint1 null. Select Arm02 and apply a Direction constraint, but this time pick the Top_Joint2 null. Repeat the last two steps for Top_Joint3 and 4 and their respective arms.
The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/AccordionLampRig.zip


Looking at the accordion lamp you’ll see that it’s in fact made up of multiple triangles. The Pythagorean theorem states that if you know the lengths of two sides of a right-angled triangle you can calculate the length of the third side.

Quick tip
Once you’ve calculated the rotation for the first arm, you’ve essentially calculated the rotation for all arms. Rather than using the theorem and constraints, you can simply add an equals expression to the subsequent arms’ Z rotation: Arm01.kine.local.rotz for the odd arms and -Arm01.kine.local.rotz for the even ones.



Friday, March 23, 2012

Using render channels in Softimage

The typical use of render channels is to render the scene’s components, such as ambience/diffuse, reflection or motion vectors, into individual images. As most of these components are calculated individually by Mental Ray anyway, they’re not going to affect the time needed to render the image. In addition, channels can be used to render partial or multiple render trees, adding ambient occlusion, outputting mattes or any other type of information within a single pass. In this case, however, you won’t get them for free.

The project files used in this tutorial can be found at:  http://dl.dropbox.com/u/3834689/CaffeineAbuse/Render_Channels.zip

01 Adding the channel
Open the scene Render_Channels.scn from this issue’s CD. Select the Jigsaw_Piece_01 object and press [7] to open a Render Tree. Get a Store Color in Channel node and open its PPG. The Store Color in Channel node can be inserted anywhere in your render tree to store a specific part of the tree in a custom render channel. But it can also be used to store information that is not part of the actual material. In the Render Channel section of the PPG, expand the drop-down menu and choose the AmbOcc channel. Get an Ambient Occlusion node and connect it to the Input of the Store Color in Channel node.
02 Store the information
Get another Store Color in Channel node. Open the PPG, click the Add button, enter RGB_Matte as the Render Channel Name and click OK. Set the Input color to pure red. Get a Color4_Passthrough node and connect the Blinn node to its Input. Connect the Result output of the Color4_Passthrough node to the Surface input of the Material node. The passthrough node acts as a hub and allows you to store as many channels as you like. Open its PPG and click the Add button twice to add 2 channels. Close the PPG and connect each of the Store Color in Channel nodes to the Channels > Item inputs.
03 Render the Channels
Repeat the procedure for the other jigsaw pieces, but set the Color of the RGB_Matte channel to pure blue for the second piece, pure green for the third and pure black for the fourth. Close the Render Tree. From the Render > Render menu, choose Render Manager... In the Render Channels Output section, click the Add button. Select the AmbOcc channel in the Render Channel drop-down menu and click OK. Click the Add button again, select the RGB_Matte channel and click OK. Your pass is now ready for rendering, so click the Render button and choose Render Current Frame.

Quick tip
It's important to note that if you need to re-render one of the channels, you will need to re-render the entire pass, which may take considerably longer than if you were using separate passes. So contemplate which is the most beneficial in any given scenario.



Friday, February 24, 2012

Shape blending in ICE using an image sequence

Since you'll be using ICE to control the blend of the shape keys, make sure there aren't any shape tracks added to the Animation Mixer and delete the Cluster Shape Combiner operator in the Shape Modeling stack of the object.
Open the scene Blend_Shape_Using_TextureMap.scn. Select the Can object and from the Get > Property > Texture Map menu choose Texture Map. In the Clip section of the PPG, click the New button and choose New From File from the popup menu. Select the Gradient_Mask.jpg sequence and click OK to load the images. Select the Texture_Projection in the UV Property section.

With the Can still selected, press [Alt] + [9] to open an ICE Tree view. From the Create menu choose ICE Tree. Get a Get Data node, open its PPG and enter Self.PointPosition as the reference. Get another Get Data node, open its PPG and click the Explore button. Expand the Can > Polygon Mesh > Clusters > Can_ShapeKey > Can_ShapeKey branch of the tree and select the positions element. What you want to do is interpolate, or blend, between the object's current point positions and the positions of the shape key. Get a Linear Interpolate node and connect the Value output of the Get Self.PointPosition node to its First input.

The shape key does not store the actual point positions of the shape, but the points' change in position relative to the initial mesh. To get the actual positions you need to add the shape to the current point position. Get an Add node and connect the Get Self.PointPosition node to its Value1 input and the Get Can.cls.Can_ShapeKey.Can_ShapeKey.positions node to its Value2 input. Connect the Result of the Add node to the Second input of the Linear Interpolate node. Get a Set Data node and enter Self.PointPosition as the reference in the PPG. Connect the Result output of the Linear Interpolate node to the Self.PointPosition input of the Set Data node. Connect the Set Data node to the Port1 input of the ICETree node.

The blend value of the Linear Interpolate node controls how much influence each of the 2 input values will have. A value of 0 means the second value of the Linear Interpolate node has no effect, which results in the first value having full effect. With the blend value set to 1 the opposite is true: the second input has full effect and the first input none. Instead of setting the blend value for the entire mesh at once, you can set it per point based on the RGB value of an image sequence. Get a Get Point Texture Map Color node and a Color to Brightness node. The Color to Brightness node converts the RGB value of the image to a scalar value which can then be used to control the blending of the Linear Interpolate node. Connect the Value output of the Get Point Texture Map Color node to the Color input of the Color to Brightness node. Connect the Brightness output of the Color to Brightness node to the Blend input of the Linear Interpolate node.
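What the Linear Interpolate node computes is an ordinary lerp. A quick Python sketch of the blend behaviour described above:

```python
# The Linear Interpolate node computes an ordinary lerp: blend = 0 returns
# the first input, blend = 1 the second, values in between mix the two.
def lerp(first, second, blend):
    return first + (second - first) * blend

base, shape = 2.0, 6.0
print(lerp(base, shape, 0.0))   # 2.0 -> shape key has no effect
print(lerp(base, shape, 1.0))   # 6.0 -> shape key fully applied
print(lerp(base, shape, 0.25))  # 3.0 -> a quarter of the way there
```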

The project files used in this tutorial can be found at:  http://dl.dropbox.com/u/3834689/CaffeineAbuse/ShapeBlendingWithICE.zip
While you typically want to set the blend value between zero and one, you can use values above 1 to amplify the effect or negative values to invert it.

Quick tip
If you want to blend between more than 2 shapes you can use the Blend node instead. Note that the Blend node will output the weighted sum of all the inputs, so if you increase the weight of one input you'll need to decrease the weights of the other inputs correspondingly to avoid double transformation.
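A small Python sketch of why unbalanced weights cause double transformation with a weighted sum (this blend function is a stand-in for illustration, not the ICE node’s actual code):

```python
# Sketch of the weighted-sum behaviour described above. `blend` is a
# stand-in for the ICE Blend node, not its actual implementation.
def blend(values, weights):
    return sum(v * w for v, w in zip(values, weights))

shapes = [1.0, 3.0]
print(blend(shapes, [0.5, 0.5]))  # 2.0 -> balanced weights stay in range
print(blend(shapes, [1.0, 1.0]))  # 4.0 -> overshoots both shapes: double transformation
```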



Friday, January 27, 2012

Turbulize the position of a locator with ICE kinematics

Controlling the scaling, rotation or translation (SRT) of an object with ICE isn't any different from doing the same when dealing with points or "particles". The thing you need to keep in mind, though, is how you set the data. While you can read any of the SRT properties of an object, you can't set the individual X, Y or Z transforms directly, nor their local transform values. In order to set any of the transform values you need to set the whole global transform matrix. Note that this is merely a structural thing and does not affect what you can and cannot do with an object's SRT.
01 Creating the scene
Get a Sphere (from the Get > Primitive > Polygon Mesh menu) and a Null object (from the Get > Primitive menu). The null object serves as a placeholder for your ICE Tree. Select the Null object and press [Alt] + [9] to open an ICE Tree view. Get a Get Data node, open its PPG and enter sphere.kine.global as the reference. In order to change the individual components of the SRT matrix you first need to deconstruct the matrix, make the modifications to the respective components, and then put the matrix back together. Get a Matrix to SRT node and connect the Value output of the Get sphere.kine.global node to its Matrix input. Get an SRT to Matrix node and connect the Scaling and Translation outputs of the Matrix to SRT node to its corresponding inputs.

02 Deconstructing the Matrix
Get a 3D Vector to Scalar node and connect the Translation output of the Matrix to SRT node to its Vector input. Get a 3D Vector node, a Turbulence node and a Multiply node. Connect the 3D Vector node to the Position input of the Turbulence node and the Turbulence node to the Value1 input of the Multiply node. Get an Add node and connect the Y output of the 3D Vector to Scalar node to its Value1 input. Connect the Multiply node to the Value2 input of the Add node. Open the Turbulence PPG and lower the Time Frequency to 0.5 or so. This will control the speed of the turbulence. The Multiply node controls the strength of the turbulence. Open its PPG and set Value2 to 20.
 
03 Put the Matrix back together
By using the Add node, the turbulence will be added on top of the Sphere's Y translation rather than completely replacing it. Get a Scalar to 3D Vector node and connect the X and Z outputs of the 3D Vector to Scalar node to its corresponding inputs. Connect the Result output of the Add node to the Y input. Connect the Vector output of the Scalar to 3D Vector node to the Translation input of the SRT to Matrix node. Get a Set Data node, open its PPG and enter sphere.kine.global as the reference. Connect the Matrix output of the SRT to Matrix node to the sphere.kine.global input of the Set Data node. Connect the Execute output of the Set Data node to Port1 of the ICETree node.
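The whole decompose/modify/rebuild round-trip of steps 01-03 can be summarized outside ICE like this. Note this is a hypothetical sketch: math.sin stands in for the Turbulence node, and the function name and defaults (strength 20, time frequency 0.5) merely mirror the values used above:

```python
import math

# Conceptual round-trip of steps 01-03 outside ICE: split the translation
# into scalars, perturb only Y, and rebuild the vector. math.sin stands in
# for the Turbulence node; the code itself is a hypothetical sketch.
def turbulize_y(translation, time, strength=20.0, time_frequency=0.5):
    x, y, z = translation                    # 3D Vector to Scalar
    noise = math.sin(time * time_frequency)  # stand-in for Turbulence
    return (x, y + noise * strength, z)      # Add, then Scalar to 3D Vector

print(turbulize_y((0.0, 4.0, 0.0), time=0.0))  # (0.0, 4.0, 0.0)
```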

The project files used in this tutorial can be found at: http://dl.dropbox.com/u/3834689/CaffeineAbuse/ICE_Kinematic_Turbulence.zip

