This page exists to suggest, detail and track the features that JPatch needs in order to reach the level of functionality expected by a typical user who wants to create animation.
Most of the actual discussion of these features takes place on the JPatch forum.
Features that are not required for “Core” but should still be part of JPatch should be flagged with an [Advanced] tag.
From the beginning, JPatch was envisioned as a tool for creating character-based animation. Although the primary focus was on spline-based models, features such as bones and morphs were always part of the design.
Because much of what was originally in JPatch was inspired by Hash’s Animation:Master, JPatch was initially seen by some as an A:M clone. However, this was never the goal for JPatch, and because of issues with supporting spline patches, JPatch eventually replaced spline-based patches with subdivision surfaces.
The initial scope of JPatch was much smaller than it is today. With the addition of a built-in renderer and animation module, JPatch has grown from a tool for creating POV-Ray models into a stand-alone tool for creating animation.
JPatch distinguishes itself from other programs by focusing on:
This mode uses orthographic projection to render objects.
This mode uses perspective projection to render objects.
A half-edge representation (a variant of the winged-edge structure) is an efficient way to represent models.
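To illustrate the idea, here is a minimal half-edge sketch in Python (JPatch itself is Java; the class and field names here are illustrative, not JPatch’s actual data structures). Each edge is split into two directed “half-edges” that point at each other, which makes walking around a face or fanning around a vertex a matter of following pointers:

```python
# Minimal half-edge sketch (illustrative names, not JPatch's actual API).
# Each directed half-edge stores its destination vertex, its twin
# (the opposite half-edge), the next half-edge around the same face,
# and the face it borders.

class HalfEdge:
    def __init__(self, dest):
        self.dest = dest      # vertex this half-edge points to
        self.twin = None      # opposite half-edge
        self.next = None      # next half-edge around the face
        self.face = None      # face on the left of this half-edge

def make_edge(v_from, v_to):
    """Create a pair of twinned half-edges between two vertices."""
    he, opp = HalfEdge(v_to), HalfEdge(v_from)
    he.twin, opp.twin = opp, he
    return he, opp

def face_vertices(start):
    """Walk 'next' pointers once around a face and collect its vertices."""
    verts, he = [], start
    while True:
        verts.append(he.dest)
        he = he.next
        if he is start:
            break
    return verts
```

Because adjacency queries become simple pointer-chasing, operations like subdivision and edge tagging need no searching.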
Subdivision surfaces use an interpolating method that converts models into smooth surfaces. However, there needs to be a way to tag edges and corners as “sharp” so that they are not automatically smoothed.
In addition to flagging a corner as “sharp”, it’s useful to be able to specify how sharp a corner should be.
As with non-integer corners, being able to specify the sharpness of a crease (i.e. an “edge”) is also useful.
Subdividing a mesh consists of replacing each face with several smaller faces to approximate a curved surface. When the mesh becomes “sufficiently smooth”, subdivision stops.
The issue here is to perform this action efficiently, to balance between memory usage and recomputing values.
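The stop-when-smooth idea can be sketched on a curve, where the same trade-off appears in miniature: recursively split, but only where the shape is not yet flat within a tolerance. This is a hedged Python sketch (JPatch is Java; the tolerance value and function names are illustrative, and a surface implementation would apply an analogous flatness test per face):

```python
# Adaptive subdivision sketch, shown on a quadratic Bezier curve: split
# recursively via de Casteljau, stopping once a segment is "sufficiently
# smooth" (its control point lies close to the chord).

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def flat_enough(p0, p1, p2, tol):
    # Distance from the control point p1 to the chord p0-p2.
    ax, ay = p2[0] - p0[0], p2[1] - p0[1]
    bx, by = p1[0] - p0[0], p1[1] - p0[1]
    cross = abs(ax * by - ay * bx)
    chord = (ax * ax + ay * ay) ** 0.5 or 1.0
    return cross / chord <= tol

def subdivide(p0, p1, p2, tol=0.01):
    """Return points approximating the curve, refining only where needed."""
    if flat_enough(p0, p1, p2, tol):
        return [p0, p2]
    # de Casteljau split into two half-curves
    l1, r1 = midpoint(p0, p1), midpoint(p1, p2)
    m = midpoint(l1, r1)
    left = subdivide(p0, l1, m, tol)
    right = subdivide(m, r1, p2, tol)
    return left + right[1:]
```

Refining only where needed is one way to balance memory against recomputation: flat regions stay coarse, curved regions get more faces.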
The built-in renderer needs to be able to render the mesh with at least some basic attributes, such as color and specularity.
JPatch 0.4 uses tools roughly modeled after those in A:M. These should probably be replaced by Maya-style tools. Here is a link to documentation on various manipulators in Blender.
The Rotate tool allows a point or bone to be rotated around a center point.
The Move (Translate) tool allows a point or bone to be moved.
The Scale tool allows the selected points to be scaled (resized):
The Stretch tool allows a bone to be made longer or shorter. There should be a Maintain Volume option which will make the attached mesh fatter/thinner as the bone is squashed/stretched.
Will JPatch implement this, or will surge be implemented via scaling along the z axis?
The Combo tool combines the Rotate, Move and Scale tools into a single tool.
Will JPatch have a similar tool?
This is the Combo tool in Blender:
U/V mapping refers to the process of “painting” a texture onto a mesh. This is done by “projecting” the 3D points of the model (x,y,z) onto a 2D map of the model (u,v), much like a 2D map of the Earth is created by projecting the 3D points of the Earth’s sphere onto a 2D plane. This 2D “map” can be colored using a traditional paint tool, and the textures are then projected from the 2D texture back to the surface of the model.
Here’s a screenshot from Cheetah3D showing a 3D model, and its 2D texture map:
Because projecting a 3D surface onto a 2D map will cause distortion, several different projection options are often available. Additionally, different parts of a model may use different forms of projection.
U/V Maps can be used to control the following attributes of a model (depending on what the renderer supports):
Various sorts of u/v maps (from this tutorial):
The maps applied to the model:
Projects the points of the model’s mesh outward as if the map were wrapped around the object like a cylinder.
Projects the points of the model’s mesh outward as if the map were wrapped around the object like a sphere.
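Both projections boil down to converting a 3D point into angles. Here is a hedged Python sketch of the spherical case (the function name and the [0, 1] range convention are assumptions, not JPatch’s actual mapping code); the cylindrical case differs only in taking v from the point’s height instead of its latitude:

```python
# Hypothetical sketch of spherical u/v projection: each 3D point is
# converted to a longitude and latitude, which become the (u, v)
# texture coordinates in [0, 1].

import math

def spherical_uv(x, y, z):
    """Map a 3D point to (u, v) as if the map were wrapped in a sphere."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)   # longitude
    r = math.sqrt(x * x + y * y + z * z) or 1.0
    v = 0.5 + math.asin(y / r) / math.pi           # latitude
    return u, v
```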
LSCM (Least Squares Conformal Map) is an advanced method of mapping which attempts to automatically create a u/v map with the least amount of distortion by preserving the angles of polygons in the mesh.
This has been implemented in Blender 2.42.
The material editor in JPatch is based heavily on the material attributes in POV-Ray.
It would be nice if JPatch were able to call the specific renderer instead of using the internal renderer (i.e. using POV-Ray or Sunflow to create the preview image).
JPatch will be able to import models created with the spline modeler (prior to version 7.x).
RIB is the file format for RenderMan-compliant renderers.
POV is the format used by the Persistence of Vision raytracer, a popular cross-platform raytracer.
Users must be able to create renders with JPatch out of the box. For users who need features not supplied by JPatch’s embedded renderer, JPatch should be able to communicate with external renderers.
JPatch currently opens a modal render window to display the current frame being rendered, similar to Blender.
These exist so that users will be able to use JPatch as a stand-alone application, instead of having to rely on an additional tool to perform rendering.
JPatch needs a fast renderer for “Previz” (pre-visualization) purposes. Currently, JPatch uses OpenGL, which has native support in Java.
JPatch currently uses Inyo, an open-source Java-based raytracer. It is expected that Inyo will be replaced by Sunflow, another open-source Java-based raytracer. Sunflow has more features than Inyo and enjoys strong community support.
Sunflow also has a fast progressive refinement mode, which would be useful to integrate into JPatch’s viewport.
For users who need features that cannot be found in the built-in renderer, JPatch needs to be able to work with external renderers. JPatch was initially conceived as a modeling tool with no rendering capabilities, relying on either POV-Ray or RenderMan for rendering.
JPatch communicates with the external renderer by creating a data file describing the frame to be rendered. It then creates a thread that calls the external renderer, passing the name of the file to be rendered. When the external renderer has finished rendering the frame, the thread is notified, and JPatch displays the rendered frame to the user.
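The hand-off described above can be sketched as follows, using Python’s subprocess and threading modules for illustration (JPatch itself is Java and would use its own process/thread APIs; the callback shape and file name are hypothetical, not POV-Ray’s actual command line):

```python
# Sketch of handing a frame description file to an external renderer in
# a worker thread, then invoking a callback when the renderer exits.

import subprocess
import threading

def render_frame(renderer_cmd, scene_file, on_done):
    """Launch the renderer on scene_file in a worker thread; call
    on_done(scene_file, exit_code) when it finishes."""
    def worker():
        result = subprocess.run(renderer_cmd + [scene_file])
        on_done(scene_file, result.returncode)
    t = threading.Thread(target=worker)
    t.start()
    return t
```

Running the renderer off the UI thread keeps JPatch responsive while the frame renders.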
In theory, it is possible to have JPatch watch the image file as it is being written, so the user could see the frame as it is being generated instead of waiting for the entire frame to be rendered. This is supported by POV-Ray.
JPatch can fairly easily be adapted to work with other renderers. For example, a fork of JPatch was modified to use Sunflow as an external renderer.
The Persistence of Vision (POV) raytracer is a popular cross-platform raytracer. JPatch currently supports POV.
In addition to being Pixar’s flagship rendering tool, RenderMan is also a specification. A list of RenderMan-compliant renderers can be found here. Free and open-source implementations include Aqsis, jrMan and Pixie.
Rigging is the process of setting up a model so that it can be animated. It typically refers to the process of adding bones to the model, but also includes morphs, poses, and other issues.
A hierarchical bone rig in A:M:
This allows the points on the mesh to be edited and stored to simulate “muscle” actions, such as smiles, blinks, mouth shapes, muscle bulges, and so on.
A morph’s weight ranges from -1 to 1. There can be multiple morph targets for a single morph - for example, one at -1.0, one at 0.24, and one at 1.0.
This was implemented in JPatch 0.4.
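The multi-target scheme described above amounts to piecewise-linear interpolation along the weight axis: the current weight is blended between the two nearest targets. A hedged Python sketch (JPatch is Java; the (weight, offset) pair layout is an assumption, not JPatch’s actual morph format):

```python
# Evaluate a morph with multiple targets placed along its weight axis
# (e.g. targets at -1.0, 0.24 and 1.0). Offsets here are scalars for
# simplicity; a real morph would store per-point displacement vectors.

def evaluate_morph(targets, weight):
    """targets: sorted list of (weight, offset) pairs; returns the
    offset interpolated linearly for the given weight."""
    lo_w, lo_off = targets[0]
    if weight <= lo_w:
        return lo_off
    for hi_w, hi_off in targets[1:]:
        if weight <= hi_w:
            t = (weight - lo_w) / (hi_w - lo_w)
            return lo_off + t * (hi_off - lo_off)
        lo_w, lo_off = hi_w, hi_off
    return lo_off          # weight beyond the last target: clamp
```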
This allows a skeleton to be assigned to a mesh, so that moving the bones of the skeleton manipulates the mesh.
This was implemented in JPatch 0.4.
A rigged character’s bones in A:M:
This is the same as a regular morph, but is bound to a particular bone rotation. For example, posing a character using only bones will often cause the mesh to be distorted unrealistically, because there are no underlying muscles to support the mesh. Using “smartskin” morphs, morphs can be assigned to various bone angles, so that as the bone flexes at a particular angle and axis, the morph is applied to correct the distortion.
This causes mesh points to be automatically assigned to the bones they are closest to. Each bone is assigned a separate color, and mesh points bound to a bone are shown in the same color as that bone. If a bone is flagged as not affecting any mesh points, that bone is ignored.
This method often requires correction of points, but is faster than assigning all the mesh points manually.
This was implemented in JPatch 0.4.
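A minimal sketch of the automatic assignment (Python for illustration; the data layout is hypothetical, and a real implementation would measure distance to each bone’s line segment rather than to its midpoint):

```python
# Bind each mesh point to its nearest deforming bone. Bones flagged as
# not affecting the mesh are skipped, per the rule described above.

def auto_assign(points, bones):
    """bones: list of (name, midpoint, deforms_mesh). Returns a dict
    mapping each point to the name of its nearest deforming bone."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    candidates = [(name, mid) for name, mid, deforms in bones if deforms]
    assignment = {}
    for p in points:
        name, _ = min(candidates, key=lambda b: dist2(p, b[1]))
        assignment[p] = name
    return assignment
```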
A mesh point may be influenced by more than one bone. Weighting defines how much influence a particular bone has on a point.
David: Is this even planned for JPatch?
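Weighting as described above is essentially linear blend skinning: each influencing bone’s motion is scaled by its weight on the point, and the results are summed. A sketch simplified to translations (illustrative Python, not a planned JPatch API):

```python
# Linear blend skinning, reduced to translations: each bone contributes
# its displacement scaled by its weight on the point.

def skin_point(point, influences):
    """influences: list of (weight, bone_translation) pairs whose
    weights should sum to 1. Returns the deformed point."""
    x, y, z = point
    for w, (tx, ty, tz) in influences:
        x += w * tx
        y += w * ty
        z += w * tz
    return (x, y, z)
```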
A Pose consists of both morphs and bone rotations. For example, a mouth shape may require a bone rotation (to open the jaw) and morphs (to pose the lips). Another common example would be a hand pose, such as a clenched fist.
David: From the user’s point of view, Poses and Morphs can probably be combined into a single interface.
An action is a short snippet of animation.
David: Can Actions be replaced by Clips? They seem to be doing the same thing.
David: I’ve already implemented a hair simulation that works well. The main thing it’s missing is collision detection, which is generally pretty expensive. Most programs use proxies (for example, spheres) which are a lot cheaper than performing collision detection with a mesh.
Still, this isn’t a high priority item for JPatch. From the UI side, there would be a need to be able to assign hair to the mesh, assign color to the hair, edit the hair - the actual hair simulation turns out to be the simplest part!
The timeline allows moving to a particular frame in the animation, as well as a method for playing back a range of frames.
The Maya timeline:
Discussion David: While Googling around, I found this suggested redesign for Maya's timeline. It’s got some interesting ideas in it. (Some of them may already have been incorporated.)
Displays the animation keys in a 2D format, and shows how each key is interpolated to the next key. The following types of interpolations are supported:
By default, JPatch applies a Spline interpolation. JPatch also automatically adjusts the tangents to prevent spline undershoot and overshoot.
Spline values may be adjusted manually.
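The overshoot prevention described above can be sketched as cubic Hermite interpolation with Catmull-Rom tangents, where a key that is a local extreme gets a flattened tangent so the curve never passes beyond its keys. A hedged Python sketch (JPatch’s actual tangent rules may differ):

```python
# Hermite key interpolation with automatic tangent flattening at local
# extremes, the standard way to prevent spline over/undershoot.

def tangents(keys):
    """keys: sorted list of (time, value). Returns one tangent per key."""
    m = []
    for i, (_, v) in enumerate(keys):
        prev_v = keys[max(i - 1, 0)][1]
        next_v = keys[min(i + 1, len(keys) - 1)][1]
        if 0 < i < len(keys) - 1 and (v - prev_v) * (next_v - v) <= 0:
            m.append(0.0)            # local extreme: flatten tangent
        else:
            dt = keys[min(i + 1, len(keys) - 1)][0] - keys[max(i - 1, 0)][0]
            m.append((next_v - prev_v) / dt if dt else 0.0)
    return m

def evaluate(keys, time):
    """Cubic Hermite evaluation between the keys bracketing 'time'."""
    m = tangents(keys)
    for i in range(len(keys) - 1):
        t0, v0 = keys[i]
        t1, v1 = keys[i + 1]
        if t0 <= time <= t1:
            h = t1 - t0
            s = (time - t0) / h
            return ((2*s**3 - 3*s**2 + 1) * v0
                    + (s**3 - 2*s**2 + s) * h * m[i]
                    + (-2*s**3 + 3*s**2) * v1
                    + (s**3 - s**2) * h * m[i + 1])
    return keys[-1][1]
```

With the flattened tangents, two consecutive keys at the same value produce a perfectly flat segment instead of a bulge.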
Here’s the Maya Graph Editor:
Here’s a shot of A:M’s motion curve editor (from Jeff Lew’s training video):
The Dope Sheet (sometimes called the Graph Editor) is similar to the Motion Curve Editor in that it shows the keys for each element. However, the Dope Sheet only displays the position of the keys along the timeline, but not the actual key values themselves.
The Dope Sheet makes it easy to adjust the timing of an animation.
Here’s an example of the Dope Sheet in A:M, with a block of keys selected:
A scene graph is a hierarchical representation of the scene being animated.
A constraint works to keep something within a particular range.
The current design distinguishes between limits and constraints. Each floating-point attribute can be bounded by limits (a minimum and/or maximum). Limits are always enforced immediately, which means they are “stronger” than constraints: a bounded attribute is never allowed to take a value outside its limits, not even temporarily. Constraints, in contrast, are enforced in a separate step: first the node is set up according to its location within the scene graph, then the constraints on the node are applied (these too have an order, so one constraint can override another).
This also implies that, to prevent loops, a node cannot be constrained to one of its descendants in the scene graph. There also needs to be a loop check for constraints (e.g. you can’t constrain A to B, B to C, and C to A).
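The loop check itself is a small graph search: before adding “constrain A to B”, follow the existing constraint edges from B and reject the new constraint if they lead back to A. A sketch (illustrative Python; the data layout is an assumption):

```python
# Detect whether adding the constraint "a depends on b" would close a
# loop in the existing constraint graph.

def creates_cycle(constraints, a, b):
    """constraints: dict mapping node -> list of nodes it is constrained
    to. Returns True if constraining a to b would create a loop."""
    stack, seen = [b], set()
    while stack:
        node = stack.pop()
        if node == a:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(constraints.get(node, []))
    return False
```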
The minimum and maximum rotation angle for a bone. This prevents armatures from being twisted into impossible poses.
Aim At aims a bone at a target bone. For example, this can be used to have guns on a turret automatically follow another object, or a character’s eyes follow a moving object.
Here’s an example from the Art of Illusion user manual:
Note the “cross eyed” effect. This would be taken care of by constraining only one eye with Aim At, and constraining the other eye to the first eye with an Orient Like constraint.
In his training video, Jeff Lew notes that Aim At can be used to constrain elbows and knees, to get better control over the IK solver. Of course, this assumes there’s a full IK solver running, which 1.0 won’t have.
Bone copies roll of target bone.
Attach To is exactly the same as applying both the Translate Like and Orient Like constraints. This can be used to attach an object to another object.
Reflex has a cool graphical way of representing this:
Where possible, using a graphical mechanism to represent constraints is a good thing.
Sascha: I’d use just “attach to” constraints instead of separate translate-like and orient-like constraints. The “attach to” constraint would have 6 checkboxes (translate like x, y and z and rotate like x, y and z), so the user can turn on/off the individual translate and orient constraints per axis.
David: I like this idea a lot, but I think some sort of icon at the attachment point should be used to indicate if Translate Like (a pin?) and Orient Like (a curved arrow?) are active.
Constrain To Path causes a bone to follow a spline path. JPatch doesn’t currently implement spline paths.
IK chain follows target bone.
This causes rotations applied to one bone to be applied to another bone. Consider an object constrained to an airplane that is performing a loop. If only the Translate Like constraint is applied, the constrained object will move with the airplane, but not rotate with it. The Orient Like constraint takes care of this, causing the object to rotate with the constraining object.
This causes a Translate action applied to one bone to be applied to another bone as well. For example, if you wanted to place an object in a moving car, you could constrain the object’s root bone to Translate Like the car’s root bone. Moving the car would then cause the object to move with the car.
This constrains the IK solver to find a solution that lies in a given plane. A Rotate Plane IK solver is simpler to implement than a full IK solver. This is a simple one-joint IK chain, with one end locked in place:
David: Version 1.0 will have limited support for IK. This is primarily useful when you want to lock feet in place and move the body. The feet of the character will rotate around (since they aren’t part of the chain); this can either be corrected manually, or fixed by constraining the feet with an Aim At bone. This is also useful for being able to manipulate an arm by grabbing the wrist and dragging it.
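Once the plane is fixed, the one-joint chain shown above has a closed-form solution via the law of cosines. A hedged Python sketch of planar two-bone IK (names and angle conventions are illustrative, not JPatch’s planned solver):

```python
# Analytic two-bone IK in a plane: given the upper and lower bone
# lengths and a target, return the shoulder angle (absolute) and the
# elbow angle (relative bend), clamping unreachable targets.

import math

def two_bone_ik(l1, l2, tx, ty):
    """Angles (radians) for a 2-bone chain rooted at the origin to
    reach (tx, ty); the target is clamped to the chain's reach."""
    d = math.hypot(tx, ty)
    d = max(abs(l1 - l2), min(l1 + l2, d)) or 1e-9   # reachable range
    # Elbow bend from the law of cosines.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: aim at the target, then correct for the bent elbow.
    cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```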
An example of Onion Skinning in Reflex:
Discussion These are called “ghosts” in Reflex.
Sascha: I’d move onion skinning under animation. Since actions are simple animations there is some overlap though. Are cycles special cases of actions, or should they be treated specially?
Walk Cycles allow an animator to create a reusable walk, and have it automatically applied to a path. For example, the animator might create a walk cycle that takes 12 frames for the character to move 8 units. The cycle can then be applied to an arbitrary path over any arbitrary time, and JPatch will automatically calculate the correct rate of speed for the walk cycle to be applied.
Here’s an example from the Blender User Manual
Here is a link to an image of Animation:Master’s stride editor.
Typically, a path is a user-defined spline. However, since moving a character from one point to another automatically defines a spline path, it would seem that there would be no need to create a separate path object. Here’s a shot from Reflex, which displays motion curves (another neat feature to have in JPatch):
When a cycled action is activated, JPatch could automatically display the “path” for the range of that walk cycle.
It is important to accurately measure the length of a stride (the distance a character moves in a single walk cycle), because if the stride length is measured incorrectly, the character’s feet will appear to be “slipping”.
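The bookkeeping that prevents slipping can be sketched directly from the numbers above (e.g. a 12-frame cycle with an 8-unit stride): given the path length and the time allotted, compute the playback rate and the phase of the cycle at any frame. Illustrative Python, with assumed names and constant speed along the path:

```python
# Walk-cycle rate calculation: scale the cycle's playback so that one
# full cycle elapses per stride length of distance travelled.

def playback_rate(stride_length, cycle_frames, path_length, total_frames):
    """Cycle frames consumed per scene frame so the feet don't slip."""
    cycles = path_length / stride_length      # cycles that fit the path
    return cycles * cycle_frames / total_frames

def cycle_phase(stride_length, path_length, total_frames, frame):
    """Fraction [0, 1) through the cycle at 'frame'."""
    distance = path_length * (frame / total_frames)   # distance covered
    return (distance / stride_length) % 1.0
```

For example, an 8-unit, 12-frame cycle walked along a 24-unit path over 36 frames must play at exactly one cycle-frame per scene frame.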
Blender tracks the following parameters for its Actions:
These can be incorporated into the idea of Animation Layers.
Reflex has a concept where the timeline actually consists of multiple, overlaying timelines called layers:
Use Layers in Reflex much as you would use layers in a common graphics editor. Layer movements to create the primary action, add secondary actions on top, try something new you're unsure of in a new Layer, or switch between two or more versions of the same movement to decide which you like better. You can cycle animation on individual layers, as well. Want to increase or decrease the amplitude of a movement? Layer envelopes work similar to Layer "opacity" in graphics editors but the envelope values can vary over time.
You can toggle layers on, off, and solo, similar to using a multitrack recorder:
Layers are a convenient way to create reusable motions (perhaps the name clips would be good), as well as cycles (such as walk cycles).
So each layer would have the following options:
Blender does something similar with their NLA (non-linear animation) tools, where strips of reusable animations can be used to build up animations:
At a minimum, JPatch needs to allow the following:
Sound playback can be optionally toggled on to match playback of animation, or rendered frames.
The lipsync editor allows the animator to assign visemes (facial expressions associated with sounds) to a pre-recorded audio track. Since visemes are generally mutually exclusive, the lipsync editor should make it possible to select a single viseme (and strength) per sound, automatically replacing the prior viseme.
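A sketch of that replace-on-assign rule (Python for illustration; the tiny phoneme-to-viseme table and the dict layout are hypothetical):

```python
# Mutually-exclusive viseme track: keying a viseme at a frame replaces
# whatever was previously keyed there, and the active viseme at any
# frame is the most recent key.

VISEME_FOR_PHONEME = {"AA": "open", "M": "closed", "F": "f_v", "OO": "round"}

def key_viseme(track, frame, phoneme, strength=1.0):
    """track: dict frame -> (viseme, strength); replaces any prior key."""
    track[frame] = (VISEME_FOR_PHONEME[phoneme], strength)

def active_viseme(track, frame):
    """Most recent viseme keyed at or before 'frame', or None."""
    keyed = [f for f in track if f <= frame]
    return track[max(keyed)] if keyed else None
```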
There should be some way to match phonemes or words to the timeline or waveform, since that would make it easier both to create a lipsync, and match up expressions to words.
The dopesheet for lipsync in A:M is hierarchical, breaking down words into phonemes automatically:
Discussion David: It would be nice to incorporate something like Yolo into JPatch.
Discussion David: I’m not sure what you had in mind here, but I’d leave it out, if possible.
Project Management involves how the assets of a JPatch project are maintained. JPatch should support:
JPatch 0.4 uses an XML file format.
Status: The workspace-manager class, predefined folder structure, and local history are currently being worked on.
Projects like IMP (the Internet Movie Project) exist to promote the use of computer animation tools to create collaborative films over the Internet. JPatch should be designed with the goal of supporting such projects. This includes:
Collada (COLLAborative Design Activity) defines an open standard XML schema for exchanging digital assets among various graphics software applications that might otherwise store their assets in incompatible formats. COLLADA documents that describe digital assets are XML files, usually identified with a .dae (digital asset exchange) filename extension.
The COLLADA Schema supports all the features that modern 3D interactive applications need, including programmable shader effects and physics simulation. It can also be easily extended by end users for their own specific purposes. In other words, COLLADA is not designed to be a temporary data transport mechanism, but rather to be the schema for the source data for your digital assets. It is not designed as a delivery mechanism, but to be a content holder for any target platform. COLLADA’s choice of language is XML, thus gaining many of the benefits of the eXtensible Markup Language, including native support of international UTF-8 encoding. The COLLADA XML Schema document is publicly accessible on the Internet for online content validation.
Discussion The wikipedia COLLADA article notes that “…some applications have adopted COLLADA as their native format or as one variety of native input rather than simply using it as an intermediate format.” It may be worth looking into COLLADA as a native format for JPatch.
In any event, this may be a useful format to support if it gains widespread adoption and support.
Creating a hair simulator isn’t that difficult; there’s already one written in Java that can be used. However, it’ll take a bit of work to integrate into JPatch. There needs to be some way to:
Hair guides (from A:M):
Many renderers already have support for “tubelike” objects, so rendering is probably not that problematic.
Handling feathers, leaves and other such objects adds another level of complexity, because “hairs” would need to be replaced with some sort of object, typically a “card” with a texture applied. Renderers like Sunflow support instancing of objects, but with large numbers of objects (such as what feathers would require), rendering would still take a large amount of time.
Because of this issue, it might make sense for JPatch to have a specialized rendering system which could handle hair and feathers, and be composited with other rendering pipelines.
An interesting feature in the A:M hair animation system is that the hair “thickness” can be specified with a graph:
This can then be applied to hairs, to create leaves or feathers: