This group optimizes filter performance through camera-based visibility and occlusion culling. These methods ensure that strokes are produced only on the parts of the object that are visible to the camera, avoiding unnecessary calculations for regions that are blocked by other geometry or lie outside the view.
The core process is raycasting: rays are emitted from the camera towards the object to determine which parts of it are visible. The test is built from the following operations (a minimal sketch of the combined test follows the list):
Source Position: The camera's location serves as the origin of each ray.
Ray Direction: The direction of each ray is calculated by subtracting the camera's position from the evaluated point's position, so the ray points from the camera towards the point. A small bias is first added to the point's position along its normal vector so that the ray does not register the point's own surface as an occluder; this avoids false self-occlusions, especially in complex or dense scenes.
Ray Length: Once the direction is defined, the ray length is set to the distance between the camera and the biased point, so the ray travels no farther than its intended target.
Occlusion Condition: During raycasting, if the hit distance is less than the calculated ray length, the point is classified as occluded. This means that there is some intervening geometry that blocks the ray before it reaches its intended target, and thus that point is discarded.
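The following is a minimal Python sketch of this visibility test. It is illustrative only: the `raycast` callable, the parameter names, and the default `bias` value are assumptions standing in for whatever ray-intersection facility the actual filter uses.

```python
import numpy as np

def is_point_visible(camera_pos, point_pos, point_normal, raycast, bias=1e-3):
    """Return True if the evaluated point is visible from the camera.

    `raycast(origin, direction, max_distance)` is an assumed helper that
    returns the distance to the nearest hit, or None if nothing is hit
    within `max_distance`.
    """
    # Bias the target point slightly along its normal so the ray does not
    # collide with the point's own surface.
    target = np.asarray(point_pos, dtype=float) + bias * np.asarray(point_normal, dtype=float)
    origin = np.asarray(camera_pos, dtype=float)

    # Ray direction: from the camera towards the biased point.
    offset = target - origin
    ray_length = np.linalg.norm(offset)   # how far the ray may travel
    direction = offset / ray_length       # unit direction vector

    # Occlusion condition: any hit closer than the ray length means some
    # intervening geometry blocks the point from the camera.
    hit_distance = raycast(origin, direction, ray_length)
    return hit_distance is None or hit_distance >= ray_length
```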
By culling occluded points, this group effectively focuses computational resources on the areas of the object that will actually be visible in the rendered image, greatly enhancing efficiency.
The major benefit of this approach is reduced computational overhead. Rather than processing the entire geometry, which can be especially burdensome in scenes with many objects or complex animations, the system calculates strokes only for visible areas. This allows the filter to handle larger, more detailed scenes without compromising performance. Additionally, the visibility result for each point is stored for later use during stroke generation, ensuring that the final strokes align accurately with the viewer's perspective.
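As a hedged illustration of that last step, the sketch below shows one way a stored per-point visibility mask might be consumed downstream; the array names and values are hypothetical and not taken from the actual implementation.

```python
import numpy as np

# Hypothetical per-point data; in practice `visible` would be filled by a
# visibility test such as is_point_visible() above and stored alongside
# the geometry as a per-point attribute.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
visible = np.array([True, False, True])   # stored per-point visibility

# Stroke generation only ever sees the points that survived the cull.
visible_points = points[visible]
print(visible_points)
```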