High-performance spatial transformations and transform-graph management for robotics and computer vision. Strictly typed SE(3) transforms, camera projections, and automatic path composition.
Install:

```shell
pip install transform-graph
```

For visualization support:

```shell
pip install "transform-graph[viz]"
```
Strictly typed rigid body transformations with quaternion rotations, translations, and full composition algebra via the `*` operator.

```python
combined = tf.CameraProjection() * tf.Transform()
```
Build a graph of spatial transforms between named coordinate frames. Query any transform between frames — the graph finds the shortest path and auto-composes the chain.
```python
graph.get_transform('world', 'image')
```
Pinhole cameras with intrinsics/distortion, plus orthographic BEV/front/side projections. All compose natively with transforms.
```python
tf.transform_points(camera_projection, points_3d)
```
The `*` operator composes transforms with dimensional type safety; the result type is determined by the dimensional flow.
| Composition | Flow | Result | Use Case |
|---|---|---|---|
| Transform × Transform | 3D→3D→3D | Transform | Chain rigid body transforms |
| Projection × Transform | 3D→3D→2D | CompositeProjection | Project from any frame |
| Transform × InvProjection | 2D→3D→3D | InverseCompositeProjection | Unproject + reposition |
| Projection × InvProjection | 2D→3D→2D | MatrixTransform | Inter-image mapping (homography) |
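The dimensional bookkeeping above mirrors plain matrix algebra in homogeneous coordinates. A minimal numpy sketch (independent of the library) of the first two rows — chaining two rigid transforms stays 3D→3D, and left-multiplying by a projection turns the chain into a 3D→2D map:

```python
import numpy as np

def rigid(R, t):
    """4x4 homogeneous rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Transform × Transform: the product is another 4x4 rigid transform (3D→3D)
A = rigid(np.eye(3), [1.0, 0.0, 0.0])
B = rigid(np.eye(3), [0.0, 2.0, 0.0])
chained = A @ B

# Projection × Transform: a 3x4 pinhole matrix after a rigid move is 3D→2D
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P = K @ np.eye(3, 4)          # projection in the camera frame
composite = P @ chained       # projects points expressed in the original frame

p = np.array([1.0, 2.0, 5.0, 1.0])   # homogeneous 3D point
uv = composite @ p
uv = uv[:2] / uv[2]                  # pixel coordinates
```

The library's typed wrappers enforce exactly this flow at composition time instead of failing later with a shape error.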
Define spatial relationships between coordinate frames and query transforms automatically.
```python
import tgraph.transform as tf
import numpy as np

graph = tf.TransformGraph()
graph.add_transform('world', 'robot', tf.Translation(x=1.0, y=2.0))
graph.add_transform('robot', 'camera', tf.Transform(
    translation=[0.1, 0, 0.5],
    rotation=tf.Rotation.from_roll_pitch_yaw(pitch=-0.1).rotation,
))

# Auto-compose: world → robot → camera
world_to_camera = graph.get_transform('world', 'camera')
```
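Conceptually, `get_transform` is a shortest-path search over frames followed by matrix multiplication along the path. A toy reimplementation with numpy and BFS (not the library's actual code; it assumes each edge matrix maps points from the source frame into the destination frame, and the library's own convention may differ):

```python
import numpy as np
from collections import deque

def translation(x=0.0, y=0.0, z=0.0):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

class ToyGraph:
    def __init__(self):
        self.edges = {}  # frame -> list of (neighbor, 4x4 matrix)

    def add_transform(self, src, dst, T):
        # store both directions so queries can traverse edges backwards
        self.edges.setdefault(src, []).append((dst, T))
        self.edges.setdefault(dst, []).append((src, np.linalg.inv(T)))

    def get_transform(self, src, dst):
        # BFS finds the shortest frame path; the chain is composed on the way
        queue = deque([(src, np.eye(4))])
        seen = {src}
        while queue:
            frame, acc = queue.popleft()
            if frame == dst:
                return acc
            for nxt, T in self.edges.get(frame, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, T @ acc))
        raise KeyError(f"no path from {src} to {dst}")

g = ToyGraph()
g.add_transform('world', 'robot', translation(x=1.0, y=2.0))
g.add_transform('robot', 'camera', translation(x=0.1, z=0.5))
world_to_camera = g.get_transform('world', 'camera')
```

Because inverse edges are stored too, the same lookup answers queries in either direction, e.g. `g.get_transform('camera', 'world')`.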
Project 3D points to 2D pixels and unproject back with known depth.
```python
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]])
camera = tf.CameraProjection(intrinsic_matrix=K, image_size=(640, 480))

# 3D → 2D
pixels = tf.transform_points(camera, points_3d)

# 2D → 3D (with known depth)
points_3d = camera.inverse().unproject(pixels, depths)
```
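The underlying pinhole math is short: projection divides by depth and applies the intrinsics; unprojection inverts the intrinsics and rescales by the known depth. A self-contained numpy sketch of the round trip (helper names are illustrative, not the library API):

```python
import numpy as np

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])

def project(K, pts):
    """Pinhole projection: apply intrinsics, then divide by depth."""
    uvw = (K @ pts.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def unproject(K, pixels, depths):
    """Invert the projection given per-pixel depth."""
    ones = np.ones((len(pixels), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pixels, ones]).T).T
    return rays * depths[:, None]

pts = np.array([[0.5, -0.2, 2.0], [1.0, 1.0, 4.0]])
px = project(K, pts)
recovered = unproject(K, px, pts[:, 2])   # matches pts
```

Depth is what makes the inverse well-posed: without it, a pixel only determines a ray through the optical center.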
Derive epipolar matrices directly from the transform graph structure.
```python
E = graph.get_essential_matrix("image1", "image2")
F = graph.get_fundamental_matrix("image1", "image2")
H = graph.get_homography("image1", "image2",
                         plane_normal=[0, 0, 1], plane_distance=1.0)
```
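These matrices follow from the relative pose and intrinsics that the graph already stores. A numpy sketch of the standard formulas, assuming the relative pose maps camera-1 coordinates to camera-2 coordinates as `x2 = R @ x1 + t` and the plane satisfies `n^T x1 = d` (the library's sign conventions may differ):

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# relative pose between the two cameras: x2 = R @ x1 + t
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])
K1 = K2 = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])

E = skew(t) @ R                                    # essential matrix
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)    # fundamental matrix

# homography induced by the plane n^T x = d (camera-1 coordinates)
n, d = np.array([0.0, 0.0, 1.0]), 1.0
H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)

# sanity check: a point on the plane maps consistently through H
x1 = np.array([0.3, -0.1, 1.0])                    # n @ x1 == d
uv1 = K1 @ x1; uv1 /= uv1[2]
uv2 = K2 @ (R @ x1 + t); uv2 /= uv2[2]
uv1_mapped = H @ uv1; uv1_mapped /= uv1_mapped[2]  # equals uv2
```

The epipolar constraint `uv2 @ F @ uv1 == 0` holds for any 3D point, while the homography is exact only for points on the chosen plane.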