pyrealsense2
Librealsense™ Python Bindings
Library for accessing Intel RealSense™ cameras.
Functions

- Given pixel coordinates and depth in an image with no distortion or inverse distortion coefficients, compute the corresponding point in 3D space relative to the same camera.
- Calculate horizontal and vertical field of view, based on video intrinsics.
- Given pixel coordinates of the color image and a minimum and maximum depth, compute the corresponding pixel coordinates in the depth image.
- Given a point in 3D space, compute the corresponding pixel coordinates in an image with no distortion or forward distortion coefficients produced by the same camera.
- Transform 3D coordinates relative to one sensor to 3D coordinates relative to another viewpoint.
Classes

- Performs alignment between a depth image and another image.
- Calibration target type.
- Calibration callback status for use in device_calibration.trigger_device_calibration.
- Calibration type for use in device_calibration.
- This information is mainly available for camera debug and troubleshooting and should not be used in applications.
- Colorizer filter: generates color images based on an input depth frame.
- Combined IMU data (gyroscope and accelerometer).
- Extends the frame class with additional frameset-related attributes and functions.
- The config allows pipeline users to request filters for the pipeline streams and device selection and configuration.
- Librealsense context class.
- Performs downsampling by using the median with a specific kernel size.
- Extends the video_frame class with additional depth-related attributes and functions.
- Extends the depth_frame class with additional disparity-related attributes and functions.
- Converts depth frames from depth representation to disparity representation and vice versa.
- Distortion model: defines how pixel coordinates should be mapped to sensor coordinates.
- Specifies advanced interfaces (capabilities) that objects may implement.
- Cross-stream extrinsics: encodes the topology describing how the different devices are oriented.
- Defines the filter workflow; inherit this class to generate your own filter.
- Interface for frame-filtering functionality.
- A stream's format identifies how binary data is encoded within a frame.
- Base class for multiple frame extensions.
- Per-frame metadata is the set of read-only properties that might be exposed for each individual frame.
- Frame queues are the simplest cross-platform synchronization primitive provided by librealsense to help developers who are not using async APIs.
- The source used to generate frames, which is usually done by the low-level driver for each sensor.
- Merges depth frames with different sequence IDs.
- The processing performed depends on the selected hole-filling mode.
- Video stream intrinsics.
- For L500 devices: provides optimized settings (presets) for specific types of usage.
- Severity of the librealsense logger.
- Specifies types of different matchers.
- Motion device intrinsics: scale, bias, and variances.
- Extends the frame class with additional motion-related attributes and functions.
- All the parameters required to define a motion stream.
- Stream profile instance which contains IMU-specific intrinsics.
- Category of the librealsense notification.
- Defines general configuration controls.
- The different types that option values can take on.
- Base class for the options interface.
- The pipeline simplifies the user's interaction with the device and computer vision processing modules.
- The pipeline profile includes a device and a selection of active streams, with specific profiles.
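The pipeline, config, and align classes are typically used together in a capture loop. A minimal sketch, assuming the pyrealsense2 package is installed and a RealSense device is connected (the stream settings below are illustrative, not required values):

```python
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
# Request a depth and a color stream; resolution/format/FPS are illustrative.
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipe.start(cfg)

# Align depth frames to the color camera's viewpoint.
align = rs.align(rs.stream.color)
try:
    frames = align.process(pipe.wait_for_frames())
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    if depth:
        # Distance in meters at the image center.
        print(depth.get_distance(320, 240))
finally:
    pipe.stop()
```

Because this sketch needs physical hardware, it raises a runtime error if no device is attached; the `try/finally` ensures the pipeline is stopped either way.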
- Generates 3D point clouds based on a depth frame.
- Extends the frame class with additional point-cloud-related attributes and functions.
- Extends the frame class with additional pose-related attributes and functions.
- All the parameters required to define a pose stream.
- Stream profile instance with an explicit pose extension type.
- Defines the processing block workflow; inherit this class to generate your own processing_block.
- Quaternion used to represent rotation.
- Records the given device and saves it to the given file in rosbag format.
- For D400 devices: provides optimized settings (presets) for specific types of usage.
- Splits depth frames with different sequence IDs.
- All the parameters required to define a motion frame.
- All the parameters required to define a sensor notification.
- All the parameters required to define a pose frame.
- All the parameters required to define a video frame.
- Spatial filter: smooths the image by calculating frames with alpha and delta settings.
- Streams are different types of data provided by RealSense devices.
- Stores details about the profile of a stream.
- Sync instance to align frames from different streams.
- Temporal filter: smooths the image by calculating multiple frames with alpha and delta settings.
- Depth thresholding filter.
- Specifies the clock in relation to which the frame timestamp was measured.
- 3D vector in Euclidean coordinate space.
- Extends the frame class with additional video-related attributes and functions.
- All the parameters required to define a video stream.
- Stream profile instance which contains additional video attributes.
- Converts frames in raw YUY format to RGB.