data_juicer.ops.mapper.image_mmpose_mapper module#

class data_juicer.ops.mapper.image_mmpose_mapper.ImageMMPoseMapper(deploy_cfg: str = None, model_cfg: str = None, model_files: str | Sequence[str] | None = None, pose_key: str = 'pose_info', visualization_dir: str = None, *args, **kwargs)[source]#

Bases: Mapper

Mapper to perform human keypoint detection inference using MMPose models. It requires three essential components for model initialization:

- deploy_cfg (str): Path to the deployment configuration file (defines inference settings)
- model_cfg (str): Path to the model configuration file (specifies model architecture)
- model_files (List[str]): Model weight files including pre-trained weights and parameters

The implementation follows the official MMPose deployment guidelines from MMDeploy. For detailed configuration requirements and usage examples, refer to the open-mmlab/mmdeploy repository.
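
A minimal instantiation sketch, assuming an MMDeploy-exported MMPose model; every path below is a placeholder, not a file shipped with this module:

   from data_juicer.ops.mapper.image_mmpose_mapper import ImageMMPoseMapper

   # All paths are placeholders: point them at your own MMDeploy deployment
   # config, MMPose model config, and converted model files.
   mapper = ImageMMPoseMapper(
       deploy_cfg='configs/mmpose/pose-detection_onnxruntime_static.py',  # inference settings
       model_cfg='configs/rtmpose-m_8xb256-420e_coco-256x192.py',         # model architecture
       model_files=['work_dirs/rtmpose-m/end2end.onnx'],                  # converted weights
       pose_key='pose_info',                                              # output field name
       visualization_dir='./pose_vis',                                    # optional rendered output
   )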

__init__(deploy_cfg: str = None, model_cfg: str = None, model_files: str | Sequence[str] | None = None, pose_key: str = 'pose_info', visualization_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:

deploy_cfg – MMPose deployment config file.

model_cfg – MMPose model config file.

model_files – Path to the model weight files.

pose_key – Key to store pose information.

visualization_dir – Directory to save visualization results.

args – extra args

kwargs – extra args

parse_and_filter(data_sample) Dict[source]#

Extract elements necessary to represent a prediction into a dictionary.

The returned dictionary should contain only basic data elements such as strings and numbers, so that it is guaranteed to be JSON-serializable.

Parameters:

data_sample (PoseDataSample) – Predictions of the model.

Returns:

Prediction results.

Return type:

dict
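
The sketch below is illustrative only and not this mapper's actual implementation; it shows how the JSON-serializability requirement can be met by converting the keypoint arrays of an MMPose PoseDataSample (assuming the common pred_instances.keypoints / keypoint_scores layout) into plain Python lists:

   # Illustrative helper (hypothetical): flatten a PoseDataSample into basic
   # Python types so the result can be dumped to JSON without custom encoders.
   def to_serializable(data_sample) -> dict:
       instances = data_sample.pred_instances  # assumed MMPose prediction layout
       return {
           'keypoints': instances.keypoints.tolist(),              # (num_persons, num_keypoints, 2)
           'keypoint_scores': instances.keypoint_scores.tolist(),  # (num_persons, num_keypoints)
       }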

visualize_results(image, model, result, output_file)[source]#

process_single(sample, rank=None)[source]#

Sample-level processing: one sample in, one sample out (sample –> sample).

Parameters:

sample – sample to process

Returns:

processed sample
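
A minimal usage sketch, assuming the sample stores image paths under an 'images' key (a common data-juicer sample layout; verify it against your dataset schema) and reusing the mapper instantiated in the earlier example:

   # Hypothetical sample: adjust the image key and paths to your dataset.
   sample = {'images': ['data/person_001.jpg']}

   processed = mapper.process_single(sample)
   print(processed.get('pose_info'))  # pose predictions stored under pose_key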