data_juicer.ops.mapper.video_captioning_from_vlm_mapper module#
- class data_juicer.ops.mapper.video_captioning_from_vlm_mapper.VideoCaptioningFromVLMMapper(hf_model: str = 'Qwen/Qwen3-VL-8B-Instruct', enable_vllm: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, model_params: Dict = None, sampling_params: Dict = None, *args, **kwargs)[source]#
Bases: Mapper

Generates video captions using a VLM that accepts videos as inputs.
This operator processes video samples to generate captions for the provided videos. It uses a VLM that accepts videos as inputs, such as 'Qwen/Qwen3-VL-8B-Instruct', to generate multiple caption candidates for each video. The number of generated captions and the strategy for keeping or filtering these candidates can be configured. Depending on the configuration, the final output can include both the original sample and the generated captions.
- __init__(hf_model: str = 'Qwen/Qwen3-VL-8B-Instruct', enable_vllm: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, model_params: Dict = None, sampling_params: Dict = None, *args, **kwargs)[source]#
Initialization method.
- Parameters:
hf_model – VLM model name on Hugging Face used to generate captions.
enable_vllm – If True, use vLLM to load the Hugging Face or local LLM.
caption_num – how many candidate captions to generate for each video.
keep_candidate_mode – retention strategy for the generated caption_num candidates.
- 'random_any': retain one of the generated captions at random
- 'similar_one_simhash': retain the generated caption that is most similar to the original caption (measured by SimHash)
- 'all': retain all generated captions by concatenation
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ lists of input samples with batch size $b$, and denote caption_num as $M$. For the 'random_any' and 'similar_one_simhash' modes, the total number of samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For the 'all' mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.
- Parameters:
keep_original_sample – whether to keep the original sample. If set to False, only the generated caption samples will remain in the final dataset and the original samples will be removed. It's True by default.
prompt – a string prompt to guide the generation of the VLM for all samples globally. It's None by default, which means the DEFAULT_PROMPT is used.
prompt_key – the key name of the field in samples that stores the prompt for each sample. It's used to set different prompts for different samples. If it's None, the prompt from the "prompt" parameter is used. It's None by default.
model_params – parameters for initializing the model.
sampling_params – extra parameters passed to the model call, e.g., {'temperature': 0.9, 'top_p': 0.95}.
args – extra args
kwargs – extra kwargs
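Below is a minimal instantiation sketch based on the signature above. It uses only the parameters documented here; the chosen values (3 candidates, SimHash-based selection) are illustrative, and actual model loading depends on your local Hugging Face / vLLM setup.

```python
# Minimal sketch: generate 3 caption candidates per video, keep the one most
# similar to the original caption, and preserve the original samples.
from data_juicer.ops.mapper.video_captioning_from_vlm_mapper import (
    VideoCaptioningFromVLMMapper,
)

op = VideoCaptioningFromVLMMapper(
    hf_model="Qwen/Qwen3-VL-8B-Instruct",
    enable_vllm=False,                      # set True to serve the model via vLLM
    caption_num=3,                          # generate 3 candidates per video
    keep_candidate_mode="similar_one_simhash",
    keep_original_sample=True,              # keep originals alongside captions
    sampling_params={"temperature": 0.9, "top_p": 0.95},
)
```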
- process_batched(samples, rank=None, context=False)[source]#
- Parameters:
samples – a batch of samples to process.
- Returns:
the batch of samples augmented with generated caption samples.
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ input sample lists with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for the 'random_any' and 'similar_one_simhash' modes, and $(1+M)Nb$ for the 'all' mode, when keep_original_sample is True. A worked example of this counting rule follows below.
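The counting rule in the notes above can be sanity-checked with a small, purely illustrative helper (the function below is hypothetical and not part of Data-Juicer's API):

```python
def expected_output_count(n_lists: int, batch_size: int, caption_num: int,
                          keep_candidate_mode: str,
                          keep_original_sample: bool) -> int:
    """Hypothetical helper mirroring the counting rule in the docstring notes."""
    if keep_candidate_mode in ("random_any", "similar_one_simhash"):
        generated_per_sample = 1            # one kept caption per video sample
    elif keep_candidate_mode == "all":
        generated_per_sample = caption_num  # M kept captions per video sample
    else:
        raise ValueError(f"unknown keep_candidate_mode: {keep_candidate_mode}")
    originals = 1 if keep_original_sample else 0
    return (generated_per_sample + originals) * n_lists * batch_size

# e.g. N=1 list, b=4, M=3:
assert expected_output_count(1, 4, 3, "random_any", True) == 8   # 2*N*b
assert expected_output_count(1, 4, 3, "all", True) == 16         # (1+M)*N*b
assert expected_output_count(1, 4, 3, "all", False) == 12        # M*N*b
```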