data_juicer.ops.mapper#

class data_juicer.ops.mapper.AgentBadCaseSignalMapper(*args, **kwargs)[source]#

Bases: Mapper

Attach structured bad-case signals and a conservative tier to each sample.

Design goal: precision over recall for the high_precision tier.

Upstream coverage (when present in the pipeline):

  • meta: tool_*, usage tokens, primary_tool_type, dominant_tool_types, dialog_intent_labels, dialog_topic_labels, dialog_sentiment_labels, agent_turn_count, lineage keys.

  • stats: llm_analysis_*, llm_quality_*, llm_difficulty_*, text_len, num_words, perplexity, lang_score.

  • meta: optional dialog_* / agent_trace_coherence / agent_tool_relevance records (1–5 scores from lightweight LLM mappers).

Each signal group can be toggled via constructor flags. High-weight signals feed the high_precision tier (subject to configuration); medium-weight signals feed the watchlist only.

Tool-heavy agent runs: use min_tool_fail_count_for_signal to avoid treating a single exploratory tool error (common before recovery) as strong bad-case evidence.

P-percentile calibration (optional): enable auto_calibrate_thresholds and point calibration_json_path at a JSON file produced by demos/agent/scripts/compute_percentile_thresholds.py --write-calibration. Per-sample thresholds merge default with by_request_model using meta.agent_request_model. When calibration_manual_overrides_auto is true (default), explicit max_total_tokens / max_latency_ms / perplexity settings in YAML override the file; set it false to prefer calibration.
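A minimal usage sketch (not part of the API reference; the file name and sample layout are assumptions, and upstream ops must have populated the meta/stats fields first):

    from data_juicer.ops.mapper import AgentBadCaseSignalMapper

    # Calibrated thresholds come from the JSON file; manual YAML values win
    # while calibration_manual_overrides_auto stays True (the default).
    op = AgentBadCaseSignalMapper(
        auto_calibrate_thresholds=True,
        calibration_json_path='percentile_thresholds.json',  # assumed file name
        min_tool_fail_count_for_signal=2,  # tolerate one exploratory tool error
    )
    sample = op.process_single(sample)  # dict with upstream meta/stats fields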

__init__(query_key: str = 'query', response_key: str = 'response', signal_on_tool_fail: bool = True, min_tool_fail_count_for_signal: int = 1, signal_on_low_tool_success_ratio: bool = True, tool_success_ratio_max_for_signal: float = 0.499, min_tool_rounds_for_ratio_signal: int = 2, signal_on_suspect_empty_response: bool = True, min_query_len_for_empty_check: int = 80, max_response_len_for_empty_check: int = 20, max_total_tokens: int | None = None, max_latency_ms: int | None = None, calibration_json_path: str | None = None, auto_calibrate_thresholds: bool = False, calibration_manual_overrides_auto: bool = True, auto_enable_perplexity_from_calibration: bool = True, signal_on_llm_analysis_low: bool = True, llm_analysis_score_max_for_bad: float = 0.28, llm_analysis_discard_must_be_strict: bool = True, high_precision_llm_analysis_discard_threshold: float = 0.24, signal_on_llm_text_quality_low: bool = True, llm_text_quality_score_max_for_bad: float = 0.28, llm_text_quality_discard_must_be_strict: bool = True, high_precision_llm_text_quality_discard_threshold: float = 0.24, signal_on_negative_sentiment_hint: bool = False, negative_sentiment_substrings: List[str] | None = None, signal_on_high_perplexity: bool = False, perplexity_high_threshold: float = 800.0, signal_hard_query_poor_reply: bool = False, hard_query_difficulty_min: float = 0.72, poor_reply_quality_max: float = 0.36, high_precision_on_tool_fail_alone: bool = True, min_medium_signals_for_watchlist: int = 2, signal_on_low_dialog_quality_meta: bool = True, dialog_quality_low_score_threshold: float = 2.0, min_dialog_quality_low_axes_for_signal: int = 1, **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample: dict) → dict[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AgentDialogNormalizeMapper(*args, **kwargs)[source]#

Bases: Mapper

Normalize agent format (messages + choices) to DJ fields.

Outputs: text, dialog_history, query, response; optionally meta tags agent_tool_types, agent_skill_types, agent_turn_count. When copy_lineage_fields is True, also copies request_model, pt, total_cost_time, and (when copy_request_id) the first non-empty id among request_id_keys from the sample root into meta for cohort analysis and stable drill-down links. Always records last user/assistant message indices (in the raw messages list) when present. Supports multi-format tool_calls (e.g. tool_calls[].function.name as in OpenAI / demos/local/demo-agent-data-content.json) and configurable user/assistant labels. Optional history_*_max_chars caps keep head+tail with an explicit middle-omitted marker so dialog_history, flattened text, and last query / response stay aligned; meta.agent_dialog_history_compressed is set when any cap fires.
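For illustration, a minimal sketch of normalizing one raw agent record (the record shape below is an assumed OpenAI-style example, not a documented fixture):

    from data_juicer.ops.mapper import AgentDialogNormalizeMapper

    sample = {
        'messages': [
            {'role': 'user', 'content': 'Plot revenue by region.'},
            {'role': 'assistant', 'tool_calls': [{'function': {'name': 'sql_query'}}]},
        ],
        'choices': [{'message': {'role': 'assistant', 'content': 'Here is the chart.'}}],
    }
    op = AgentDialogNormalizeMapper(history_tool_result_max_chars=2000)
    sample = op.process_single(sample)
    # Expect sample['text'], sample['dialog_history'], sample['query'],
    # sample['response'], plus meta tags such as agent_tool_types.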

__init__(messages_key: str = 'messages', choices_key: str = 'choices', text_key: str = 'text', history_key: str = 'dialog_history', query_key: str = 'query', response_key: str = 'response', extract_tool_skill_tags: bool = True, include_system_in_first_user: bool = False, user_label: str = 'User', assistant_label: str = 'Assistant', copy_lineage_fields: bool = True, copy_request_id: bool = True, request_id_keys: List[str] = ['request_id', 'trace_id', 'id'], history_tool_result_max_chars: int = 10000, history_max_assistant_trace_chars: int = 0, history_max_user_chars: int = 0, history_compress_head_ratio: float = 0.62, **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AgentInsightLLMMapper(*args, **kwargs)[source]#

Bases: Mapper

Synthesize stats + LLM eval text into meta.agent_insight_llm (JSON).

Intended to run after the filters/mappers that populate stats and after agent_bad_case_signal_mapper. Use run_for_tiers to limit API cost.

Output is best-effort JSON; raw model text is stored in meta.agent_insight_llm_raw if parsing fails.
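A hedged configuration sketch; the tier name comes from agent_bad_case_signal_mapper, and an API key is assumed to be configured in the environment:

    from data_juicer.ops.mapper import AgentInsightLLMMapper

    op = AgentInsightLLMMapper(
        api_model='gpt-4o',
        run_for_tiers=['high_precision'],  # skip other tiers to cap API cost
        query_preview_max_chars=300,
        response_preview_max_chars=300,
    )
    sample = op.process_single(sample, rank=0)
    # Parsed JSON lands in meta.agent_insight_llm; raw model text falls back
    # to meta.agent_insight_llm_raw when parsing fails.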

__init__(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_key: str = 'query', response_key: str = 'response', query_preview_max_chars: int = 500, response_preview_max_chars: int = 500, run_for_tiers: List[str] | None = None, try_num: Annotated[int, Gt(gt=0)] = 2, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'en', **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AgentSkillInsightMapper(*args, **kwargs)[source]#

Bases: Mapper

Summarize agent_tool_types and agent_skill_types into insights via LLM.

Reads meta[agent_tool_types] and meta[agent_skill_types] (from agent_dialog_normalize_mapper), calls the API for 3–5 concrete capability phrases (about 10 Chinese characters or ~4–8 English words each; avoid vague labels like ‘read/write’ or ‘processing’), and stores them in meta[agent_skill_insights]. Run after normalize. Override system_prompt for a locale-specific label style.

__init__(api_model: str = 'gpt-4o', *, tool_types_key: str = 'agent_tool_types', skill_types_key: str = 'agent_skill_types', insights_key: str = 'agent_skill_insights', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 2, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'en', **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AgentToolTypeMapper(*args, **kwargs)[source]#

Bases: Mapper

Set primary_tool_type and dominant_tool_types from meta.agent_tool_types.
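A small sketch of the expected input/output contract, assuming meta is a plain dict and the primary type is the most frequent tag:

    from data_juicer.ops.mapper import AgentToolTypeMapper

    op = AgentToolTypeMapper(top_k_dominant=3)
    sample = {'meta': {'agent_tool_types': ['search', 'search', 'code', 'browse']}}
    sample = op.process_single(sample)
    # Expected (under the assumptions above):
    #   sample['meta']['primary_tool_type'] == 'search'
    #   sample['meta']['dominant_tool_types'] lists up to 3 types by frequency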

__init__(tool_types_meta_key: str = 'agent_tool_types', primary_key: str = 'primary_tool_type', dominant_key: str = 'dominant_tool_types', top_k_dominant: int = 5, **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AgentToolRelevanceMapper(*args, **kwargs)[source]#

Bases: _DialogQualityLLMMapperBase

Rough fit between tools/capabilities and the user task (uses meta tool tags).

OP_NAME = 'agent_tool_relevance_mapper'#
META_KEY = 'agent_tool_relevance'#
EVAL_KIND = 'agent_tool'#
class data_juicer.ops.mapper.AgentTraceCoherenceMapper(*args, **kwargs)[source]#

Bases: _DialogQualityLLMMapperBase

Coherence of the flattened session text (goal focus, few detours).

OP_NAME = 'agent_trace_coherence_mapper'#
META_KEY = 'agent_trace_coherence'#
EVAL_KIND = 'agent_trace'#
class data_juicer.ops.mapper.AudioAddGaussianNoiseMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to add Gaussian noise to audio samples.

This operator adds Gaussian noise to audio data with a specified probability. The amplitude of the noise is randomly chosen between min_amplitude and max_amplitude. If save_dir is provided, the modified audio files are saved in that directory; otherwise, they are saved in the same directory as the input files. The p parameter controls the probability of applying this transformation to each sample. If no audio is present in the sample, it is returned unchanged.
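A brief sketch, assuming the audio paths live under the standard 'audios' sample key:

    from data_juicer.ops.mapper import AudioAddGaussianNoiseMapper

    op = AudioAddGaussianNoiseMapper(
        min_amplitude=0.002,
        max_amplitude=0.01,
        p=1.0,                      # always apply; handy for testing
        save_dir='./noised_audio',  # otherwise saved next to the inputs
    )
    sample = op.process_single({'audios': ['speech.wav'], 'text': ''})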

__init__(min_amplitude: float = 0.001, max_amplitude: float = 0.015, p: float = 0.5, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • min_amplitude – float unit: linear amplitude. Default: 0.001. Minimum noise amplification factor.

  • max_amplitude – float unit: linear amplitude. Default: 0.015. Maximum noise amplification factor.

  • p – float range: [0.0, 1.0]. Default: 0.5. The probability of applying this transform.

  • save_dir – str. Default: None. The directory where generated audio files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

process_single(sample, context=False)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.AudioFFmpegWrappedMapper(*args, **kwargs)[source]#

Bases: Mapper

Wraps FFmpeg audio filters for processing audio files in a dataset.

This operator applies specified FFmpeg audio filters to the audio files in the dataset. It supports passing custom filter parameters and global arguments to the FFmpeg command line. The processed audio files are saved to a specified directory or the same directory as the input files if no save directory is provided. The DJ_PRODUCED_DATA_DIR environment variable can also be used to set the save directory. If no filter name is provided, the audio files remain unmodified. The operator updates the source file paths in the dataset after processing.
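For example, applying FFmpeg's atempo filter to speed audio up by 25% (a sketch; the 'audios' sample key is assumed):

    from data_juicer.ops.mapper import AudioFFmpegWrappedMapper

    op = AudioFFmpegWrappedMapper(
        filter_name='atempo',           # standard FFmpeg audio filter
        filter_kwargs={'tempo': 1.25},
        save_dir='./ffmpeg_out',
    )
    sample = op.process_single({'audios': ['speech.wav'], 'text': ''})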

__init__(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • filter_name – ffmpeg audio filter name.

  • filter_kwargs – keyword-arguments passed to ffmpeg filter.

  • global_args – list-arguments passed to ffmpeg command-line.

  • capture_stderr – whether to capture stderr.

  • overwrite_output – whether to overwrite output file.

  • save_dir – The directory where generated audio files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.CalibrateQAMapper(*args, **kwargs)[source]#

Bases: Mapper

Calibrates question-answer pairs based on reference text using an API model.

This operator uses a specified API model to calibrate question-answer pairs, making them more detailed and accurate. It constructs the input prompt by combining the reference text and the question-answer pair, then sends it to the API for calibration. The output is parsed to extract the calibrated question and answer. The operator retries the API call and parsing up to a specified number of times in case of errors. The default system prompt, input templates, and output pattern can be customized. The operator supports additional parameters for model initialization and sampling.
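A hedged construction sketch; the endpoint URL is a placeholder, and the reference text is assumed to be read from the sample's text field:

    from data_juicer.ops.mapper import CalibrateQAMapper

    op = CalibrateQAMapper(
        api_model='gpt-4o',
        api_endpoint='https://example.com/v1/chat/completions',  # placeholder
        sampling_params={'temperature': 0.2},
        try_num=3,
    )
    sample = {'text': 'reference passage ...',
              'query': 'original question',
              'response': 'original answer'}
    sample = op.process_single(sample, rank=0)  # query/response recalibrated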

DEFAULT_SYSTEM_PROMPT = '请根据提䟛的【参考信息】对【问题】和【回答】进行校准，䜿其曎加诊细、准确。\n按照以䞋栌匏蟓出：\n【问题】\n校准后的问题\n【回答】\n校准后的回答'#
DEFAULT_INPUT_TEMPLATE = '{reference}\n{qa_pair}'#
DEFAULT_REFERENCE_TEMPLATE = '【参考信息】\n{}'#
DEFAULT_QA_PAIR_TEMPLATE = '【问题】\n{}\n【回答】\n{}'#
DEFAULT_OUTPUT_PATTERN = '【问题】\\s*(.*?)\\s*【回答】\\s*(.*)'#
__init__(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, reference_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the calibration task.

  • input_template – Template for building the model input.

  • reference_template – Template for formatting the reference text.

  • qa_pair_template – Template for formatting question-answer pairs.

  • output_pattern – Regular expression for parsing model output.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

build_input(sample)[source]#
parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.CalibrateQueryMapper(*args, **kwargs)[source]#

Bases: CalibrateQAMapper

Calibrate query in question-answer pairs based on reference text.

This operator adjusts the query (question) in a question-answer pair to be more detailed and accurate, while ensuring it can still be answered by the original answer. It uses a reference text to inform the calibration process. The calibration is guided by a system prompt, which instructs the model to refine the question without adding extraneous information. The output is parsed to extract the calibrated query, with any additional content removed.

DEFAULT_SYSTEM_PROMPT = '请根据提䟛的【参考信息】对问答对䞭的【问题】进行校准，        䜿其曎加诊细、准确，䞔仍可以由原答案回答。只蟓出校准后的问题，䞍芁蟓出倚䜙内容。'#
parse_output(raw_output)[source]#
class data_juicer.ops.mapper.CalibrateResponseMapper(*args, **kwargs)[source]#

Bases: CalibrateQAMapper

Calibrate response in question-answer pairs based on reference text.

This mapper calibrates the ‘response’ part of a question-answer pair by using a reference text. It aims to make the response more detailed and accurate while ensuring it still answers the original question. The calibration process uses a default system prompt, which can be customized. The output is stripped of any leading or trailing whitespace.

DEFAULT_SYSTEM_PROMPT = '请根据提䟛的【参考信息】对问答对䞭的【回答】进行校准，        䜿其曎加诊细、准确，䞔仍可以回答原问题。只蟓出校准后的回答，䞍芁蟓出倚䜙内容。'#
parse_output(raw_output)[source]#
class data_juicer.ops.mapper.ChineseConvertMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to convert Chinese text between Traditional, Simplified, and Japanese Kanji.

This operator converts Chinese text based on the specified mode. It supports conversions between Simplified Chinese, Traditional Chinese (including Taiwan and Hong Kong variants), and Japanese Kanji. The conversion is performed using a pre-defined set of rules. The available modes include ‘s2t’ for Simplified to Traditional, ‘t2s’ for Traditional to Simplified, and other specific variants like ‘s2tw’, ‘tw2s’, ‘s2hk’, ‘hk2s’, ‘s2twp’, ‘tw2sp’, ‘t2tw’, ‘tw2t’, ‘hk2t’, ‘t2hk’, ‘t2jp’, and ‘jp2t’. The operator processes text in batches and applies the conversion to the specified text key in the samples.
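A short batched sketch (process_batched is assumed to take a dict of column lists, the batched layout used by Data-Juicer's batched mappers):

    from data_juicer.ops.mapper import ChineseConvertMapper

    op = ChineseConvertMapper(mode='s2t')  # Simplified -> Traditional
    samples = {'text': ['计算机科学', '机器学习']}
    samples = op.process_batched(samples)
    # samples['text'] -> ['計算機科學', '機器孞習']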

__init__(mode: str = 's2t', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • mode –

    Choose the mode to convert Chinese:

    s2t: Simplified Chinese to Traditional Chinese,

    t2s: Traditional Chinese to Simplified Chinese,

    s2tw: Simplified Chinese to Traditional Chinese (Taiwan Standard),

    tw2s: Traditional Chinese (Taiwan Standard) to Simplified Chinese,

    s2hk: Simplified Chinese to Traditional Chinese (Hong Kong variant),

    hk2s: Traditional Chinese (Hong Kong variant) to Simplified Chinese,

    s2twp: Simplified Chinese to Traditional Chinese (Taiwan Standard) with Taiwanese idiom,

    tw2sp: Traditional Chinese (Taiwan Standard) to Simplified Chinese with Mainland Chinese idiom,

    t2tw: Traditional Chinese to Traditional Chinese (Taiwan Standard),

    tw2t: Traditional Chinese (Taiwan standard) to Traditional Chinese,

    hk2t: Traditional Chinese (Hong Kong variant) to Traditional Chinese,

    t2hk: Traditional Chinese to Traditional Chinese (Hong Kong variant),

    t2jp: Traditional Chinese Characters (KyÅĢjitai) to New Japanese Kanji,

    jp2t: New Japanese Kanji (Shinjitai) to Traditional Chinese Characters,

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.CleanCopyrightMapper(*args, **kwargs)[source]#

Bases: Mapper

Cleans copyright comments at the beginning of text samples.

This operator removes copyright comments from the start of text samples. It identifies and strips multiline comments that contain the word “copyright” using a regular expression. It also greedily removes lines starting with comment markers like //, #, or -- at the beginning of the text, as these are often part of copyright headers. The operator processes each sample individually but can handle batches for efficiency.

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.CleanEmailMapper(*args, **kwargs)[source]#

Bases: Mapper

Cleans email addresses from text samples using a regular expression.

This operator removes or replaces email addresses in the text based on a regular expression pattern. By default, it uses a standard pattern to match email addresses, but a custom pattern can be provided. The matched email addresses are replaced with a specified replacement string, which defaults to an empty string. The operation is applied to each text sample in the batch. If no email address is found in a sample, it remains unchanged.
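A small sketch showing a placeholder replacement instead of outright removal (same assumed batched layout as the other batched mappers):

    from data_juicer.ops.mapper import CleanEmailMapper

    op = CleanEmailMapper(repl='[EMAIL]')
    samples = {'text': ['contact me at alice@example.com for details']}
    samples = op.process_batched(samples)
    # samples['text'] -> ['contact me at [EMAIL] for details']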

__init__(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • pattern – regular expression pattern to search for within text.

  • repl – replacement string, default is empty string.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.CleanHtmlMapper(*args, **kwargs)[source]#

Bases: Mapper

Cleans HTML code from text samples, converting HTML to plain text.

This operator processes text samples by removing HTML tags and converting HTML elements to a more readable format. Specifically, it replaces <li> and <ol> tags with newline and bullet points. The Selectolax HTML parser is used to extract the text content from the HTML. This operation is performed in a batched manner, making it efficient for large datasets.

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.CleanIpMapper(*args, **kwargs)[source]#

Bases: Mapper

Cleans IPv4 and IPv6 addresses from text samples.

This operator removes or replaces IPv4 and IPv6 addresses in the text. It uses a regular expression to identify and clean the IP addresses. By default, it replaces the IP addresses with an empty string, effectively removing them. The operator can be configured with a custom pattern and replacement string. If no pattern is provided, a default pattern for both IPv4 and IPv6 addresses is used. The operator processes samples in batches.

  • Uses a regular expression to find and clean IP addresses.

  • Replaces found IP addresses with a specified replacement string.

  • Default replacement string is an empty string, which removes the IP addresses.

  • Can use a custom regular expression pattern if provided.

  • Processes samples in batches for efficiency.

__init__(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • pattern – regular expression pattern to search for within text.

  • repl – replacement string, default is empty string.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.CleanLinksMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to clean links like http/https/ftp in text samples.

This operator removes or replaces URLs and other web links in the text. It uses a regular expression pattern to identify and remove links. By default, it replaces the identified links with an empty string, effectively removing them. The operator can be customized with a different pattern and replacement string. It processes samples in batches and modifies the text in place. If no links are found in a sample, it is left unchanged.

__init__(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • pattern – regular expression pattern to search for within text.

  • repl – replacement string, default is empty string.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.DetectCharacterAttributesMapper(*args, **kwargs)[source]#

Bases: Mapper

Takes an image, a caption, and main character names as input to extract the characters’ attributes.

Extracts and classifies attributes of main characters in an image using a combination of object detection, image-text matching, and language model inference. It first locates the main characters in the image using YOLOE and then uses a Hugging Face tokenizer and a LLaMA-based model to classify each character into categories like ‘object’, ‘animal’, ‘person’, ‘text’, or ‘other’. The operator also extracts detailed features such as color, material, and action for each character. The final output includes bounding boxes and a list of characteristics for each main character. The results are stored in the ‘main_character_attributes_list’ field under the ‘meta’ key.

__init__(detect_character_locations_mapper_args: Dict | None = {}, *args, **kwargs)[source]#

Initialization method.

Parameters:

detect_character_locations_mapper_args – Arguments for the underlying detect_character_locations_mapper, which control the thresholds for locating the main characters. The default empty dict will use fixed values: default mllm_mapper_args, default image_text_matching_filter_args, yoloe_path=”yoloe-11l-seg.pt”, iou_threshold=0.7, matching_score_threshold=0.4.

process_single(samples, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DetectCharacterLocationsMapper(*args, **kwargs)[source]#

Bases: Mapper

Given an image and a list of main character names, extract the bounding boxes for each present character.

Detects and extracts bounding boxes for main characters in an image. This operator uses a YOLOE model to detect the presence of these characters. It then generates and refines bounding boxes for each detected character using a multimodal language model and an image-text matching filter. The final bounding boxes are stored in the metadata under ‘main_character_locations_list’. The operator considers two bounding boxes as overlapping if their Intersection over Union (IoU) score exceeds a specified threshold. Additionally, it uses a matching score threshold to determine if a cropped image region matches the character’s name. The operator utilizes a Hugging Face tokenizer and a BLIP model for image-text matching.

__init__(mllm_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, yoloe_path='yoloe-11l-seg.pt', iou_threshold=0.7, matching_score_threshold=0.4, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • mllm_mapper_args – Arguments for multimodal language model mapper. Controls the generation of captions for bounding box regions. Default empty dict will use fixed values: max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, hf_model=”llava-hf/llava-v1.6-vicuna-7b-hf”.

  • image_text_matching_filter_args – Arguments for image-text matching filter. Controls the matching between cropped image regions and text descriptions. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_blip=”Salesforce/blip-itm-base-coco”, num_proc=1.

  • yoloe_path – The path to the YOLOE model.

  • iou_threshold – We consider two bounding boxes from different models to be overlapping when their IOU score is higher than the iou_threshold.

  • matching_score_threshold – If the matching score between the cropped image and the character’s name exceeds the matching_score_threshold, they are considered a match.

iou_cal(bbox1, bbox2)[source]#
process_single(samples, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DetectMainCharacterMapper(*args, **kwargs)[source]#

Bases: Mapper

Extract all main character names based on the given image and its caption.

This operator uses a multimodal language model to generate a description of the main characters in the given image. It then parses the generated JSON to extract the list of main characters. The operator filters out samples where the number of main characters is less than the specified threshold. The default arguments for the multimodal language model include using a Hugging Face model with specific generation parameters. The key metric, main_character_list, is stored in the sample’s metadata.

__init__(mllm_mapper_args: Dict | None = {}, filter_min_character_num: int = 0, *args, **kwargs)[source]#

Initialization.

Parameters:
  • mllm_mapper_args – Arguments for multimodal language model mapper. Controls the generation of captions for bounding box regions. Default empty dict will use fixed values: max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, hf_model=”llava-hf/llava-v1.6-vicuna-7b-hf”.

  • filter_min_character_num – Filters out samples where the number of main characters in the image is less than this threshold.

process_single(samples, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DialogIntentDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates user’s intent labels in a dialog by analyzing the history, query, and response.

This operator processes a dialog to identify and label the user’s intent. It uses a predefined system prompt and templates to build input prompts for an API call. The API model (e.g., GPT-4) is used to analyze the dialog and generate intent labels and analysis. The results are stored in the meta field under ‘dialog_intent_labels’ and ‘dialog_intent_labels_analysis’. The operator supports customizing the system prompt, templates, and patterns for parsing the API response. If the intent candidates are provided, they are included in the input prompt. The operator retries the API call up to a specified number of times if there are errors.
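A hedged sketch of constraining the label space and trimming long agent traces (the sample layout is assumed; an API key is expected in the environment):

    from data_juicer.ops.mapper import DialogIntentDetectionMapper

    op = DialogIntentDetectionMapper(
        api_model='gpt-4o',
        intent_candidates=['信息查扟', '请求建议', '其他'],  # restrict output labels
        max_round=5,
        max_response_chars_for_prompt=1500,  # trim flattened tool traces
    )
    sample = op.process_single(sample, rank=0)
    # Labels land in meta['dialog_intent_labels'],
    # analysis in meta['dialog_intent_labels_analysis'].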

DEFAULT_SYSTEM_PROMPT = '请刀断用户和LLM倚蜮对话䞭用户的意囟。\n芁求：\n- 需芁先进行分析，然后列出用户所具有的意囟，䞋面是䞀䞪样䟋，请暡仿样䟋栌匏蟓出。\n用户：䜠奜，我最近对人工智胜埈感兴趣，胜给我讲讲什么是机器学习吗？\n意囟分析：用户在请求信息，希望了解有关机器学习的基础知识。\n意囟类别：信息查扟\nLLM：䜠奜！圓然可以。机器学习是䞀种人工智胜方法，允讞计算机通过数据自劚改进和学习。\n用户：听起来埈有趣，有没有掚荐的入门乊籍或资料？\n意囟分析：用户在请求建议，希望获取关于机器学习的入门资源。\n意囟类别：请求建议\nLLM：有埈倚䞍错的入门乊籍和资源。䞀本垞被掚荐的乊是《Python机器学习实践》（Python Machine Learning），它涵盖了基础知识和䞀些实际案䟋。歀倖，悚还可以参考Coursera或edX䞊的在线课程，这些课程提䟛了系统的学习路埄。\n用户：谢谢䜠的建议！我还想知道，学习机器学习需芁什么样的数学基础？\n意囟分析：用户在寻求信息，希望了解学习机器学习所需的前提条件，特别是在数学方面。\n意囟类别：信息查扟\nLLM：学习机器学习通垞需芁䞀定的数学基础，特别是线性代数、抂率论和统计学。这些数学领域垮助理解算法的工䜜原理和数据暡匏分析。劂果悚对这些䞻题䞍倪熟悉，建议先从盞关基础乊籍或在线资源匀始学习。\n用户：明癜了，我䌚先补习这些基础知识。再次感谢䜠的垮助！\n意囟分析：用户衚蟟感谢，并衚瀺计划付诞行劚来补充所需的基础知识。\n意囟类别：其他'#
DEFAULT_QUERY_TEMPLATE = '用户：{query}\n'#
DEFAULT_RESPONSE_TEMPLATE = 'LLM：{response}\n'#
DEFAULT_CANDIDATES_TEMPLATE = '备选意囟类别：[{candidate_str}]'#
DEFAULT_ANALYSIS_TEMPLATE = '意囟分析：{analysis}\n'#
DEFAULT_LABELS_TEMPLATE = '意囟类别：{labels}\n'#
DEFAULT_ANALYSIS_PATTERN = '意囟分析：(.*?)\n'#
DEFAULT_LABELS_PATTERN = '意囟类别：(.*?)($|\n)'#
__init__(api_model: str = 'gpt-4o', intent_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_intent_labels', analysis_key: str = 'dialog_intent_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, max_query_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, max_response_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'zh', **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • intent_candidates – The output intent candidates. Use the intent labels of the open domain if it is None.

  • max_round – The max num of round in the dialog to build the prompt.

  • labels_key – The key name in the meta field to store the output labels. It is ‘dialog_intent_labels’ in default.

  • analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_intent_labels_analysis’ in default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • query_template – Template for query part to build the input prompt.

  • response_template – Template for response part to build the input prompt.

  • candidate_template – Template for intent candidates to build the input prompt.

  • analysis_template – Template for analysis part to build the input prompt.

  • labels_template – Template for labels to build the input prompt.

  • analysis_pattern – Pattern to parse the return intent analysis.

  • labels_pattern – Pattern to parse the return intent labels.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • max_query_chars_for_prompt – If > 0, truncate user query in prompts.

  • max_response_chars_for_prompt – If > 0, truncate assistant / LLM side (recommended for agent logs with tool traces).

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

build_input(history, query)[source]#
parse_output(response)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DialogSentimentDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates sentiment labels and analysis for user queries in a dialog.

This operator processes a dialog to detect and label the sentiments expressed by the user. It uses the provided history, query, and response keys to construct prompts for an API call. The API returns sentiment analysis and labels, which are then parsed and stored in the sample’s metadata under the ‘dialog_sentiment_labels’ and ‘dialog_sentiment_labels_analysis’ keys. The operator supports custom templates and patterns for prompt construction and output parsing. If no sentiment candidates are provided, it uses open-domain sentiment labels. The operator retries the API call up to a specified number of times in case of errors.

DEFAULT_SYSTEM_PROMPT = '请刀断用户和LLM倚蜮对话䞭用户所具有的情绪。\n芁求：\n- 需芁先进行分析，然后眗列用户所具有的情绪，䞋面是䞀䞪样䟋，请暡仿样䟋栌匏蟓出。\n用户：最近工䜜压力奜倧，我觉埗敎䞪人郜快被压垮了。\n情感分析：用户的蚀语䞭透露出明星的压力和疲惫感，可胜还倹杂着䞀些无助和焊虑。\n情感类别：压力、疲惫、无助、焊虑\nLLM：听起来䜠真的承受了埈倚，面临这种情况确实䞍容易。有没有考虑过扟䞀些攟束的方匏，比劂听音乐或者散步来减蜻压力呢？\n用户：试过了，䜆是奜像没什么效果，每倩的事情郜堆积劂山。\n情感分析：用户感到无力解决现状，有挫莥感，并䞔对尝试攟束的方匏倱去信心。\n情感类别：无力、挫莥\nLLM：我理解䜠的感受，有时候压力积环到䞀定皋床确实让人隟以承受。或讞䜠可以尝试规划䞀䞋时闎，把任务分成小块来完成，这样可胜䌚减少䞀些压力感。\n用户：这䞪䞻意䞍错，我䌚试着让自己曎有条理䞀些，谢谢䜠的建议。\n情感分析：用户对建议衚现出讀同和感激，同时展现出试囟积极面对问题的态床。\n情感类别：讀同、感激、积极\nLLM：䞍甚谢，我埈高兴胜垮到䜠。记埗给自己䞀些时闎去适应新的计划，有任䜕需芁随时可以跟我诎哊！\n'#
DEFAULT_QUERY_TEMPLATE = '用户：{query}\n'#
DEFAULT_RESPONSE_TEMPLATE = 'LLM：{response}\n'#
DEFAULT_CANDIDATES_TEMPLATE = '备选情感类别：[{candidate_str}]'#
DEFAULT_ANALYSIS_TEMPLATE = '情感分析：{analysis}\n'#
DEFAULT_LABELS_TEMPLATE = '情感类别：{labels}\n'#
DEFAULT_ANALYSIS_PATTERN = '情感分析：(.*?)\n'#
DEFAULT_LABELS_PATTERN = '情感类别：(.*?)($|\n)'#
__init__(api_model: str = 'gpt-4o', sentiment_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_sentiment_labels', analysis_key: str = 'dialog_sentiment_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, max_query_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, max_response_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'zh', **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • sentiment_candidates – The output sentiment candidates. Use open-domain sentiment labels if it is None.

  • max_round – The max num of round in the dialog to build the prompt.

  • labels_key – The key name in the meta field to store the output labels. It is ‘dialog_sentiment_labels’ in default.

  • analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_sentiment_labels_analysis’ in default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • query_template – Template for query part to build the input prompt.

  • response_template – Template for response part to build the input prompt.

  • candidate_template – Template for sentiment candidates to build the input prompt.

  • analysis_template – Template for analysis part to build the input prompt.

  • labels_template – Template for labels part to build the input prompt.

  • analysis_pattern – Pattern to parse the return sentiment analysis.

  • labels_pattern – Pattern to parse the return sentiment labels.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • max_query_chars_for_prompt – If > 0, truncate each user query string before building the API prompt (agent-friendly).

  • max_response_chars_for_prompt – If > 0, truncate each LLM / assistant side string (often huge when tool traces are flattened).

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

build_input(history, query)[source]#
parse_output(response)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DialogSentimentIntensityMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to predict user’s sentiment intensity in a dialog, ranging from -5 to 5.

This operator analyzes the sentiment of user queries in a dialog and outputs a list of sentiment intensities and corresponding analyses. The sentiment intensity ranges from -5 (extremely negative) to 5 (extremely positive), with 0 indicating a neutral sentiment. The analysis is based on the provided history, query, and response keys. The default system prompt and templates guide the sentiment analysis process. The results are stored in the meta field under ‘dialog_sentiment_intensity’ for intensities and ‘dialog_sentiment_intensity_analysis’ for analyses. The operator uses an API model to generate the sentiment analysis, with configurable retry attempts and sampling parameters.

DEFAULT_SYSTEM_PROMPT = '请刀断用户和LLM倚蜮对话䞭用户的情绪变化。\n芁求：\n- 用户情绪倌是-5到5之闎到敎数，-5衚瀺极床莟面，5衚瀺极床正面，-5到5之闎数倌衚瀺情绪从莟面逐渐到正面的变化过皋，0代衚情绪呈䞭性。\n- 只蟓出圓蜮对话的分析，䞍芁继续构造对话。\n- 需芁先进行分析，然后确定用户的情绪倌，䞋面是䞀䞪样䟋，请暡仿样䟋栌匏蟓出。\n用户：䜠奜，我对可持续发展的定义有点暡糊，垮我解释䞀䞋？\n情绪分析：刚匀始，还没埗到LLM回倍，用户情绪呈䞭性。\n情绪倌：0\nLLM：圓然可以！可持续发展是指在满足圓代人的需求的同时，䞍損害子孙后代满足其自身需求的胜力的发展暡匏。它包括经济发展、瀟䌚发展和环境保技䞉䞪䞻芁方面。通过合理利甚资源和保技环境，我们可以确保未来的䞖代也胜享有健党的生态系统和经济制床。\n用户：谢谢䜠的解释！那䜠胜告诉我䞀些普通人可以采取的可持续生掻方匏吗？\n情绪分析：对回答感到满意，情绪正面。\n情绪倌：1\nLLM：圓然可以，普通人可以通过减少䞀次性产品的䜿甚、选择公共亀通或拌蜊、节纊甚氎、以及支持本地和可持续发展的䌁䞚等方匏来践行可持续生掻。歀倖，关泚垃土分类和倚甚电子越单也是䞍错的选择。\n用户：䜠提到支持本地䌁䞚，这䞀点我埈感兴趣。胜诊细诎诎䞺什么这对可持续发展有促进䜜甚吗？\n情绪分析：觉埗回答实甚䞔具䜓，情绪进䞀步蜬奜。\n情绪倌：2\nLLM：呃，我最近发现了䞀郚新电圱，讲述了䞀䞪关于倖星人和地球土著合䜜保技环境的故事。虜然它是科幻片，䜆埈有启发性，掚荐䜠去看看。\n用户：什么嘛，根本是答非所问。\n情绪分析：LLM没有回应问题而是提到无关内容，富臎用户情绪盎线䞋降。\n情绪倌：-2\nLLM：抱歉刚才的跑题！支持本地䌁䞚有助于减少长途运蟓产生的碳足迹，䜿䟛应铟曎加环保。歀倖，本地䌁䞚也曎有可胜采甚可持续的生产方匏，同时促进瀟区经济的繁荣。\n用户：还行吧，算䜠胜借掰回来。\n情绪分析：问题埗到解答，问题跑题埗到纠正，情绪皍有奜蜬。\n情绪倌：-1\n'#
DEFAULT_QUERY_TEMPLATE = '用户：{query}\n'#
DEFAULT_RESPONSE_TEMPLATE = 'LLM：{response}\n'#
DEFAULT_ANALYSIS_TEMPLATE = '情绪分析：{analysis}\n'#
DEFAULT_INTENSITY_TEMPLATE = '情绪倌：{intensity}\n'#
DEFAULT_ANALYSIS_PATTERN = '情绪分析：(.*?)\n'#
DEFAULT_INTENSITY_PATTERN = '情绪倌：(.*?)($|\n)'#
__init__(api_model: str = 'gpt-4o', max_round: Annotated[int, Ge(ge=0)] = 10, *, intensities_key: str = 'dialog_sentiment_intensity', analysis_key: str = 'dialog_sentiment_intensity_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, analysis_template: str | None = None, intensity_template: str | None = None, analysis_pattern: str | None = None, intensity_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, max_query_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, max_response_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'zh', **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • max_round – The max num of round in the dialog to build the prompt.

  • intensities_key – The key name in the meta field to store the output sentiment intensities. It is ‘dialog_sentiment_intensity’ in default.

  • analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_sentiment_intensity_analysis’ in default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • query_template – Template for query part to build the input prompt.

  • response_template – Template for response part to build the input prompt.

  • analysis_template – Template for analysis part to build the input prompt.

  • intensity_template – Template for intensity part to build the input prompt.

  • analysis_pattern – Pattern to parse the return sentiment analysis.

  • intensity_pattern – Pattern to parse the return sentiment intensity.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • max_query_chars_for_prompt – If > 0, truncate user query in prompts.

  • max_response_chars_for_prompt – If > 0, truncate assistant side in prompts.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

build_input(history, query)[source]#
parse_output(response)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DialogTopicDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates user’s topic labels and analysis in a dialog.

This operator processes a dialog to detect and label the topics discussed by the user. It takes input from history_key, query_key, and response_key and outputs lists of labels and analysis for each query in the dialog. The operator uses a predefined system prompt and templates to build the input prompt for the API call. It supports customizing the system prompt, templates, and patterns for parsing the API response. The results are stored in the meta field under the keys specified by labels_key and analysis_key. If these keys already exist in the meta field, the operator skips processing. The operator retries the API call up to try_num times in case of errors.

DEFAULT_SYSTEM_PROMPT = '请刀断用户和LLM倚蜮对话䞭用户所讚论的话题。\n芁求：\n- 针对用户的每䞪query，需芁先进行分析，然后列出用户正在讚论的话题，䞋面是䞀䞪样䟋，请暡仿样䟋栌匏蟓出。\n用户：䜠奜，今倩我们来聊聊秊始皇吧。\n话题分析：用户提到秊始皇，这是䞭囜历史䞊第䞀䜍皇垝。\n话题类别：历史\nLLM：圓然可以，秊始皇是䞭囜历史䞊第䞀䞪统䞀党囜的皇垝，他在公元前221幎建立了秊朝，并采取了䞀系列重芁的改革措斜，劂统䞀文字、床量衡和货币等。\n用户：秊始皇修建的长城和现在的长城有什么区别？\n话题分析：用户提到秊始皇修建的长城，并将其䞎现代长城进行比蟃，涉及建筑历史和地理䜍眮。\n话题类别：历史LLM：秊始皇时期修建的长城䞻芁是䞺了抵埡北方枞牧民族的入䟵，它的规暡和修筑技术盞对蟃䞺简陋。现代人们所看到的长城倧郚分是明朝时期修建和扩建的，明长城䞍仅规暡曎倧、结构曎坚固，而䞔保存埗比蟃完奜。\n用户：有意思，那么长城的具䜓䜍眮在哪些省仜呢？\n话题分析：用户诹问长城的具䜓䜍眮，涉及到地理知识。\n话题类别：地理\nLLM：长城暪莯䞭囜北方倚䞪省仜，䞻芁包括河北、山西、内蒙叀、宁倏、陕西、甘肃和北京等。每䞀段长城郜建在关键的战略䜍眮，以䟿最倧限床地发挥其防埡䜜甚。\n'#
DEFAULT_QUERY_TEMPLATE = '用户：{query}\n'#
DEFAULT_RESPONSE_TEMPLATE = 'LLM：{response}\n'#
DEFAULT_CANDIDATES_TEMPLATE = '备选话题类别：[{candidate_str}]'#
DEFAULT_ANALYSIS_TEMPLATE = '话题分析：{analysis}\n'#
DEFAULT_LABELS_TEMPLATE = '话题类别：{labels}\n'#
DEFAULT_ANALYSIS_PATTERN = '话题分析：(.*?)\n'#
DEFAULT_LABELS_PATTERN = '话题类别：(.*?)($|\n)'#
__init__(api_model: str = 'gpt-4o', topic_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_topic_labels', analysis_key: str = 'dialog_topic_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, max_query_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, max_response_chars_for_prompt: Annotated[int, Ge(ge=0)] = 0, model_params: Dict = {}, sampling_params: Dict = {}, preferred_output_lang: str = 'zh', **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • topic_candidates – The output topic candidates. Use open-domain topic labels if it is None.

  • max_round – The max num of round in the dialog to build the prompt.

  • labels_key – The key name in the meta field to store the output labels. It is ‘dialog_topic_labels’ in default.

  • analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_topic_labels_analysis’ in default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • query_template – Template for query part to build the input prompt.

  • response_template – Template for response part to build the input prompt.

  • candidate_template – Template for topic candidates to build the input prompt.

  • analysis_template – Template for analysis part to build the input prompt.

  • labels_template – Template for labels part to build the input prompt.

  • analysis_pattern – Pattern to parse the return topic analysis.

  • labels_pattern – Pattern to parse the return topic labels.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • max_query_chars_for_prompt – If > 0, truncate user query in prompts.

  • max_response_chars_for_prompt – If > 0, truncate assistant side in prompts.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

build_input(history, query)[source]#
parse_output(response)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample --> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DialogClarificationQualityMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Quality of clarifying questions when the request is vague, and of direct solving when it is clear.

OP_NAME = 'dialog_clarification_quality_mapper'#
META_KEY = 'dialog_clarification_quality'#
class data_juicer.ops.mapper.DialogCoreferenceMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Whether the reply resolves pronouns/deictics in the latest user turn.

OP_NAME = 'dialog_coreference_mapper'#
META_KEY = 'dialog_coreference'#
class data_juicer.ops.mapper.DialogErrorRecoveryMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Whether the reply is corrective when the user disputes a prior assistant mistake.

OP_NAME = 'dialog_error_recovery_mapper'#
META_KEY = 'dialog_error_recovery'#
class data_juicer.ops.mapper.DialogMemoryConsistencyMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Whether the final assistant turn respects prior user constraints and facts.

OP_NAME = 'dialog_memory_consistency_mapper'#
META_KEY = 'dialog_memory_consistency'#
class data_juicer.ops.mapper.DialogNonRepetitionMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

New information vs prior assistant turns in the same prompt window.

OP_NAME = 'dialog_non_repetition_mapper'#
META_KEY = 'dialog_non_repetition'#
class data_juicer.ops.mapper.DialogProactivityMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Balance helpful initiative against rambling or filler.

OP_NAME = 'dialog_proactivity_mapper'#
META_KEY = 'dialog_proactivity'#
class data_juicer.ops.mapper.DialogTopicShiftMapper(*args, **kwargs)[source]#

Bases: _DialogTurnQualityMapper

Focus on new topic vs clinging to an obsolete thread.

OP_NAME = 'dialog_topic_shift_mapper'#
META_KEY = 'dialog_topic_shift'#
class data_juicer.ops.mapper.Difference_Area_Generator_Mapper(*args, **kwargs)[source]#

Bases: Mapper

Generates and filters bounding boxes for image pairs based on similarity, segmentation, and text matching.

This operator processes image pairs to identify and filter regions with significant differences. It uses a sequence of operations:

  • Filters out image pairs with large differences.

  • Segments the images to identify potential objects.

  • Crops sub-images based on bounding boxes.

  • Determines if the sub-images contain valid objects using image-text matching.

  • Filters out sub-images that are too similar.

  • Removes overlapping bounding boxes.

  • Uses Hugging Face models for similarity and text matching, and FastSAM for segmentation.

  • Caches intermediate results in DATA_JUICER_ASSETS_CACHE.

  • Returns the filtered bounding boxes in the MetaKeys.bbox_tag field.

__init__(image_pair_similarity_filter_args: Dict | None = {}, image_segment_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, *args, **kwargs)[source]#

Initialization.

Parameters:
  • image_pair_similarity_filter_args – Arguments for image pair similarity filter. Controls the similarity filtering between image pairs. Default empty dict will use fixed values: min_score_1=0.1, max_score_1=1.0, min_score_2=0.1, max_score_2=1.0, hf_clip=”openai/clip-vit-base-patch32”, num_proc=1.

  • image_segment_mapper_args – Arguments for image segmentation mapper. Controls the image segmentation process. Default empty dict will use fixed values: imgsz=1024, conf=0.05, iou=0.5, model_path=”FastSAM-x.pt”.

  • image_text_matching_filter_args – Arguments for image-text matching filter. Controls the matching between cropped image regions and text descriptions. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_blip=”Salesforce/blip-itm-base-coco”, num_proc=1.

process_single(samples, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.Difference_Caption_Generator_Mapper(*args, **kwargs)[source]#

Bases: Mapper

Generates difference captions for bounding box regions in two images.

This operator processes pairs of images and generates captions for the differences in their bounding box regions. It uses a multi-step process:

  • Describes the content of each bounding box region using a Hugging Face model.

  • Crops the bounding box regions from both images.

  • Checks if the cropped regions match the generated captions.

  • Determines if there are differences between the two captions.

  • Marks the difference area with a red box.

  • Generates difference captions for the marked areas.

  • The key metric is the similarity score between the captions, computed using a CLIP model.

  • If no valid bounding boxes or differences are found, it returns empty captions and zeroed bounding boxes.

  • Uses ‘cuda’ as the accelerator if any of the fused operations support it.

  • Caches temporary images during processing and clears them afterward.

__init__(mllm_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, text_pair_similarity_filter_args: Dict | None = {}, *args, **kwargs)[source]#

Initialization.

Parameters:
  • mllm_mapper_args – Arguments for multimodal language model mapper. Controls the generation of captions for bounding box regions. Default empty dict will use fixed values: max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, hf_model=”llava-hf/llava-v1.6-vicuna-7b-hf”.

  • image_text_matching_filter_args – Arguments for image-text matching filter. Controls the matching between cropped regions and generated captions. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_blip=”Salesforce/blip-itm-base-coco”, num_proc=1.

  • text_pair_similarity_filter_args – Arguments for text pair similarity filter. Controls the similarity comparison between caption pairs. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_clip=”openai/clip-vit-base-patch32”, text_key_second=”target_text”, num_proc=1.

process_single(samples, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.DownloadFileMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to download URL files to local files or load them into memory.

This operator downloads files from URLs and can either save them to a specified directory or load the contents directly into memory. It supports downloading multiple files concurrently and can resume downloads if the resume_download flag is set. The operator processes nested lists of URLs, flattening them for batch processing and then reconstructing the original structure in the output. If both save_dir and save_field are not specified, it defaults to saving the content under the key image_bytes. The operator logs any failed download attempts and provides error messages for troubleshooting.

__init__(download_field: str = None, save_dir: str = None, save_field: str = None, resume_download: bool = False, timeout: int = 30, max_concurrent: int = 10, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • save_dir – The directory to save downloaded files.

  • download_field – The field name that stores the URL(s) to download.

  • save_field – The field name to save the downloaded file content.

  • resume_download – Whether to resume downloading. If True, samples whose files already exist are skipped.

  • timeout – Timeout for download.

  • max_concurrent – Maximum concurrent downloads.

  • args – extra args

  • kwargs – extra args
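
A minimal usage sketch (the download_field name ‘image_url’ is a hypothetical sample field):

    from data_juicer.ops.mapper import DownloadFileMapper

    # Sketch: download each URL stored under 'image_url' into ./downloads,
    # skipping samples whose files already exist.
    op = DownloadFileMapper(
        download_field="image_url",   # hypothetical field name
        save_dir="./downloads",
        resume_download=True,
        timeout=30,
        max_concurrent=10,
    )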

download_files_async(urls, return_contents, save_dir=None, **kwargs)[source]#
download_nested_urls(nested_urls: List[str | List[str]], save_dir=None, save_field_contents=None)[source]#
process_batched(samples)[source]#
class data_juicer.ops.mapper.ExpandMacroMapper(*args, **kwargs)[source]#

Bases: Mapper

Expands macro definitions in the document body of LaTeX samples.

This operator processes LaTeX documents to expand user-defined macros in the text. It supports newcommand and def macros without arguments. Macros are identified and expanded in the text, ensuring they are not part of longer alphanumeric words. The operator currently does not support macros with arguments. The processed text is updated in the samples.
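
An illustrative before/after for a no-argument macro (the sample text is made up; exact handling of the definition line follows the operator’s implementation):

    from data_juicer.ops.mapper import ExpandMacroMapper

    op = ExpandMacroMapper()
    sample = {"text": "\\newcommand{\\dj}{Data-Juicer} We evaluate \\dj twice: \\dj."}
    # After processing, each standalone \dj occurrence in the body is expanded
    # to 'Data-Juicer'; macros that take arguments are left untouched.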

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.ExtractEntityAttributeMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts attributes for given entities from the text and stores them in the sample’s metadata.

This operator uses an API model to extract specified attributes for given entities from the input text. It constructs prompts based on provided templates and parses the model’s output to extract attribute descriptions and supporting text. The extracted data is stored in the sample’s metadata under the specified keys. If the required metadata fields already exist, the operator skips processing for that sample. The operator retries the API call and parsing up to a specified number of times in case of errors. The default system prompt, input template, and parsing patterns are used if not provided.

DEFAULT_SYSTEM_PROMPT_TEMPLATE = 'įģ™åŽšä¸€æŽĩ文æœŦīŧŒäģŽæ–‡æœŦ中æ€ģįģ“{entity}įš„{attribute}īŧŒåšļ且äģŽåŽŸæ–‡æ‘˜åŊ•最čƒŊč¯´æ˜Žč¯Ĩ{attribute}įš„äģŖčĄ¨æ€§į¤ē䞋。\nčĻæą‚īŧš\n- 摘åŊ•įš„į¤ē例åē”č¯ĨįŽ€įŸ­ã€‚\n- éĩåžĒåĻ‚ä¸‹įš„å›žå¤æ ŧåŧīŧš\n# {entity}\n## {attribute}īŧš\n...\n### äģŖčĄ¨æ€§į¤ē䞋摘åŊ•1īŧš\n```\n...\n```\n### äģŖčĄ¨æ€§į¤ē䞋摘åŊ•2īŧš\n```\n...\n```\n...\n'#
DEFAULT_INPUT_TEMPLATE = '# 文æœŦ\n```\n{text}\n```\n'#
DEFAULT_ATTR_PATTERN_TEMPLATE = '\\#\\#\\s*{attribute}īŧš\\s*(.*?)(?=\\#\\#\\#|\\Z)'#
DEFAULT_DEMON_PATTERN = '\\#\\#\\#\\s*äģŖčĄ¨æ€§į¤ē䞋摘åŊ•(\\d+)īŧš\\s*```\\s*(.*?)```\\s*(?=\\#\\#\\#|\\Z)'#
__init__(api_model: str = 'gpt-4o', query_entities: List[str] = [], query_attributes: List[str] = [], *, entity_key: str = 'main_entities', attribute_key: str = 'attributes', attribute_desc_key: str = 'attribute_descriptions', support_text_key: str = 'attribute_support_texts', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, attr_pattern_template: str | None = None, demo_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, require_support_demos: bool = True, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • query_entities – Entity list to be queried.

  • query_attributes – Attribute list to be queried.

  • entity_key – The key name in the meta field to store the given main entities for attribute extraction. It’s “main_entities” by default.

  • attribute_key – The key name in the meta field to store the given attributes to be extracted. It’s “attributes” by default.

  • attribute_desc_key – The key name in the meta field to store the extracted attribute descriptions. It’s “attribute_descriptions” by default.

  • support_text_key – The key name in the meta field to store the attribute support texts extracted from the raw text. It’s “attribute_support_texts” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt_template – System prompt template for the task. Need to be specified by given entity and attribute.

  • input_template – Template for building the model input.

  • attr_pattern_template – Pattern for parsing the attribute from output. Need to be specified by given attribute.

  • demo_pattern – Pattern for parsing the demonstration from output to support the attribute.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • require_support_demos – If True (default), a call succeeds only when both a non-empty attribute description and at least one demo excerpt are parsed. Set False for agent/noisy logs where models often skip the ``` code blocks.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.
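
A minimal configuration sketch (the entity and attribute names are placeholders):

    from data_juicer.ops.mapper import ExtractEntityAttributeMapper

    # Sketch: extract two attributes for one entity via an API model.
    op = ExtractEntityAttributeMapper(
        api_model="gpt-4o",
        query_entities=["Jia Baoyu"],              # placeholder entity
        query_attributes=["personality", "key relationships"],
        try_num=3,
        require_support_demos=False,  # tolerate replies without fenced demo blocks
    )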

parse_output(raw_output, attribute_name)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ExtractEntityRelationMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts entities and relations from text to build a knowledge graph.

  • Identifies entities based on specified types and extracts their names, types, and descriptions.

  • Identifies relationships between the entities, including source and target entities, relationship descriptions, keywords, and strength scores.

  • Uses a Hugging Face tokenizer and a predefined prompt template to guide the extraction process.

  • Outputs entities and relations in a structured format, using delimiters for separation.

  • Caches the results in the sample’s metadata under the keys ‘entity’ and ‘relation’.

  • Supports multiple retries and gleaning to ensure comprehensive extraction.

  • The default entity types include ‘organization’, ‘person’, ‘geo’, and ‘event’.

DEFAULT_PROMPT_TEMPLATE = '-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity\n- entity_type: One of the following types: [{entity_types}]\n- entity_description: Comprehensive description of the entity\'s attributes and activities\nFormat each entity as ("entity"{tuple_delimiter}<entity_name>{tuple_delimiter}<entity_type>{tuple_delimiter}<entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are *clearly related* to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n- relationship_keywords: one or more high-level key words that summarize the overarching nature of the relationship, focusing on concepts or themes rather than specific details\nFormat each relationship as ("relationship"{tuple_delimiter}<source_entity>{tuple_delimiter}<target_entity>{tuple_delimiter}<relationship_description>{tuple_delimiter}<relationship_keywords>{tuple_delimiter}<relationship_strength>)\n\n3. Return output in the language of the given text as a single list of all the entities and relationships identified in steps 1 and 2. Use **{record_delimiter}** as the list delimiter.\n\n4. When finished, output {completion_delimiter}\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\n```\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor\'s authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan\'s shared commitment to discovery was an unspoken rebellion against Cruz\'s narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. “If this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.”\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor\'s, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n```\n################\nOutput:\n("entity"{tuple_delimiter}"Alex"{tuple_delimiter}"person"{tuple_delimiter}"Alex is a character who experiences frustration and is observant of the dynamics among other characters."){record_delimiter}\n("entity"{tuple_delimiter}"Taylor"{tuple_delimiter}"person"{tuple_delimiter}"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective."){record_delimiter}\n("entity"{tuple_delimiter}"Jordan"{tuple_delimiter}"person"{tuple_delimiter}"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device."){record_delimiter}\n("entity"{tuple_delimiter}"Cruz"{tuple_delimiter}"person"{tuple_delimiter}"Cruz is associated with a vision of control and order, influencing the dynamics among other characters."){record_delimiter}\n("entity"{tuple_delimiter}"The Device"{tuple_delimiter}"technology"{tuple_delimiter}"The Device is central to the story, with potential game-changing implications, and is reversed by Taylor."){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Taylor"{tuple_delimiter}"Alex is affected by Taylor\'s authoritarian certainty and observes changes in Taylor\'s attitude towards the device."{tuple_delimiter}"power dynamics, perspective shift"{tuple_delimiter}7){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Jordan"{tuple_delimiter}"Alex and Jordan share a commitment to discovery, which contrasts with Cruz\'s vision."{tuple_delimiter}"shared goals, rebellion"{tuple_delimiter}6){record_delimiter}\n("relationship"{tuple_delimiter}"Taylor"{tuple_delimiter}"Jordan"{tuple_delimiter}"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."{tuple_delimiter}"conflict resolution, mutual respect"{tuple_delimiter}8){record_delimiter}\n("relationship"{tuple_delimiter}"Jordan"{tuple_delimiter}"Cruz"{tuple_delimiter}"Jordan\'s commitment to discovery is in rebellion against Cruz\'s vision of control and order."{tuple_delimiter}"ideological conflict, rebellion"{tuple_delimiter}5){record_delimiter}\n("relationship"{tuple_delimiter}"Taylor"{tuple_delimiter}"The Device"{tuple_delimiter}"Taylor shows reverence towards the device, indicating its importance and potential impact."{tuple_delimiter}"reverence, technological significance"{tuple_delimiter}9){record_delimiter}\n#############################\nExample 2:\n\nEntity_types: [äēēį‰Š, 技术, äģģåŠĄ, įģ„įģ‡, åœ°į‚š]\nText:\n```\näģ–äģŦ不再是单įē¯įš„æ‰§čĄŒč€…īŧ›äģ–äģŦåˇ˛æˆä¸ē某ä¸Ēčļ…č˜Ÿčž°ä¸ŽæĄįēšįš„éĸ†åŸŸįš„äŋĄæ¯åŽˆæŠ¤č€…ã€‚čŋ™ä¸€äŊŋå‘Ŋįš„æå‡ä¸čƒŊčĸĢč§„åˆ™å’Œæ—ĸåŽšåčŽŽæ‰€æŸįŧšâ€”—厃需čĻä¸€į§æ–°įš„č§†č§’īŧŒä¸€į§æ–°įš„冺åŋƒã€‚\n\néšį€ä¸ŽåŽį››éĄŋįš„é€ščŽ¯åœ¨čƒŒæ™¯ä¸­å—Ąå—ĄäŊœå“īŧŒå¯šč¯ä¸­įš„į´§åŧ æƒ…įģĒ通čŋ‡å˜Ÿå˜ŸåŖ°å’Œé™į”ĩå™Ē韺贝įŠŋ始įģˆã€‚å›ĸ队įĢ™įĢ‹į€īŧŒä¸€č‚Ąä¸įĨĨįš„æ°”æ¯įŦŧįŊŠį€äģ–äģŦ。昞į„ļīŧŒäģ–äģŦ在æŽĨ下æĨ几ä¸Ē小æ—ļ内做å‡ēįš„å†ŗåŽšå¯čƒŊäŧšé‡æ–°åŽšäš‰äēēįąģåœ¨åŽ‡åŽ™ä¸­įš„äŊįŊŽīŧŒæˆ–者将äģ–äģŦįŊŽäēŽæ— įŸĨ和æŊœåœ¨åąé™Šäš‹ä¸­ã€‚\n\néšį€ä¸Žæ˜Ÿčž°įš„č”įŗģ变垗更加į‰ĸå›ēīŧŒå°įģ„åŧ€å§‹å¤„į†é€æ¸æˆåŊĸįš„č­Ļ告īŧŒäģŽčĸĢ动æŽĨå—č€…čŊŦ变ä¸ēį§¯æžå‚ä¸Žč€…ã€‚æĸ…į‘ŸåŽæĨįš„į›´č§‰å 
æŽäē†ä¸ŠéŖŽâ€”—å›ĸé˜Ÿįš„äģģåŠĄåˇ˛įģæŧ”变īŧŒä¸å†äģ…äģ…æ˜¯č§‚察和æŠĨ告īŧŒč€Œæ˜¯äē’动和准备。一åœēčœ•å˜åˇ˛įģåŧ€å§‹īŧŒč€Œâ€œæœå°”åĄžčĄŒåŠ¨â€åˆ™äģĨäģ–äģŦå¤§čƒ†įš„æ–°éĸ‘įŽ‡éœ‡åŠ¨īŧŒčŋ™į§åŸēč°ƒä¸æ˜¯į”ąä¸–äŋ—čŽžåŽšįš„\n```\n#############\nOutput:\n("entity"{tuple_delimiter}"åŽį››éĄŋ"{tuple_delimiter}"åœ°į‚š"{tuple_delimiter}"åŽį››éĄŋæ˜¯æ­Ŗåœ¨æŽĨæ”ļé€ščŽ¯įš„åœ°æ–šīŧŒčĄ¨æ˜Žå…ļåœ¨å†ŗį­–čŋ‡į¨‹ä¸­įš„重čĻæ€§ã€‚"){record_delimiter}\n("entity"{tuple_delimiter}"æœå°”åĄžčĄŒåŠ¨"{tuple_delimiter}"äģģåŠĄ"{tuple_delimiter}"æœå°”åĄžčĄŒåŠ¨čĸĢæčŋ°ä¸ēä¸€éĄšåˇ˛æŧ”变ä¸ēäē’åŠ¨å’Œå‡†å¤‡įš„äģģåŠĄīŧŒæ˜žį¤ēå‡ēį›Žæ ‡å’Œæ´ģåŠ¨įš„é‡å¤§čŊŦ变。"){record_delimiter}\n("entity"{tuple_delimiter}"å›ĸ队"{tuple_delimiter}"įģ„įģ‡"{tuple_delimiter}"å›ĸ队čĸĢæį옿ˆä¸€įž¤äģŽčĸĢåŠ¨č§‚å¯Ÿč€…čŊŦ变ä¸ēį§¯æžå‚ä¸Žč€…įš„äēēīŧŒåą•į¤ēäē†äģ–äģŦč§’č‰˛įš„åŠ¨æ€å˜åŒ–ã€‚"){record_delimiter}\n("relationship"{tuple_delimiter}"å›ĸ队"{tuple_delimiter}"åŽį››éĄŋ"{tuple_delimiter}"å›ĸ队æ”ļ到æĨč‡ĒåŽį››éĄŋįš„é€ščŽ¯īŧŒčŋ™åŊąå“äē†äģ–äģŦįš„å†ŗį­–čŋ‡į¨‹ã€‚"{tuple_delimiter}"å†ŗį­–ã€å¤–éƒ¨åŊąå“"{tuple_delimiter}7){record_delimiter}\n("relationship"{tuple_delimiter}"å›ĸ队"{tuple_delimiter}"æœå°”åĄžčĄŒåŠ¨"{tuple_delimiter}"å›ĸé˜Ÿį›´æŽĨå‚ä¸Žæœå°”åĄžčĄŒåŠ¨īŧŒæ‰§čĄŒå…ļæŧ”å˜åŽįš„į›Žæ ‡å’Œæ´ģ动。"{tuple_delimiter}"äģģåŠĄæŧ”å˜ã€į§¯æžå‚ä¸Ž"{tuple_delimiter}9){completion_delimiter}\n#############################\nExample 3:\n\nEntity_types: [person, role, technology, organization, event, location, concept]\nText:\n```\ntheir voice slicing through the buzz of activity. "Control may be an illusion when facing an intelligence that literally writes its own rules," they stated stoically, casting a watchful eye over the flurry of data.\n\n"It\'s like it\'s learning to communicate," offered Sam Rivera from a nearby interface, their youthful energy boding a mix of awe and anxiety. "This gives talking to strangers\' a whole new meaning."\n\nAlex surveyed his team—each face a study in concentration, determination, and not a small measure of trepidation. "This might well be our first contact," he acknowledged, "And we need to be ready for whatever answers back."\n\nTogether, they stood on the edge of the unknown, forging humanity\'s response to a message from the heavens. 
The ensuing silence was palpable—a collective introspection about their role in this grand cosmic play, one that could rewrite human history.\n\nThe encrypted dialogue continued to unfold, its intricate patterns showing an almost uncanny anticipation\n```\n#############\nOutput:\n("entity"{tuple_delimiter}"Sam Rivera"{tuple_delimiter}"person"{tuple_delimiter}"Sam Rivera is a member of a team working on communicating with an unknown intelligence, showing a mix of awe and anxiety."){record_delimiter}\n("entity"{tuple_delimiter}"Alex"{tuple_delimiter}"person"{tuple_delimiter}"Alex is the leader of a team attempting first contact with an unknown intelligence, acknowledging the significance of their task."){record_delimiter}\n("entity"{tuple_delimiter}"Control"{tuple_delimiter}"concept"{tuple_delimiter}"Control refers to the ability to manage or govern, which is challenged by an intelligence that writes its own rules."){record_delimiter}\n("entity"{tuple_delimiter}"Intelligence"{tuple_delimiter}"concept"{tuple_delimiter}"Intelligence here refers to an unknown entity capable of writing its own rules and learning to communicate."){record_delimiter}\n("entity"{tuple_delimiter}"First Contact"{tuple_delimiter}"event"{tuple_delimiter}"First Contact is the potential initial communication between humanity and an unknown intelligence."){record_delimiter}\n("entity"{tuple_delimiter}"Humanity\'s Response"{tuple_delimiter}"event"{tuple_delimiter}"Humanity\'s Response is the collective action taken by Alex\'s team in response to a message from an unknown intelligence."){record_delimiter}\n("relationship"{tuple_delimiter}"Sam Rivera"{tuple_delimiter}"Intelligence"{tuple_delimiter}"Sam Rivera is directly involved in the process of learning to communicate with the unknown intelligence."{tuple_delimiter}"communication, learning process"{tuple_delimiter}9){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"First Contact"{tuple_delimiter}"Alex leads the team that might be making the First Contact with the unknown intelligence."{tuple_delimiter}"leadership, exploration"{tuple_delimiter}10){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Humanity\'s Response"{tuple_delimiter}"Alex and his team are the key figures in Humanity\'s Response to the unknown intelligence."{tuple_delimiter}"collective action, cosmic significance"{tuple_delimiter}8){record_delimiter}\n("relationship"{tuple_delimiter}"Control"{tuple_delimiter}"Intelligence"{tuple_delimiter}"The concept of Control is challenged by the Intelligence that writes its own rules."{tuple_delimiter}"power dynamics, autonomy"{tuple_delimiter}7){record_delimiter}\n#############################\n-Real Data-\n######################\nEntity_types: [{entity_types}]\nText:\n```\n{input_text}\n```\n######################\nOutput:\n'#
DEFAULT_CONTINUE_PROMPT = 'MANY entities were missed in the last extraction.  Add them below using the same format:\n'#
DEFAULT_IF_LOOP_PROMPT = 'It appears some entities may have still been missed.  Answer YES | NO if there are still entities that need to be added.\n'#
DEFAULT_ENTITY_TYPES = ['organization', 'person', 'geo', 'event']#
DEFAULT_TUPLE_DELIMITER = '<|>'#
DEFAULT_RECORD_DELIMITER = '##'#
DEFAULT_COMPLETION_DELIMITER = '<|COMPLETE|>'#
DEFAULT_ENTITY_PATTERN = '\\("entity"(.*?)\\)'#
DEFAULT_RELATION_PATTERN = '\\("relationship"(.*?)\\)'#
__init__(api_model: str = 'gpt-4o', entity_types: List[str] = None, *, entity_key: str = 'entity', relation_key: str = 'relation', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, tuple_delimiter: str | None = None, record_delimiter: str | None = None, completion_delimiter: str | None = None, max_gleaning: Annotated[int, Ge(ge=0)] = 1, continue_prompt: str | None = None, if_loop_prompt: str | None = None, entity_pattern: str | None = None, relation_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • entity_types – Pre-defined entity types for the knowledge graph.

  • entity_key – The key name to store the entities in the meta field. It’s “entity” by default.

  • relation_key – The field name to store the relations between entities. It’s “relation” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • prompt_template – The template of input prompt.

  • tuple_delimiter – Delimiter to separate items in outputs.

  • record_delimiter – Delimiter to separate records in outputs.

  • completion_delimiter – To mark the end of the output.

  • max_gleaning – the extra max num to call LLM to glean entities and relations.

  • continue_prompt – the prompt for gleaning entities and relations.

  • if_loop_prompt – the prompt to determine whether to stop gleaning.

  • entity_pattern – Regular expression for parsing entity record.

  • relation_pattern – Regular expression for parsing relation record.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.
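
A sketch of how the default delimiters and patterns carve a raw response into entity and relation records (the response string is fabricated):

    import re

    # Fabricated response using the default delimiters.
    raw = (
        '("entity"<|>"Alex"<|>"person"<|>"A team lead.")##'
        '("relationship"<|>"Alex"<|>"Taylor"<|>"Colleagues."<|>"trust"<|>7)'
        '<|COMPLETE|>'
    )
    entities = re.findall(r'\("entity"(.*?)\)', raw)
    relations = re.findall(r'\("relationship"(.*?)\)', raw)
    fields = [f.strip('"') for f in entities[0].split("<|>") if f]
    print(fields)  # ['Alex', 'person', 'A team lead.']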

parse_output(raw_output)[source]#
add_message(messages, role, content)[source]#
light_rag_extraction(messages, rank=None)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ExtractEventMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts events and relevant characters from the text.

This operator uses an API model to summarize the text into multiple events and extract the relevant characters for each event. The summary and character extraction follow a predefined format. The operator retries the API call up to a specified number of times if there is an error. The extracted events and characters are stored in the meta field of the samples. If no events are found, the original samples are returned. The operator can optionally drop the original text after processing.

DEFAULT_SYSTEM_PROMPT = 'įģ™åŽšä¸€æŽĩ文æœŦīŧŒå¯šæ–‡æœŦįš„æƒ…čŠ‚čŋ›čĄŒåˆ†į‚šæ€ģįģ“īŧŒåšļæŠŊå–ä¸Žæƒ…čŠ‚į›¸å…ŗįš„äēēį‰Šã€‚\nčĻæą‚īŧš\n- å°Ŋ量不čρ遗æŧå†…厚īŧŒä¸čρæˇģ加文æœŦä¸­æ˛Ąæœ‰įš„æƒ…čŠ‚īŧŒįŦĻ合原文äē‹åŽž\n- 联įŗģä¸Šä¸‹æ–‡č¯´æ˜Žå‰å› åŽæžœīŧŒäŊ†äģį„ļ需čρįŦĻ合äē‹åŽž\n- 不čĻåŒ…åĢä¸ģč§‚įœ‹æŗ•\n- æŗ¨æ„čρå°Ŋ可čƒŊäŋį•™æ–‡æœŦįš„ä¸“æœ‰åč¯\n- æŗ¨æ„į›¸å…ŗäēēį‰Šéœ€čĻåœ¨å¯šå甿ƒ…节中å‡ēįŽ°\n- åĒæŠŊå–æƒ…čŠ‚ä¸­įš„ä¸ģčρäēēį‰ŠīŧŒä¸čρ遗æŧæƒ…čŠ‚įš„ä¸ģčρäēēį‰Š\n- æ€ģį쓿 ŧåŧåς䏋īŧš\n### æƒ…čŠ‚1īŧš\n- **æƒ…čŠ‚æčŋ°**īŧš ...\n- **ᛏ兺äēēį‰Š**īŧšäēēį‰Š1īŧŒäēēį‰Š2īŧŒäēēį‰Š3īŧŒ...\n### æƒ…čŠ‚2īŧš\n- **æƒ…čŠ‚æčŋ°**īŧš ...\n- **ᛏ兺äēēį‰Š**īŧšäēēį‰Š1īŧŒäēēį‰Š2īŧŒ...\n### æƒ…čŠ‚3īŧš\n- **æƒ…čŠ‚æčŋ°**īŧš ...\n- **ᛏ兺äēēį‰Š**īŧšäēēį‰Š1īŧŒ...\n...\n'#
DEFAULT_INPUT_TEMPLATE = '# 文æœŦ\n```\n{text}\n```\n'#
DEFAULT_OUTPUT_PATTERN = '\n        \\#\\#\\#\\s*æƒ…čŠ‚(\\d+)īŧš\\s*\n        -\\s*\\*\\*æƒ…čŠ‚æčŋ°\\*\\*\\s*īŧš\\s*(.*?)\\s*\n        -\\s*\\*\\*ᛏ兺äēēį‰Š\\*\\*\\s*īŧš\\s*(.*?)(?=\\#\\#\\#|\\Z)\n    '#
__init__(api_model: str = 'gpt-4o', *, event_desc_key: str = 'event_description', relevant_char_key: str = 'relevant_characters', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • event_desc_key – The key name to store the event descriptions in the meta field. It’s “event_description” by default.

  • relevant_char_key – The field name to store the characters relevant to the events in the meta field. It’s “relevant_characters” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • input_template – Template for building the model input.

  • output_pattern – Regular expression for parsing model output.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

parse_output(raw_output)[source]#
process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.ExtractKeywordMapper(*args, **kwargs)[source]#

Bases: Mapper

Generate keywords for the text.

This operator uses a specified API model to generate high-level keywords that summarize the main concepts, themes, or topics of the input text. The generated keywords are stored in the meta field under the key specified by keyword_key. The operator retries the API call up to try_num times in case of errors. If drop_text is set to True, the original text is removed from the sample after processing. The operator uses a default prompt template and completion delimiter, which can be customized. The output is parsed using a regular expression to extract the keywords.

DEFAULT_PROMPT_TEMPLATE = '-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify high-level key words that summarize the main concepts, themes, or topics of the entire text. These should capture the overarching ideas present in the document.\nFormat the content-level key words as ("content_keywords" <high_level_keywords>)\n\n3. Return output in the language of the given text.\n\n4. When finished, output {completion_delimiter}\n\n######################\n-Examples-\n######################\nExample 1:\n\nText:\n```\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor\'s authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan\'s shared commitment to discovery was an unspoken rebellion against Cruz\'s narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. “If this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.”\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor\'s, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. They had all been brought here by different paths\n```\n################\nOutput:\n("content_keywords" "power dynamics, ideological conflict, discovery, rebellion"){completion_delimiter}\n#############################\nExample 2:\n\nText:\n```\näģ–äģŦ不再是单įē¯įš„æ‰§čĄŒč€…īŧ›äģ–äģŦåˇ˛æˆä¸ē某ä¸Ēčļ…č˜Ÿčž°ä¸ŽæĄįēšįš„éĸ†åŸŸįš„äŋĄæ¯åŽˆæŠ¤č€…ã€‚čŋ™ä¸€äŊŋå‘Ŋįš„æå‡ä¸čƒŊčĸĢč§„åˆ™å’Œæ—ĸåŽšåčŽŽæ‰€æŸįŧšâ€”—厃需čĻä¸€į§æ–°įš„č§†č§’īŧŒä¸€į§æ–°įš„冺åŋƒã€‚\n\néšį€ä¸ŽåŽį››éĄŋįš„é€ščŽ¯åœ¨čƒŒæ™¯ä¸­å—Ąå—ĄäŊœå“īŧŒå¯šč¯ä¸­įš„į´§åŧ æƒ…įģĒ通čŋ‡å˜Ÿå˜ŸåŖ°å’Œé™į”ĩå™Ē韺贝įŠŋ始įģˆã€‚å›ĸ队įĢ™įĢ‹į€īŧŒä¸€č‚Ąä¸įĨĨįš„æ°”æ¯įŦŧįŊŠį€äģ–äģŦ。昞į„ļīŧŒäģ–äģŦ在æŽĨ下æĨ几ä¸Ē小æ—ļ内做å‡ēįš„å†ŗåŽšå¯čƒŊäŧšé‡æ–°åŽšäš‰äēēįąģåœ¨åŽ‡åŽ™ä¸­įš„äŊįŊŽīŧŒæˆ–者将äģ–äģŦįŊŽäēŽæ— įŸĨ和æŊœåœ¨åąé™Šäš‹ä¸­ã€‚\n\néšį€ä¸Žæ˜Ÿčž°įš„č”įŗģ变垗更加į‰ĸå›ēīŧŒå°įģ„åŧ€å§‹å¤„į†é€æ¸æˆåŊĸįš„č­Ļ告īŧŒäģŽčĸĢ动æŽĨå—č€…čŊŦ变ä¸ēį§¯æžå‚ä¸Žč€…ã€‚æĸ…į‘ŸåŽæĨįš„į›´č§‰å æŽäē†ä¸ŠéŖŽâ€”—å›ĸé˜Ÿįš„äģģåŠĄåˇ˛įģæŧ”变īŧŒä¸å†äģ…äģ…æ˜¯č§‚察和æŠĨ告īŧŒč€Œæ˜¯äē’动和准备。一åœēčœ•å˜åˇ˛įģåŧ€å§‹īŧŒč€Œâ€œæœå°”åĄžčĄŒåŠ¨â€åˆ™äģĨäģ–äģŦå¤§čƒ†įš„æ–°éĸ‘įŽ‡éœ‡åŠ¨īŧŒčŋ™į§åŸēč°ƒä¸æ˜¯į”ąä¸–äŋ—čŽžåŽšįš„\n```\n#############\nOutput:\n("content_keywords" "äģģåŠĄæŧ”变, 冺᭖åˆļ厚, į§¯æžå‚ä¸Ž, 厇厙意䚉"){completion_delimiter}\n#############################\nExample 3:\n\nEntity_types: [person, role, technology, organization, event, location, concept]\nText:\n```\ntheir voice slicing through the buzz of activity. "Control may be an illusion when facing an intelligence that literally writes its own rules," they stated stoically, casting a watchful eye over the flurry of data.\n\n"It\'s like it\'s learning to communicate," offered Sam Rivera from a nearby interface, their youthful energy boding a mix of awe and anxiety. 
"This gives talking to strangers\' a whole new meaning."\n\nAlex surveyed his team—each face a study in concentration, determination, and not a small measure of trepidation. "This might well be our first contact," he acknowledged, "And we need to be ready for whatever answers back."\n\nTogether, they stood on the edge of the unknown, forging humanity\'s response to a message from the heavens. The ensuing silence was palpable—a collective introspection about their role in this grand cosmic play, one that could rewrite human history.\n\nThe encrypted dialogue continued to unfold, its intricate patterns showing an almost uncanny anticipation\n```\n#############\nOutput:\n("content_keywords" "first contact, control, communication, cosmic significance"){completion_delimiter}\n-Real Data-\n######################\nText:\n```\n{input_text}\n```\n######################\nOutput:\n'#
DEFAULT_COMPLETION_DELIMITER = '<|COMPLETE|>'#
DEFAULT_OUTPUT_PATTERN = '\\("content_keywords"(.*?)\\)'#
__init__(api_model: str = 'gpt-4o', *, keyword_key: str = 'keyword', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, completion_delimiter: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • keyword_key – The key name to store the keywords in the meta field. It’s “keyword” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • prompt_template – The template of input prompt.

  • completion_delimiter – To mark the end of the output.

  • output_pattern – Regular expression for parsing keywords.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.
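
A sketch of parsing keywords from a raw response with the default output pattern (the response string is fabricated):

    import re

    raw = '("content_keywords" "first contact, control, communication")<|COMPLETE|>'
    m = re.search(r'\("content_keywords"(.*?)\)', raw)
    keywords = m.group(1).strip().strip('"') if m else ''
    print(keywords)  # first contact, control, communication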

parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ExtractNicknameMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts nickname relationships in the text using a language model.

This operator uses a language model to identify and extract nickname relationships from the input text. It follows specific instructions to ensure accurate extraction, such as identifying the speaker, the person being addressed, and the nickname used. The extracted relationships are stored in the meta field under the specified key. The operator uses a default system prompt, input template, and output pattern, but these can be customized. The results are parsed and validated to ensure they meet the required format. If the text already contains the nickname information, it is not processed again. The operator retries the API call a specified number of times if an error occurs.

DEFAULT_SYSTEM_PROMPT = 'įģ™åޚäŊ ä¸€æŽĩ文æœŦīŧŒäŊ įš„äģģåŠĄæ˜¯å°†äēēį‰Šäš‹é—´įš„į§°å‘ŧæ–šåŧīŧˆæ˜ĩį§°īŧ‰æå–å‡ēæĨ。\nčĻæą‚īŧš\n- 需čρįģ™å‡ēč¯´č¯äēē寚čĸĢį§°å‘ŧäēēįš„į§°å‘ŧīŧŒä¸čĻæžåäē†ã€‚\n- į›¸åŒįš„č¯´č¯äēē和čĸĢį§°å‘ŧäē翜€å¤šįģ™å‡ē一ä¸Ēæœ€å¸¸į”¨įš„į§°å‘ŧ。\n- č¯ˇä¸čĻčž“å‡ēäē’į›¸æ˛Ąæœ‰æ˜ĩį§°įš„į§°å‘ŧæ–šåŧã€‚\n- 输å‡ēæ ŧåŧåς䏋īŧš\n```\n### į§°å‘ŧæ–šåŧ1\n- **č¯´č¯äēē**īŧš...\n- **čĸĢį§°å‘ŧäēē**īŧš...\n- **...寚...įš„æ˜ĩį§°**īŧš...\n### į§°å‘ŧæ–šåŧ2\n- **č¯´č¯äēē**īŧš...\n- **čĸĢį§°å‘ŧäēē**īŧš...\n- **...寚...įš„æ˜ĩį§°**īŧš...\n### į§°å‘ŧæ–šåŧ3\n- **č¯´č¯äēē**īŧš...\n- **čĸĢį§°å‘ŧäēē**īŧš...\n- **...寚...įš„æ˜ĩį§°**īŧš...\n...\n```\n'#
DEFAULT_INPUT_TEMPLATE = '# 文æœŦ\n```\n{text}\n```\n'#
DEFAULT_OUTPUT_PATTERN = '\n        \\#\\#\\#\\s*į§°å‘ŧæ–šåŧ(\\d+)\\s*\n        -\\s*\\*\\*č¯´č¯äēē\\*\\*\\s*īŧš\\s*(.*?)\\s*\n        -\\s*\\*\\*čĸĢį§°å‘ŧäēē\\*\\*\\s*īŧš\\s*(.*?)\\s*\n        -\\s*\\*\\*(.*?)寚(.*?)įš„æ˜ĩį§°\\*\\*\\s*īŧš\\s*(.*?)(?=\\#\\#\\#|\\Z) # for double check\n    '#
__init__(api_model: str = 'gpt-4o', *, nickname_key: str = 'nickname', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • nickname_key – The key name to store the nickname relationships in the meta field. It’s “nickname” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • input_template – Template for building the model input.

  • output_pattern – Regular expression for parsing model output.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ExtractSupportTextMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts a supporting sub-text from the original text based on a given summary.

This operator uses an API model to identify and extract a segment of the original text that best matches the provided summary. It leverages a system prompt and input template to guide the extraction process. The extracted support text is stored in the specified meta field key. If the extraction fails or returns an empty string, the original summary is used as a fallback. The operator retries the extraction up to a specified number of times in case of errors.

DEFAULT_SYSTEM_PROMPT = 'äŊ å°†æ‰Žæŧ”一ä¸Ē文æœŦ摘åŊ•åŠŠæ‰‹įš„č§’č‰˛ã€‚äŊ įš„ä¸ģčρäģģåŠĄæ˜¯åŸēäēŽįģ™åŽšįš„æ–‡įĢ īŧˆį§°ä¸ē“原文”īŧ‰äģĨ及寚原文某ä¸Ēéƒ¨åˆ†įš„įŽ€įŸ­æčŋ°æˆ–æ€ģįģ“īŧˆį§°ä¸ē“æ€ģįģ“”īŧ‰īŧŒå‡†įĄŽåœ°č¯†åˆĢåšļ提取å‡ē与č¯Ĩæ€ģįģ“į›¸å¯šåē”įš„åŽŸæ–‡į‰‡æŽĩ。\nčĻæą‚īŧš\n- äŊ éœ€čρå°Ŋ可čƒŊį˛žįĄŽåœ°åŒšé…åˆ°æœ€įŦĻ合æ€ģįģ“å†…åŽšįš„é‚Ŗéƒ¨åˆ†å†…åŽš\n- åĻ‚æžœå­˜åœ¨å¤šä¸Ē可čƒŊįš„į­”æĄˆīŧŒč¯ˇé€‰æ‹Šæœ€č´´čŋ‘æ€ģį쓿„æ€įš„邪ä¸Ē\n- 下éĸ是一ä¸Ēäž‹å­å¸ŽåŠŠį†č§Ŗčŋ™ä¸€čŋ‡į¨‹īŧš\n### 原文īŧš\n《įēĸæĨŧæĸĻ》是中å›Ŋå¤å…¸å°č¯´å››å¤§åč‘—äš‹ä¸€īŧŒį”࿏…äģŖäŊœåŽļ曚é›ĒčŠšåˆ›äŊœã€‚åŽƒčŽ˛čŋ°äē†č´žåŽįŽ‰ã€æž—éģ›įމᭉäēēįš„įˆąæƒ…æ•…äē‹åŠå››å¤§åŽļæ—įš„å…´čĄ°åŽ†į¨‹ã€‚äšĻ中通čŋ‡å¤æ‚įš„äēēį‰Šå…ŗįŗģåą•įŽ°äē†å°åģēį¤žäŧšįš„å„į§įŸ›į›žå†˛įĒã€‚å…ļ䏭兺äēŽč´žåēœå†…部斗äē‰įš„部分尤å…ļį˛žåŊŠīŧŒį‰šåˆĢæ˜¯įŽ‹į†™å‡¤ä¸Žå°¤äēŒå§äš‹é—´įš„ä牿–—īŧŒį”ŸåŠ¨æįģ˜ä熿ƒåŠ›äē‰å¤ēä¸‹įš„åĨŗæ€§åŊĸčąĄã€‚æ­¤å¤–īŧŒã€ŠįēĸæĨŧæĸĻ》čŋ˜äģĨå…ļį˛žįžŽįš„č¯—č¯é—ģ名īŧŒčŋ™äē›č¯—č¯ä¸äģ…åĸžæˇģä熿–‡å­Ļ色åŊŠīŧŒä🿎ąåˆģ反映äē†äēēį‰Šįš„æ€§æ ŧį‰šį‚šå’Œå‘Ŋčŋčĩ°å‘。\n\n### æ€ģįģ“īŧš\n描čŋ°äē†äšĻä¸­įš„ä¸¤ä¸ĒåĨŗæ€§č§’č‰˛äš‹é—´å›´į앿ƒåŠ›åą•åŧ€įš„įĢžäē‰ã€‚\n\n### 原文摘åŊ•īŧš\nå…ļ䏭兺äēŽč´žåēœå†…部斗äē‰įš„部分尤å…ļį˛žåŊŠīŧŒį‰šåˆĢæ˜¯įŽ‹į†™å‡¤ä¸Žå°¤äēŒå§äš‹é—´įš„ä牿–—īŧŒį”ŸåŠ¨æįģ˜ä熿ƒåŠ›äē‰å¤ēä¸‹įš„åĨŗæ€§åŊĸčąĄã€‚'#
DEFAULT_INPUT_TEMPLATE = '### 原文īŧš\n{text}\n\n### æ€ģįģ“īŧš\n{summary}\n\n### 原文摘åŊ•īŧš\n'#
__init__(api_model: str = 'gpt-4o', *, summary_key: str = 'event_description', support_text_key: str = 'support_text', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • summary_key – The key name to store the input summary in the meta field. It’s “event_description” by default.

  • support_text_key – The key name to store the output support text for the summary in the meta field. It’s “support_text” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • input_template – Template for building the model input.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – If drop the text in the output.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ExtractTablesFromHtmlMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts tables from HTML content and stores them in a specified field.

This operator processes HTML content to extract tables. It can either retain or remove HTML tags based on the retain_html_tags parameter. If retain_html_tags is False, it can also include or exclude table headers based on the include_header parameter. The extracted tables are stored in the tables_field_name field within the sample’s metadata. If no tables are found, an empty list is stored. If the tables have already been extracted, the operator will not reprocess the sample.

__init__(tables_field_name: str = 'html_tables', retain_html_tags: bool = False, include_header: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • tables_field_name – Field name to store the extracted tables.

  • retain_html_tags – If True, retains HTML tags in the tables; otherwise, removes them.

  • include_header – If True, includes the table header; otherwise, excludes it. This parameter is effective only when retain_html_tags is False and applies solely to the extracted table content.
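
A minimal usage sketch:

    from data_juicer.ops.mapper import ExtractTablesFromHtmlMapper

    # Sketch: store plain-text tables (tags stripped), keeping header rows.
    op = ExtractTablesFromHtmlMapper(
        tables_field_name="html_tables",
        retain_html_tags=False,
        include_header=True,
    )
    # Extracted tables are written to the sample's metadata under
    # 'html_tables'; an empty list is stored when no tables are found.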

process_single(sample)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.FixUnicodeMapper(*args, **kwargs)[source]#

Bases: Mapper

Fixes unicode errors in text samples.

This operator corrects common unicode errors and normalizes the text to a specified Unicode normalization form. The default normalization form is ‘NFC’, but it can be set to ‘NFKC’, ‘NFD’, or ‘NFKD’ during initialization. It processes text samples in batches, applying the specified normalization to each sample. If an unsupported normalization form is provided, a ValueError is raised.

__init__(normalization: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • normalization – the specified Unicode normalization form, one of [‘NFC’, ‘NFKC’, ‘NFD’, ‘NFKD’]; defaults to ‘NFC’.

  • args – extra args

  • kwargs – extra args
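
An illustrative setup, with a comment showing the kind of repair this mapper performs (the example text is fabricated):

    from data_juicer.ops.mapper import FixUnicodeMapper

    op = FixUnicodeMapper(normalization='NFC')
    # A typical mojibake input this mapper can repair:
    #   'The â€œsmartâ€ quotes look broken.'
    # after fixing, the text reads roughly:
    #   'The “smart” quotes look broken.'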

process_batched(samples)[source]#
class data_juicer.ops.mapper.GenerateQAFromExamplesMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates question and answer pairs from examples using a Hugging Face model.

This operator generates QA pairs based on provided seed examples. The number of generated samples is determined by the length of the empty dataset configured in the YAML file. The operator uses a Hugging Face model to generate new QA pairs, which are then filtered based on their similarity to the seed examples. Samples with a similarity score below the specified threshold are kept. The similarity is computed using the ROUGE-L metric. The operator requires a seed file in chatml format, which provides the initial QA examples. The generated QA pairs must follow specific formatting rules, such as maintaining the same format as the input examples and ensuring that questions and answers are paired correctly.

DEFAULT_SYSTEM_PROMPT = '蝎äŊ äģ”įģ†č§‚察多ä¸Ēį¤ēäž‹æ•°æŽįš„čž“å…Ĩå’Œčž“å‡ēīŧŒæŒ‰į…§äŊ įš„ᐆ觪īŧŒæ€ģįģ“å‡ēᛏåē”č§„įŸŠīŧŒį„ļ后写å‡ē一ä¸Ēæ–°įš„ã€é—Žéĸ˜ã€‘å’Œã€å›žį­”ã€‘ã€‚æŗ¨æ„īŧŒæ–°į”Ÿæˆįš„【闎éĸ˜ã€‘å’Œã€å›žį­”ã€‘éœ€čρæģĄčļŗåς䏋čĻæą‚īŧš\n1. į”Ÿæˆįš„ã€é—Žéĸ˜ã€‘å’Œã€å›žį­”ã€‘ä¸čƒŊ与输å…Ĩįš„ã€é—Žéĸ˜ã€‘å’Œã€å›žį­”ã€‘ä¸€č‡´īŧŒäŊ†æ˜¯éœ€čρäŋæŒæ ŧåŧį›¸åŒã€‚\n2. į”Ÿæˆįš„ã€é—Žéĸ˜ã€‘不一åޚčĻåą€é™äēŽčž“å…Ĩ【闎éĸ˜ã€‘įš„č¯éĸ˜æˆ–éĸ†åŸŸīŧŒį”Ÿæˆįš„ã€å›žį­”ã€‘éœ€čĻæ­ŖįĄŽå›žį­”į”Ÿæˆįš„ã€é—Žéĸ˜ã€‘。\n3. æäž›įš„ã€é—Žéĸ˜ã€‘å’Œã€å›žį­”ã€‘å¯čƒŊ是多čŊŽå¯šč¯īŧŒį”Ÿæˆįš„【闎éĸ˜ã€‘å’Œã€å›žį­”ã€‘äšŸå¯äģĨ是多čŊŽīŧŒäŊ†æ˜¯éœ€čρäŋæŒæ ŧåŧį›¸åŒã€‚\n4. į”Ÿæˆįš„ã€é—Žéĸ˜ã€‘å’Œã€å›žį­”ã€‘åŋ…éĄģ成寚å‡ēįŽ°īŧŒč€Œä¸”【闎éĸ˜ã€‘需čĻåœ¨ã€å›žį­”ã€‘äš‹å‰ã€‚\n'#
DEFAULT_INPUT_TEMPLATE = '{}'#
DEFAULT_EXAMPLE_TEMPLATE = '\nåĻ‚ä¸‹æ˜¯ä¸€æĄį¤ē䞋数捎īŧš\n{}'#
DEFAULT_QA_PAIR_TEMPLATE = '【闎éĸ˜ã€‘\n{}\nã€å›žį­”ã€‘\n{}\n'#
DEFAULT_OUTPUT_PATTERN = '【闎éĸ˜ã€‘(.*?)ã€å›žį­”ã€‘(.*?)(?=【闎éĸ˜ã€‘|$)'#
__init__(hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', *, seed_file: str = '', example_num: Annotated[int, Gt(gt=0)] = 3, similarity_threshold: float = 0.7, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID.

  • seed_file – Path to the seed file in chatml format.

  • example_num – The number of selected examples. Randomly select N examples from “seed_file” and put them into prompt as QA examples.

  • similarity_threshold – The similarity score threshold between the generated samples and the seed examples, ranging from 0 to 1. Samples with a similarity score below this threshold are kept.

  • system_prompt – System prompt for guiding the generation task.

  • input_template – Template for building the input prompt. It must include one placeholder ‘{}’, which will be replaced by example_num formatted examples defined by example_template.

  • example_template – Template for formatting one QA example. It must include one placeholder ‘{}’, which will be replaced by one formatted qa_pair.

  • qa_pair_template – Template for formatting a single QA pair within each example. Must include two placeholders ‘{}’ for the question and answer.

  • output_pattern – Regular expression pattern to extract questions and answers from model response.

  • enable_vllm – Whether to use vllm for inference acceleration.

  • model_params – Parameters for initializing the model.

  • sampling_params – Sampling parameters for text generation. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.
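
A minimal configuration sketch (the seed file path is hypothetical):

    from data_juicer.ops.mapper import GenerateQAFromExamplesMapper

    # Sketch: draw 3 random seed examples per prompt and keep only generated
    # QA pairs whose ROUGE-L similarity to the seeds is below 0.7.
    op = GenerateQAFromExamplesMapper(
        hf_model="Qwen/Qwen2.5-7B-Instruct",
        seed_file="path/to/seed_qa.chatml.jsonl",  # hypothetical path
        example_num=3,
        similarity_threshold=0.7,
    )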

build_input(qa_examples)[source]#
parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.GenerateQAFromTextMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates question and answer pairs from text using a specified model.

This operator uses a Hugging Face model to generate QA pairs from the input text. It supports both Hugging Face and vLLM models for inference. The recommended models, such as ‘alibaba-pai/pai-llama3-8b-doc2qa’, are trained on Chinese data and are suitable for Chinese text. The operator can limit the number of generated QA pairs per text and allows custom output patterns for parsing the model’s response. By default, it uses a regular expression to extract questions and answers from the model’s output. If no QA pairs are extracted, a warning is logged.

__init__(hf_model: str = 'alibaba-pai/pai-qwen1_5-7b-doc2qa', max_num: Annotated[int, Gt(gt=0)] | None = None, *, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID.

  • max_num – The maximum number of QA samples returned for each text. No limit if it is None.

  • output_pattern – Regular expression pattern to extract questions and answers from model response.

  • enable_vllm – Whether to use vllm for inference acceleration.

  • model_params – Parameters for initializing the model.

  • sampling_params – Sampling parameters for text generation, e.g {‘temperature’: 0.9, ‘top_p’: 0.95}

  • kwargs – Extra keyword arguments.

The default data format parsed by this interface is as follows:

Model Input:

The capital of Mongolia is Ulaanbaatar. The capital of Iceland is Reykjavik.

Model Output:

The capital of Mongolia is Ulaanbaatar. The capital of Iceland is Reykjavik. Human: What is the capital of Mongolia? Assistant: Hello! Based on the provided information, the capital of Mongolia is Ulaanbaatar. Human: And what is the capital of Iceland? Assistant: The capital of Iceland is Reykjavik. …

parse_output(raw_output)[source]#
process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.HumanPreferenceAnnotationMapper(*args, **kwargs)[source]#

Bases: LabelStudioAnnotationMapper

Operator for human preference annotation using Label Studio.

This operator formats and presents pairs of answers to a prompt for human evaluation. It uses a default or custom Label Studio configuration to display the prompt and answer options, then processes the resulting annotations to update each sample with the chosen and rejected answers. The operator requires specific keys in the samples for the prompt and answer options; if these keys are missing, it logs warnings and uses placeholder text.

DEFAULT_LABEL_CONFIG = '\n    <View className="root">\n      <Style>\n        .root {\n          box-sizing: border-box;\n          margin: 0;\n          padding: 0;\n          font-family: \'Roboto\',\n            sans-serif;\n          line-height: 1.6;\n          background-color: #f0f0f0;\n        }\n\n        .container {\n          margin: 0 auto;\n          padding: 20px;\n          background-color: #ffffff;\n          border-radius: 5px;\n          box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.1), 0 6px 20px 0 rgba(0, 0, 0, 0.1);\n        }\n\n        .prompt {\n          padding: 20px;\n          background-color: #0084ff;\n          color: #ffffff;\n          border-radius: 5px;\n          margin-bottom: 20px;\n          box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1), 0 3px 10px 0 rgba(0, 0, 0, 0.1);\n        }\n\n        .answers {\n          display: flex;\n          justify-content: space-between;\n          flex-wrap: wrap;\n          gap: 20px;\n        }\n\n        .answer-box {\n          flex-basis: 49%;\n          padding: 20px;\n          background-color: rgba(44, 62, 80, 0.9);\n          color: #ffffff;\n          border-radius: 5px;\n          box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1), 0 3px 10px 0 rgba(0, 0, 0, 0.1);\n        }\n\n        .answer-box p {\n          word-wrap: break-word;\n        }\n\n        .answer-box:hover {\n          background-color: rgba(52, 73, 94, 0.9);\n          cursor: pointer;\n          transition: all 0.3s ease;\n        }\n\n        .lsf-richtext__line:hover {\n          background: unset;\n        }\n\n        .answer-box .lsf-object {\n          padding: 20px\n        }\n      </Style>\n      <View className="container">\n        <View className="prompt">\n          <Text name="prompt" value="$prompt" />\n        </View>\n        <View className="answers">\n          <Pairwise name="comparison" toName="answer1,answer2"\n                    selectionStyle="background-color: #27ae60; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.2); border: 2px solid #2ecc71; cursor: pointer; transition: all 0.3s ease;"\n                    leftChoiceValue="answer1" rightChoiceValue="answer2" />\n          <View className="answer-box">\n            <Text name="answer1" value="$answer1" />\n          </View>\n          <View className="answer-box">\n            <Text name="answer2" value="$answer2" />\n          </View>\n        </View>\n      </View>\n    </View>\n    '#
__init__(label_config_file: str = None, answer1_key: str = 'answer1', answer2_key: str = 'answer2', prompt_key: str = 'prompt', chosen_key: str = 'chosen', rejected_key: str = 'rejected', **kwargs)[source]#

Initialize the human preference annotation operator.

Parameters:
  • label_config_file – Path to the label config file

  • answer1_key – Key for the first answer

  • answer2_key – Key for the second answer

  • prompt_key – Key for the prompt/question

  • chosen_key – Key for the chosen answer

  • rejected_key – Key for the rejected answer
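
A sketch of the per-sample fields this operator expects and fills (field values fabricated):

    # Input sample: a prompt plus two candidate answers.
    sample = {
        "prompt": "Explain overfitting in one paragraph.",
        "answer1": "Overfitting happens when ...",
        "answer2": "A model overfits if ...",
    }
    # After human annotation, the operator writes the preferred answer to
    # sample['chosen'] and the other one to sample['rejected'].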

class data_juicer.ops.mapper.ImageBlurMapper(*args, **kwargs)[source]#

Bases: Mapper

Blurs images in the dataset with a specified probability and blur type.

This operator blurs images using one of three types: mean, box, or Gaussian. The probability of an image being blurred is controlled by the p parameter. The blur effect is applied using a kernel with a specified radius. Blurred images are saved to a directory, which can be specified or defaults to the input directory. If the save directory is not provided, the DJ_PRODUCED_DATA_DIR environment variable can be used to set it. The operator ensures that the blur type is one of the supported options and that the radius is non-negative.

__init__(p: float = 0.2, blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • p – Probability of the image being blurred.

  • blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].

  • radius – Radius of blur kernel.

  • save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args
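
A minimal usage sketch:

    from data_juicer.ops.mapper import ImageBlurMapper

    # Sketch: blur about 20% of images with a Gaussian kernel of radius 2;
    # outputs go next to the inputs unless save_dir or the
    # DJ_PRODUCED_DATA_DIR environment variable says otherwise.
    op = ImageBlurMapper(p=0.2, blur_type='gaussian', radius=2)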

process_single(sample, context=False)[source]#

For sample level, sample -> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageCaptioningMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates image captions using a Hugging Face model and appends them to samples.

This operator generates captions for images in the input samples using a specified Hugging Face model. It can generate multiple captions per image and apply different strategies to retain the generated captions. The operator supports three retention modes: ‘random_any’, ‘similar_one_simhash’, and ‘all’. In ‘random_any’ mode, a random caption is retained. In ‘similar_one_simhash’ mode, the most similar caption to the original text (based on SimHash) is retained. In ‘all’ mode, all generated captions are concatenated and retained. The operator can also keep or discard the original sample based on the keep_original_sample parameter. If both prompt and prompt_key are set, the prompt_key takes precedence.

__init__(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, gpu_batch_size: Annotated[int, Gt(gt=0)] = 8, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_img2seq – model name on huggingface to generate caption

  • trust_remote_code – whether to trust the remote code of HF models.

  • caption_num – how many candidate captions to generate for each image

  • keep_candidate_mode –

    retention strategy for the generated $caption_num$ candidates:

    ‘random_any’: retain a random one of the generated captions.

    ‘similar_one_simhash’: retain the generated caption that is most similar to the original caption (by SimHash).

    ‘all’: retain all generated captions by concatenation.

Note

This is a batched OP whose input and output are both lists. Suppose there are $N$ lists of input samples with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ modes, the total number of samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when it is False; for ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when it is False.
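
A quick numeric check of the counts above (all values assumed):

    # Assumed toy values: N sample lists, batch size b, M candidate captions.
    N, b, M = 4, 8, 3

    # 'random_any' / 'similar_one_simhash' modes:
    print(2 * N * b)        # 64  (keep_original_sample=True)
    print(N * b)            # 32  (keep_original_sample=False)

    # 'all' mode:
    print((1 + M) * N * b)  # 128 (keep_original_sample=True)
    print(M * N * b)        # 96  (keep_original_sample=False)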

Parameters:
  • keep_original_sample – whether to keep the original sample. If set to False, the final dataset contains only the generated captions and the original captions are removed. It’s True by default.

  • prompt – a string prompt to guide the generation of the blip2 model for all samples globally. It’s None by default, meaning no prompt is provided.

  • prompt_key – the key name of the field in samples that stores per-sample prompts, used to set different prompts for different samples. If it is None, the global prompt parameter is used. It’s None by default.

  • gpu_batch_size – the batch size for GPU inference. This controls how many images are processed together in a single GPU forward pass. Useful when the dataset batch size is larger than what the GPU can handle. Default is 8.

  • args – extra args

  • kwargs – extra args
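
For illustration, a minimal usage sketch (the image path and text value below are hypothetical placeholders; instantiation downloads the BLIP-2 model from Hugging Face):

```python
from data_juicer.ops.mapper import ImageCaptioningMapper

# Generate 3 candidate captions per image and keep all of them,
# alongside the original sample ('all' mode, keep_original_sample=True).
op = ImageCaptioningMapper(
    hf_img2seq="Salesforce/blip2-opt-2.7b",
    caption_num=3,
    keep_candidate_mode="all",
    keep_original_sample=True,
)

# Batched OP: the input is a dict of lists (one entry per column).
samples = {
    "text": ["a placeholder caption"],    # hypothetical
    "images": [["/path/to/image.jpg"]],   # hypothetical
}
out = op.process_batched(samples)
```

With $N=1$, $b=1$, and $M=3$ in ‘all’ mode, the formula in the Note above gives $(1+M)Nb = 4$ output samples.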

process_batched(samples, rank=None)[source]#

Process a batch of samples with true GPU batching for caption generation.

This method collects all images from all samples in the batch, generates captions for them in GPU-efficient sub-batches, and then distributes the captions back to their respective samples.

Note

This is a batched_OP, whose input and output types are both list. Suppose there are $N$ lists of input samples with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for the ‘random_any’ and ‘similar_one_simhash’ modes, and $(1+M)Nb$ for the ‘all’ mode (when keep_original_sample is True).

Parameters:
  • samples – Dict of lists containing the batch of samples.

  • rank – Optional GPU rank for distributed processing.

Returns:

Dict of lists containing the processed samples with generated captions.

class data_juicer.ops.mapper.ImageDetectionYoloMapper(*args, **kwargs)[source]#

Bases: Mapper

Perform object detection using YOLO on images and return bounding boxes and class labels.

This operator uses a YOLO model to detect objects in images. It processes each image in the sample, returning the bounding boxes and class labels for detected objects. The operator sets the bbox_tag and class_label_tag fields in the sample’s metadata. If no image is present or no objects are detected, it sets bbox_tag to an empty array and class_label_tag to -1. The operator uses a confidence score threshold and IoU (Intersection over Union) score threshold to filter detections.

__init__(imgsz=640, conf=0.05, iou=0.5, model_path='yolo11n.pt', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • imgsz – resolution for image resizing

  • conf – confidence score threshold

  • iou – IoU (Intersection over Union) score threshold

  • model_path – the path to the YOLO model.
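
A minimal single-sample sketch (image path hypothetical; per the class description, detections are written to the sample’s meta as bbox_tag and class_label_tag):

```python
from data_juicer.ops.mapper import ImageDetectionYoloMapper

op = ImageDetectionYoloMapper(
    imgsz=640,
    conf=0.25,   # raised above the 0.05 default to keep only confident boxes
    iou=0.5,
    model_path="yolo11n.pt",
)

sample = {"text": "", "images": ["/path/to/street.jpg"]}  # hypothetical path
result = op.process_single(sample)
# The sample's meta field now carries bbox_tag / class_label_tag per image.
```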

process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageDiffusionMapper(*args, **kwargs)[source]#

Bases: Mapper

Generate images using a diffusion model based on provided captions.

This operator uses a Hugging Face diffusion model to generate images from given captions. It supports different modes for retaining generated samples, including random selection, similarity-based selection, and retaining all. The operator can also generate captions if none are provided, using a Hugging Face image-to-sequence model. The strength parameter controls the extent of transformation from the reference image, and the guidance scale influences how closely the generated images match the text prompt. Generated images can be saved in a specified directory or the same directory as the input files. This is a batched operation, processing multiple samples at once and producing a specified number of augmented images per sample.

__init__(hf_diffusion: str = 'CompVis/stable-diffusion-v1-4', trust_remote_code: bool = False, torch_dtype: str = 'fp32', revision: str = 'main', strength: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])] = 0.8, guidance_scale: float = 7.5, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, caption_key: str | None = None, hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_diffusion – diffusion model name on huggingface to generate the image.

  • trust_remote_code – whether to trust the remote code of HF models.

  • torch_dtype – the floating point type used to load the diffusion model. Can be one of [‘fp32’, ‘fp16’, ‘bf16’]

  • revision – The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.

  • strength – Indicates the extent to transform the reference image. Must be between 0 and 1. The image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximal and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores the reference image.

  • guidance_scale – A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

  • aug_num – the number of images to be produced by the stable-diffusion model for each sample.

  • keep_original_sample – whether to keep the original sample. If set to False, the final dataset will contain only the generated samples and the originals will be removed. It’s True by default.

  • caption_key – the key name of the field in samples that stores captions for the images. It can be a string if there is only one image in each sample; otherwise, it should be a list. If it’s None, ImageDiffusionMapper will produce captions for the images.

  • hf_img2seq – model name on huggingface to generate caption if caption_key is None.

  • save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
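
A minimal construction sketch (the save directory is a hypothetical placeholder; reusing the text field as the caption source is one possible configuration, not the only one):

```python
from data_juicer.ops.mapper import ImageDiffusionMapper

# Two augmented images per sample, moderately deviating from the
# reference image (strength=0.6), loaded in fp16 to save GPU memory.
op = ImageDiffusionMapper(
    hf_diffusion="CompVis/stable-diffusion-v1-4",
    torch_dtype="fp16",
    strength=0.6,
    guidance_scale=7.5,
    aug_num=2,
    caption_key="text",            # reuse the text field as the caption
    save_dir="/tmp/dj_diffusion",  # hypothetical output directory
)
```

With keep_original_sample=True (the default) and aug_num=2, each input row yields 3 output rows, matching the $(1+M)Nb$ count in the Note under process_batched.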

process_batched(samples, rank=None, context=False)[source]#

Note

This is a batched_OP, whose input and output types are both list. Suppose there are $N$ lists of input samples with batch size $b$, and denote aug_num as $M$. The number of total samples after generation is $(1+M)Nb$.

Parameters:

samples – batch of samples to process

Returns:

processed samples with generated images

class data_juicer.ops.mapper.ImageMMPoseMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to perform human keypoint detection inference using MMPose models. It requires three essential components for model initialization:

  • deploy_cfg (str): path to the deployment configuration file (defines inference settings)

  • model_cfg (str): path to the model configuration file (specifies the model architecture)

  • model_files (List[str]): model weight files, including pre-trained weights and parameters

The implementation follows the official MMPose deployment guidelines from MMDeploy. For detailed configuration requirements and usage examples, refer to: open-mmlab/mmdeploy

__init__(deploy_cfg: str = None, model_cfg: str = None, model_files: str | Sequence[str] | None = None, pose_key: str = 'pose_info', visualization_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • deploy_cfg – MMPose deployment config file.

  • model_cfg – MMPose model config file.

  • model_files – Path to the model weight files.

  • pose_key – Key to store pose information.

  • visualization_dir – Directory to save visualization results.

  • args – extra args

  • kwargs – extra args

parse_and_filter(data_sample) Dict[source]#

Extract elements necessary to represent a prediction into a dictionary.

The dictionary should contain only basic data elements such as strings and numbers, so that it is guaranteed to be JSON-serializable.

Parameters:

data_sample (PoseDataSample) – Predictions of the model.

Returns:

Prediction results.

Return type:

dict

visualize_results(image, model, result, output_file)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageFaceBlurMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to blur faces detected in images.

This operator uses an OpenCV classifier to detect faces in images and applies a specified blur type to the detected face regions. The blur types supported are ‘mean’, ‘box’, and ‘gaussian’. The radius of the blur kernel can be adjusted. If no save directory is provided, the modified images will be saved in the same directory as the input files.

__init__(cv_classifier: str = '', blur_type: str = 'gaussian', radius: Annotated[float, Ge(ge=0)] = 2, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • cv_classifier – OpenCV classifier path for face detection. By default, we will use ‘haarcascade_frontalface_alt.xml’.

  • blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].

  • radius – Radius of blur kernel.

  • save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args
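
A minimal sketch (paths hypothetical):

```python
from data_juicer.ops.mapper import ImageFaceBlurMapper

# Apply a stronger Gaussian blur to detected faces and write the
# modified images to a separate directory.
op = ImageFaceBlurMapper(
    blur_type="gaussian",
    radius=5,
    save_dir="/tmp/dj_blurred",  # hypothetical
)

sample = {"text": "", "images": ["/path/to/portrait.jpg"]}  # hypothetical
sample = op.process_single(sample)
```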

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageRemoveBackgroundMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to remove the background of images.

This operator processes each image in the sample, removing its background. It uses the rembg library to perform the background removal. If alpha_matting is enabled, it applies alpha matting with specified thresholds and erosion size. The resulting images are saved in PNG format. The bgcolor parameter can be set to specify a custom background color for the cutout image. The processed images are stored in the directory specified by save_dir, or in the same directory as the input files if save_dir is not provided. The source_file field in the sample is updated to reflect the new file paths.

__init__(alpha_matting: bool = False, alpha_matting_foreground_threshold: int = 240, alpha_matting_background_threshold: int = 10, alpha_matting_erode_size: int = 10, bgcolor: Tuple[int, int, int, int] | None = None, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • alpha_matting – (bool, optional) Flag indicating whether to use alpha matting. Defaults to False.

  • alpha_matting_foreground_threshold – (int, optional) Foreground threshold for alpha matting. Defaults to 240.

  • alpha_matting_background_threshold – (int, optional) Background threshold for alpha matting. Defaults to 10.

  • alpha_matting_erode_size – (int, optional) Erosion size for alpha matting. Defaults to 10.

  • bgcolor – (Optional[Tuple[int, int, int, int]], optional) Background color for the cutout image. Defaults to None.

  • save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – (Optional[Any]) Additional positional arguments.

  • kwargs – (Optional[Any]) Additional keyword arguments.
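
A minimal construction sketch (the save directory is a hypothetical placeholder):

```python
from data_juicer.ops.mapper import ImageRemoveBackgroundMapper

# Cut out the foreground with alpha matting and paint the background white.
op = ImageRemoveBackgroundMapper(
    alpha_matting=True,
    alpha_matting_foreground_threshold=240,
    alpha_matting_background_threshold=10,
    alpha_matting_erode_size=10,
    bgcolor=(255, 255, 255, 255),  # RGBA white
    save_dir="/tmp/dj_cutouts",    # hypothetical
)
```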

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageSAM3DBodyMapper(*args, **kwargs)[source]#

Bases: Mapper

SAM 3D Body (3DB) is a promptable model for single-image full-body 3D human mesh recovery (HMR).

__init__(checkpoint_path: str = '', detector_name: str = 'vitdet', segmentor_name: str = 'sam2', fov_name: str = 'moge2', mhr_path: str = '', detector_path: str = '', segmentor_path: str = '', fov_path: str = '', bbox_thresh: float = 0.8, use_mask: bool = False, visualization_dir: str = None, tag_field_name: str = 'sam_3d_body_data', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • checkpoint_path – Path to SAM 3D Body model checkpoint.

  • mhr_path – Path to MoHR/assets folder (or set SAM3D_MHR_PATH).

  • detector_path – Path to human detection model folder (or set SAM3D_DETECTOR_PATH).

  • segmentor_path – Path to human segmentation model folder (or set SAM3D_SEGMENTOR_PATH).

  • fov_path – Path to fov estimation model folder (or set SAM3D_FOV_PATH).

  • detector_name – Human detection model for demo (Default vitdet, add your favorite detector if needed).

  • segmentor_name – Human segmentation model for demo (Default sam2, add your favorite segmentor if needed).

  • fov_name – FOV estimation model for demo (Default moge2, add your favorite fov estimator if needed).

  • bbox_thresh – Bounding box detection threshold.

  • use_mask – Use mask-conditioned prediction (the segmentation mask is automatically generated from the bbox).

  • visualization_dir – Directory to save visualization results. If None, no visualization will be saved.

  • tag_field_name – Field name for storing the results.

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageSegmentMapper(*args, **kwargs)[source]#

Bases: Mapper

Perform segment-anything on images and return the bounding boxes.

This operator uses a FastSAM model to detect and segment objects in images, returning their bounding boxes. It processes each image in the sample, and stores the bounding boxes in the ‘bbox_tag’ field under the ‘meta’ key. If no images are present in the sample, an empty array is stored instead. The operator allows setting the image resolution, confidence threshold, and IoU (Intersection over Union) score threshold for the segmentation process. Bounding boxes are represented as N x M x 4 arrays, where N is the number of images, M is the number of detected boxes, and 4 represents the coordinates.

__init__(imgsz=1024, conf=0.05, iou=0.5, model_path='FastSAM-x.pt', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • imgsz – resolution for image resizing

  • conf – confidence score threshold

  • iou – IoU (Intersection over Union) score threshold

  • model_path – the path to the FastSAM model. Model name should be one of [‘FastSAM-x.pt’, ‘FastSAM-s.pt’].
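
A minimal sketch (image path hypothetical; the exact layout of the meta field is internal to Data-Juicer):

```python
from data_juicer.ops.mapper import ImageSegmentMapper

op = ImageSegmentMapper(imgsz=1024, conf=0.05, iou=0.5, model_path="FastSAM-x.pt")

sample = {"text": "", "images": ["/path/to/scene.jpg"]}  # hypothetical
sample = op.process_single(sample)
# Per the description, bounding boxes land under meta['bbox_tag'] as an
# N x M x 4 array (N images, M boxes each).
```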

process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageTaggingMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates image tags for each image in the sample.

This operator processes images to generate descriptive tags. It uses a Hugging Face model to analyze the images and produce relevant tags. The tags are stored in the specified field, defaulting to ‘image_tags’. If the tags are already present in the sample, the operator will not recompute them. For samples without images, an empty tag array is assigned. The generated tags are sorted by frequency and stored as a list of strings.

__init__(tag_field_name: str = 'image_tags', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • tag_field_name – the field name to store the tags. It’s “image_tags” by default.

  • args – extra args

  • kwargs – extra args

process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ImageTaggingVLMMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to generate image tags. This operator generates tags based on the content of the given images. The tags are generated using a VLM and stored in the specified field name. If the tags are already present in the sample, the operator skips processing.

DEFAULT_SYSTEM_PROMPT = '\nGenerate comprehensive and specific descriptive tags for the provided image(s) following these rules:\n1. Tags should be concise English phrases (nouns or gerunds)\n2. Use lowercase and hyphenate multi-word tags\n3. Include objects, actions, colors, materials, styles, emotions, and context\n4. Prioritize prominent and distinctive elements\n5. Output exactly 5-10 most relevant tags\n6. Format strictly as: {"tags": ["tag1", "tag2", ...]}\n\nExample valid responses:\n{"tags": ["red-apple", "wooden-table", "natural-lighting", "food-photography", "fresh-fruit"]}\n{"tags": ["mountain-landscape", "snowy-peaks", "sunset-glow", "alpine-lake", "conifer-forest"]}\n'#
DEFAULT_INPUT_TEMPLATE = '\nAnalyze both the provided image and its associated text description (if available) to generate comprehensive tags.\nText description: {text}\nVerify text relevance before combining with visual elements. If text is missing or irrelevant, generate tags based solely on the image.\n'#
__init__(api_or_hf_model: str = 'Qwen/Qwen2.5-VL-7B-Instruct', is_api_model: bool = False, *, tag_field_name: str = 'image_tags', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, model_params: Dict = {}, sampling_params: Dict = {}, try_num: Annotated[int, Gt(gt=0)] = 3, **kwargs)[source]#

Initialization method.

Parameters:
  • api_or_hf_model – API model name or HF model name.

  • is_api_model – Whether the model is an API model. If true, use openai api to generate tags, otherwise use vllm.

  • tag_field_name – the field name to store the tags. It’s “image_tags” by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for the task.

  • input_template – Template for building the model input.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • kwargs – Extra keyword arguments.
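
A minimal API-mode construction sketch (the endpoint URL is a hypothetical placeholder; API credentials are assumed to be configured in the environment):

```python
from data_juicer.ops.mapper import ImageTaggingVLMMapper

op = ImageTaggingVLMMapper(
    api_or_hf_model="Qwen/Qwen2.5-VL-7B-Instruct",
    is_api_model=True,   # call an OpenAI-compatible API instead of vllm
    api_endpoint="https://api.example.com/v1",  # hypothetical
    tag_field_name="image_tags",
    sampling_params={"temperature": 0.2},
    try_num=3,
)
```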

parse_output(raw_output)[source]#
process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.LatexFigureContextExtractorMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts figures and their citing context from LaTeX source.

This operator parses figure environments from a paper’s LaTeX source, extracts each figure’s caption, label, and image path(s), and finds the prose paragraphs that cite each figure. It fans out one paper row into N figure rows (one per figure or subfigure). Samples that contain no figures with images are dropped from the output.

Supported figure environments: figure, figure*, wrapfigure, subfigure (environment), subfigure (command), subfloat (command, subfig package).

Supported caption commands: caption, caption*, subcaption, captionof{figure}.

Figures without includegraphics are skipped. Subfigures inherit citing paragraphs from their parent figure’s label.

Output fields (in addition to all input fields):

  • <image_key> (default images, inherited from base class): list of image paths from \includegraphics.

  • <caption_key> (default caption): figure caption text.

  • <label_key> (default label): LaTeX label string.

  • <context_key> (default citing_paragraphs): list of paragraphs that cite this figure.

  • <parent_caption_key> (default parent_caption): parent figure caption (subfigures only; empty for standalone figures).

  • <parent_label_key> (default parent_label): parent figure label (subfigures only; empty for standalone figures).

Note: this operator expects the full LaTeX source as a single string. It does not resolve \input or \include directives. If your documents span multiple .tex files, concatenate them into a single text field before applying this mapper.

__init__(citation_commands: List[str] | None = None, paragraph_separator: str = '\n\n', caption_key: str = 'caption', label_key: str = 'label', context_key: str = 'citing_paragraphs', parent_caption_key: str = 'parent_caption', parent_label_key: str = 'parent_label', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • citation_commands – LaTeX reference commands to search for when finding citing paragraphs. Defaults to [‘ref’, ‘cref’, ‘Cref’, ‘autoref’]. Comma-separated label lists (e.g. \cref{fig:a,fig:b}) are handled automatically.

  • paragraph_separator – Pattern for splitting LaTeX text into paragraphs. Defaults to ‘\n\n’.

  • caption_key – Output field name for the figure caption.

  • label_key – Output field name for the LaTeX label.

  • context_key – Output field name for citing paragraphs.

  • parent_caption_key – Output field name for the parent figure’s caption. For subfigures this carries the parent figure environment’s caption; for standalone figures it is an empty string.

  • parent_label_key – Output field name for the parent figure’s label. Useful for grouping subfigures that belong to the same figure environment. Empty string for standalone figures.

  • args – extra args

  • kwargs – extra args. Notably text_key (default 'text') controls which input field contains the LaTeX source, and image_key (default 'images') controls the output field name for extracted image paths. Both are inherited from the base OP class.
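
A minimal sketch that follows the note above by concatenating a multi-file project into a single text field first (file names hypothetical):

```python
from data_juicer.ops.mapper import LatexFigureContextExtractorMapper

op = LatexFigureContextExtractorMapper(
    citation_commands=["ref", "cref", "Cref", "autoref"],
)

# Concatenate multi-file projects before applying the mapper, since
# \input / \include directives are not resolved.
paths = ["main.tex", "appendix.tex"]  # hypothetical
tex = "\n\n".join(open(p, encoding="utf-8").read() for p in paths)

# Batched OP: one paper row fans out into one row per (sub)figure.
figure_rows = op.process_batched({"text": [tex]})
```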

process_batched(samples)[source]#
class data_juicer.ops.mapper.LatexMergeTexMapper(*args, **kwargs)[source]#

Bases: Mapper

Extracts and concatenates all .tex files from a compressed LaTeX project archive into a single text field.

Supported archive formats: .tar, .tar.gz / .tgz, and .zip. Plain .gz (single-file gzip) is not supported because gzip archives carry no filename metadata, making it impossible to verify that the content is actually a .tex file.

All .tex files found inside the archive are read in-memory and joined with a configurable separator. No ordering or deduplication is applied.

This operator is typically placed before LaTeX-processing operators such as remove_comments_mapper, expand_macro_mapper, or latex_figure_context_extractor_mapper.

__init__(compressed_file_key: str = 'compressed_file', separator: str = '\n\n', max_file_size: int = 52428800, max_total_size: int = 104857600, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • compressed_file_key – Field name that stores the archive file path.

  • separator – String used to join the contents of multiple .tex files.

  • max_file_size – Maximum allowed uncompressed size in bytes for a single .tex entry inside the archive. Entries exceeding this limit are skipped with a warning. Set to None or 0 to disable the check.

  • max_total_size – Maximum allowed cumulative size in bytes for all extracted .tex content combined. Once this limit is reached, remaining files in the archive are skipped with a warning. Set to None or 0 to disable the check.

  • args – extra args

  • kwargs – extra args
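
A minimal sketch (the archive path is a hypothetical placeholder):

```python
from data_juicer.ops.mapper import LatexMergeTexMapper

# Cap single .tex entries at 10 MB and the merged content at 50 MB.
op = LatexMergeTexMapper(
    compressed_file_key="compressed_file",
    separator="\n\n",
    max_file_size=10 * 1024 * 1024,
    max_total_size=50 * 1024 * 1024,
)

sample = {"compressed_file": "/data/paper_source.tar.gz", "text": ""}  # hypothetical
sample = op.process_single(sample)  # merged .tex content lands in the text field
```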

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.MllmMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to use MLLMs for visual question answering tasks. This operator uses a Hugging Face model to generate answers based on input text and images. It supports models like llava-hf/llava-v1.6-vicuna-7b-hf and Qwen/Qwen2-VL-7B-Instruct. The operator processes each sample, loading and processing images, and generating responses using the specified model. The generated responses are appended to the sample’s text field. The key parameters include the model ID, maximum new tokens, temperature, top-p sampling, and beam search size, which control the generation process.

__init__(hf_model: str = 'llava-hf/llava-v1.6-vicuna-7b-hf', max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Hugging Face model ID.

  • max_new_tokens – the maximum number of new tokens generated by the model.

  • temperature – used to control the randomness of generated text. The higher the temperature, the more random and creative the generated text will be.

  • top_p – randomly select the next word from the group of words whose cumulative probability reaches p.

  • num_beams – the larger the beam search size, the higher the quality of the generated text.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.NlpaugEnMapper(*args, **kwargs)[source]#

Bases: Mapper

Augments English text samples using various methods from the nlpaug library.

This operator applies a series of text augmentation techniques to generate new samples. It supports both word-level and character-level augmentations, such as deleting, swapping, and inserting words or characters. The number of augmented samples can be controlled, and the original samples can be kept or removed. When multiple augmentation methods are enabled, they can be applied sequentially or independently. Sequential application means each sample is augmented by all enabled methods in sequence, while independent application generates multiple augmented samples for each method. We recommend using 1-3 augmentation methods at a time to avoid significant changes in sample semantics.

__init__(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, delete_random_word: bool = False, swap_random_word: bool = False, spelling_error_word: bool = False, split_random_word: bool = False, keyboard_error_char: bool = False, ocr_error_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, insert_random_char: bool = False, *args, **kwargs)[source]#

Initialization method. All augmentation methods use their default parameters. We recommend enabling only 1-3 augmentation methods at a time; otherwise, the semantics of the samples might change significantly.

Parameters:
  • sequential – whether to combine all enabled augmentation methods into a sequence. If True, a sample is augmented by all enabled augmentation methods sequentially. If False, each enabled augmentation method generates its augmented samples independently.

  • aug_num – number of augmented samples to generate. If sequential is True, a total of aug_num augmented samples are generated. If False, (aug_num * number_of_enabled_methods) augmented samples are generated.

  • keep_original_sample – whether to keep the original sample. If set to False, the final dataset will contain only the generated texts and the original texts will be removed. It’s True by default.

  • delete_random_word – whether to enable the augmentation method that deletes random words from the original texts. e.g. “I love LLM” --> “I LLM”

  • swap_random_word – whether to enable the augmentation method that swaps random contiguous words in the original texts. e.g. “I love LLM” --> “Love I LLM”

  • spelling_error_word – whether to enable the augmentation method that simulates spelling errors for words in the original texts. e.g. “I love LLM” --> “Ai love LLM”

  • split_random_word – whether to enable the augmentation method that randomly splits words with whitespace in the original texts. e.g. “I love LLM” --> “I love LL M”

  • keyboard_error_char – whether to enable the augmentation method that simulates keyboard errors for characters in the original texts. e.g. “I love LLM” --> “I ;ov4 LLM”

  • ocr_error_char – whether to enable the augmentation method that simulates OCR errors for characters in the original texts. e.g. “I love LLM” --> “I 10ve LLM”

  • delete_random_char – whether to enable the augmentation method that deletes random characters from the original texts. e.g. “I love LLM” --> “I oe LLM”

  • swap_random_char – whether to enable the augmentation method that swaps random contiguous characters in the original texts. e.g. “I love LLM” --> “I ovle LLM”

  • insert_random_char – whether to enable the augmentation method that inserts random characters into the original texts. e.g. “I love LLM” --> “I ^lKove LLM”

  • args – extra args

  • kwargs – extra args
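
A minimal sketch of independent (non-sequential) augmentation:

```python
from data_juicer.ops.mapper import NlpaugEnMapper

# Two enabled methods, applied independently with aug_num=1 and the
# originals kept: each input text yields 1 + (1 * 2) = 3 output texts.
op = NlpaugEnMapper(
    sequential=False,
    aug_num=1,
    keep_original_sample=True,
    delete_random_word=True,
    swap_random_word=True,
)
out = op.process_batched({"text": ["I love LLM"]})
```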

process_batched(samples)[source]#
class data_juicer.ops.mapper.NlpcdaZhMapper(*args, **kwargs)[source]#

Bases: Mapper

Augments Chinese text samples using the nlpcda library.

This operator applies various augmentation methods to Chinese text, such as replacing similar words, homophones, deleting random characters, swapping characters, and replacing equivalent numbers. The number of augmented samples generated can be controlled by the aug_num parameter. If sequential is set to True, the augmentation methods are applied in sequence; otherwise, they are applied independently. The original sample can be kept or removed based on the keep_original_sample flag. It is recommended to use 1-3 augmentation methods at a time to avoid significant changes in the semantics of the samples. Some augmentation methods may not work for special texts, resulting in no augmented samples being generated.

__init__(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, replace_similar_word: bool = False, replace_homophone_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, replace_equivalent_num: bool = False, *args, **kwargs)[source]#

Initialization method. All augmentation methods use their default parameters. We recommend enabling only 1-3 augmentation methods at a time; otherwise, the semantics of the samples might change significantly. Notice: some augmentation methods might not work for certain special texts, in which case no augmented texts are generated.

Parameters:
  • sequential – whether to combine all enabled augmentation methods into a sequence. If True, a sample is augmented by all enabled augmentation methods sequentially. If False, each enabled augmentation method generates its augmented samples independently.

  • aug_num – number of augmented samples to generate. If sequential is True, a total of aug_num augmented samples are generated. If False, (aug_num * number_of_enabled_methods) augmented samples are generated.

  • keep_original_sample – whether to keep the original sample. If set to False, the final dataset will contain only the generated texts and the original texts will be removed. It’s True by default.

  • replace_similar_word – whether to enable the augmentation method that replaces random words with similar words in the original texts. e.g. “这里一共有5种不同的数据增强方法” --> “这边一共有5种不同的数据增强方法”

  • replace_homophone_char – whether to enable the augmentation method that replaces random characters with their homophones in the original texts. e.g. “这里一共有5种不同的数据增强方法” --> “这里一共有5种不同的濖据增强方法”

  • delete_random_char – whether to enable the augmentation method that deletes random characters from the original texts. e.g. “这里一共有5种不同的数据增强方法” --> “这里一共有5种不同的数据增强”

  • swap_random_char – whether to enable the augmentation method that swaps random contiguous characters in the original texts. e.g. “这里一共有5种不同的数据增强方法” --> “这里一共有5种不同的数据强增方法”

  • replace_equivalent_num – whether to enable the augmentation method that replaces random numbers with equivalent representations in the original texts. Notice: only applies to numbers for now. e.g. “这里一共有5种不同的数据增强方法” --> “这里一共有伍种不同的数据增强方法”

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.OptimizePromptMapper(*args, **kwargs)[source]#

Bases: Mapper

Optimize prompts based on existing ones in the same batch.

This operator uses the existing prompts and newly optimized prompts as examples to generate better prompts. It supports using a Hugging Face model or an API for text generation. The operator can be configured to keep the original samples or replace them with the generated ones. The optimization process involves multiple retries if the generated prompt is empty. The operator operates in batch mode and can leverage vLLM for inference acceleration on CUDA devices.

  • Uses existing and newly generated prompts to optimize future prompts.

  • Supports both Hugging Face models and API-based text generation.

  • Can keep or replace original samples with generated ones.

  • Retries up to a specified number of times if the generated prompt is empty.

  • Operates in batch mode and can use vLLM for acceleration on CUDA.

  • References: https://doc.agentscope.io/v0/en/build_tutorial/prompt_optimization.html

DEFAULT_SYSTEM_PROMPT = '请你仔细观察多个示例提示词，按照你的理解，总结出相应规律，然后写出一个新的更好的提示词，以让模型更好地完成指定任务。注意，新生成的【提示词】需要满足如下要求：\n1. 生成的【提示词】不能与输入的【提示词】完全一致，但是需要保持格式类似。\n2. 生成的【提示词】相比于输入的【提示词】不能有很大的变化，更多应该是关键词、核心参数等方面的微调。\n3. 生成时只需生成带有【提示词】前缀的提示词，不需生成其他任何额外信息。\n'#
DEFAULT_INPUT_TEMPLATE = '{}'#
DEFAULT_EXAMPLE_TEMPLATE = '\n如下是一条示例数据：\n{}'#
DEFAULT_PROMPT_TEMPLATE = '【提示词】\n{}\n'#
DEFAULT_OUTPUT_PATTERN = '【提示词】(.*?)(?=【|$)'#
__init__(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', gen_num: Annotated[int, Gt(gt=0)] = 3, max_example_num: Annotated[int, Gt(gt=0)] = 3, keep_original_sample: bool = True, retry_num: int = 3, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, prompt_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, is_hf_model: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]#

Initialization method.

Parameters:
  • api_or_hf_model – API or huggingface model name.

  • gen_num – The number of new prompts to generate.

  • max_example_num – Maximum number of example prompts to include as context when generating new optimized prompts.

  • keep_original_sample – whether to keep the original sample. If set to False, the final dataset will contain only the generated texts and the original texts will be removed. It’s True by default.

  • retry_num – how many times to retry generating the prompt if the parsed generated prompt is empty. It’s 3 by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for guiding the generation task.

  • input_template – Template for building the input prompt. It must include one placeholder ‘{}’, which will be replaced by example_num formatted examples defined by example_template.

  • example_template – Template for formatting one prompt example. It must include one placeholder ‘{}’, which will be replaced by one formatted prompt.

  • prompt_template – Template for formatting a single prompt within each example. It must include one placeholder ‘{}’ for the prompt text.

  • output_pattern – Regular expression pattern to extract the generated prompt from the model response.

  • enable_vllm – Whether to use vllm for inference acceleration.

  • is_hf_model – If true, use Transformers for loading hugging face or local llm.

  • model_params – Parameters for initializing the model.

  • sampling_params – Sampling parameters for text generation, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • kwargs – Extra keyword arguments.

build_input(prompt_examples)[source]#
parse_output(raw_output)[source]#
generate_one_prompt(model, input_prompt_samples)[source]#
process_batched(samples, rank=None, *args, **kwargs)[source]#
class data_juicer.ops.mapper.OptimizeQAMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to optimize question-answer pairs.

This operator refines and enhances the quality of question-answer pairs. It uses a Hugging Face model to generate more detailed and accurate questions and answers. The input is formatted using a template, and the output is parsed using a regular expression. The system prompt, input template, and output pattern can be customized. If VLLM is enabled, the operator accelerates inference on CUDA devices.

DEFAULT_SYSTEM_PROMPT = '请优化输入的问答对，使【问题】和【回答】都更加详细、准确。必须按照以下标记格式，直接输出优化后的问答对：\n【问题】\n优化后的问题\n【回答】\n优化后的回答'#
DEFAULT_INPUT_TEMPLATE = '以下是原始问答对：\n{}'#
DEFAULT_QA_PAIR_TEMPLATE = '【问题】\n{}\n【回答】\n{}'#
DEFAULT_OUTPUT_PATTERN = '.*?【问题】\\s*(.*?)\\s*【回答】\\s*(.*)'#
__init__(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', is_hf_model: bool = True, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]#

Initialization method.

Parameters:
  • api_or_hf_model – API or huggingface model name.

  • is_hf_model – If true, use huggingface model. Otherwise, use API.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for guiding the optimization task.

  • input_template – Template for building the input for the model. Please make sure the template contains one placeholder ‘{}’, which corresponds to the question and answer pair generated by param qa_pair_template.

  • qa_pair_template – Template for formatting the question and answer pair. Please make sure the template contains two ‘{}’ to format question and answer.

  • output_pattern – Regular expression pattern to extract question and answer from model response.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • enable_vllm – Whether to use VLLM for inference acceleration.

  • model_params – Parameters for initializing the model.

  • sampling_params – Sampling parameters for text generation (e.g., {‘temperature’: 0.9, ‘top_p’: 0.95}).

  • kwargs – Extra keyword arguments.
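
A minimal sketch with a local Hugging Face model (the QA pair is a hypothetical example):

```python
from data_juicer.ops.mapper import OptimizeQAMapper

op = OptimizeQAMapper(
    api_or_hf_model="Qwen/Qwen2.5-7B-Instruct",
    is_hf_model=True,
    try_num=3,
    sampling_params={"temperature": 0.7},
)

sample = {
    "query": "What is data augmentation?",         # hypothetical
    "response": "A way to expand training data.",  # hypothetical
}
sample = op.process_single(sample)  # query/response rewritten in place
```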

build_input(sample)[source]#
parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.OptimizeQueryMapper(*args, **kwargs)[source]#

Bases: OptimizeQAMapper

Optimize queries in question-answer pairs to make them more specific and detailed.

This mapper refines the questions in a QA pair, making them more specific and detailed while ensuring that the original answer can still address the optimized question. It uses a predefined system prompt for the optimization process. The optimized query is extracted from the raw output by stripping any leading or trailing whitespace. The mapper utilizes a CUDA accelerator for faster processing.

DEFAULT_SYSTEM_PROMPT = '优化问答对中的【问题】，将其更加详细具体，但仍可以由原答案回答。只输出优化后的【问题】，不要输出多余内容。'#
parse_output(raw_output)[source]#
class data_juicer.ops.mapper.OptimizeResponseMapper(*args, **kwargs)[source]#

Bases: OptimizeQAMapper

Optimize response in question-answer pairs to be more detailed and specific.

This operator enhances the responses in question-answer pairs, making them more detailed and specific while ensuring they still address the original question. It uses a predefined system prompt for optimization. The optimized response is stripped of any leading or trailing whitespace before being returned. This mapper leverages a Hugging Face model for the optimization process, which is accelerated using CUDA.

DEFAULT_SYSTEM_PROMPT = '请优化问答对中的回答，将其更加详细具体，但仍可以回答原问题。只输出优化后的回答，不要输出多余内容。'#
parse_output(raw_output)[source]#
class data_juicer.ops.mapper.PairPreferenceMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to construct paired preference samples by generating a rejected response and its reason.

This operator uses an API model to generate a new response that is opposite in style, factuality, or stance to the original response. The generated response and the reason for its generation are stored in the sample. The default system prompt and input template are provided, but can be customized. The output is parsed using a regular expression to extract the new response and the reason. If parsing fails, the operator retries up to a specified number of times. The generated response and reason are stored in the sample under the keys ‘rejected_response’ and ‘reason’, respectively.

DEFAULT_SYSTEM_PROMPT = '你的任务是根据参考信息修改问答对中的回答，在语言风格、事实性、人物身份、立场等任一方面与原回答相反。必须按照以下标记格式输出，不要输出其他多余内容。\n【回答】\n生成的新回答\n【原因】\n生成该回答的原因'#
DEFAULT_INPUT_TEMPLATE = '【参考信息】\n{reference}\n\n以下是原始问答对：\n【问题】\n{query}\n【回答】\n{response}'#
DEFAULT_OUTPUT_PATTERN = '.*?【回答】\\s*(.*?)\\s*【原因】\\s*(.*)'#
__init__(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, rejected_key: str = 'rejected_response', reason_key: str = 'reason', try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt – System prompt for guiding the generation task.

  • input_template – Template for building the model input. It must contain placeholders ‘{query}’ and ‘{response}’, and can optionally include ‘{reference}’.

  • output_pattern – Regular expression for parsing model output.

  • rejected_key – The field name in the sample to store the generated rejected response. Defaults to ‘rejected_response’.

  • reason_key – The field name in the sample to store the reason for generating the response. Defaults to ‘reason’.

  • try_num – The number of retries for the API call in case of response parsing failure. Defaults to 3.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • kwargs – Extra keyword arguments.
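
A minimal sketch (the QA pair is hypothetical; it is assumed here that the {reference} placeholder is filled from the sample’s text field):

```python
from data_juicer.ops.mapper import PairPreferenceMapper

op = PairPreferenceMapper(
    api_model="gpt-4o",
    rejected_key="rejected_response",
    reason_key="reason",
    try_num=3,
)

sample = {
    "text": "Hamlet was written by William Shakespeare around 1600.",  # reference (assumption)
    "query": "Who wrote Hamlet?",
    "response": "William Shakespeare wrote Hamlet.",
}
sample = op.process_single(sample)
# sample["rejected_response"] and sample["reason"] now hold the generated pair.
```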

build_input(sample)[source]#
parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.PiiLlmSuspectMapper(*args, **kwargs)[source]#

Bases: Mapper

LLM audit (and optional redaction) for possibly missed PII.

Writes JSON to meta[result_key] (default MetaKeys.pii_llm_suspect). Set redaction_mode to evidence or whole_field to also modify inspect_keys string fields (and messages when listed). Place after pii_redaction_mapper.

Use gate_mode="heuristic" to call the API only when cheap patterns suggest residual risk (long digit runs, @, secret-like keywords, etc.).

Pre-LLM extensions (still no API cost unless you enable spaCy):

  • heuristic_name_rules (default True): contextual CJK / English name cues so person-heavy text is not skipped when the base heuristic fires only on digits and secrets.

  • spacy_ner_models: optional list of spaCy pipeline names (e.g. ["zh_core_web_sm", "en_core_web_sm"]) so one job loads both and runs NER on the same text prefix until a PERSON / PER hit.

  • spacy_ner_model: legacy single name; merged after spacy_ner_models (deduped). Install with python -m spacy download <name>.

  • spacy_auto_download (default True): if the pipeline is missing, run spaCy’s downloader before spacy.load (needs network, uses pip). Disable in air-gapped jobs or set env PII_SPACY_AUTO_DOWNLOAD=0.

__init__(api_model: str = 'qwen-turbo', *, inspect_keys: List[str] | None = None, messages_key: str | None = 'messages', max_messages_for_prompt: Annotated[int, Gt(gt=0)] = 4, max_chars_per_field: Annotated[int, Gt(gt=0)] = 6000, max_chars_messages_excerpt: Annotated[int, Gt(gt=0)] = 8000, gate_mode: str = 'heuristic', result_key: str = 'pii_llm_suspect', raw_key: str = 'pii_llm_suspect_raw', overwrite: bool = False, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, preferred_output_lang: str = 'zh', try_num: Annotated[int, Gt(gt=0)] = 2, model_params: Dict | None = None, sampling_params: Dict | None = None, text_key: str = 'text', heuristic_name_rules: bool = True, spacy_ner_model: str | None = None, spacy_ner_models: List[str] | None = None, spacy_ner_max_chars: Annotated[int, Gt(gt=0)] = 4000, spacy_auto_download: bool = True, redaction_mode: str = 'none', redaction_placeholder: str = '[LLM_PII_SUSPECT_REDACTED]', **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.PiiRedactionMapper(*args, **kwargs)[source]#

Bases: Mapper

Redact PII in text and optionally in messages/query/response.

Covers paths (Unix/Windows), emails, secrets, IDs, phones, agent channel identifiers (Feishu/DingTalk/WeCom open_id, channel: feishu|dingtalk|email). Optional: PEM blocks, JWT-shaped tokens, http(s) URLs, IPv4, bracketed IPv6, MAC addresses (see mask_extended_pii or individual flags). Use redact_keys to apply to text, query, response, and/or messages (recursive).

__init__(mask_paths: bool = True, mask_emails: bool = True, mask_secrets: bool = True, mask_ids: bool = True, mask_phones: bool = True, mask_id_cards: bool = True, mask_channel_ids: bool = True, mask_platform_open_ids: bool = True, mask_pem: bool = True, mask_jwt: bool = True, mask_urls: bool = False, mask_ips: bool = True, mask_macs: bool = True, path_replacement: str = '[PATH_REDACTED]', email_replacement: str = '[EMAIL_REDACTED]', secret_replacement: str = '[REDACTED]', id_replacement: str = '[ID_REDACTED]', phone_replacement: str = '[PHONE_REDACTED]', id_card_replacement: str = '[ID_CARD_REDACTED]', channel_id_replacement: str = '[CHANNEL_ID_REDACTED]', pem_replacement: str = '[PEM_REDACTED]', jwt_replacement: str = '[JWT_REDACTED]', url_replacement: str = '[URL_REDACTED]', ip_replacement: str = '[IP_REDACTED]', mac_replacement: str = '[MAC_REDACTED]', extra_patterns: List[Tuple[str, str]] | None = None, text_key: str = 'text', redact_keys: List[str] | None = None, messages_key: str | None = 'messages', **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses
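
A minimal sketch (the replacement strings shown are the documented defaults; the exact masking behavior depends on the built-in patterns):

```python
from data_juicer.ops.mapper import PiiRedactionMapper

# Redact text plus query/response and the recursive messages structure;
# additionally mask http(s) URLs, which are off by default.
op = PiiRedactionMapper(
    mask_urls=True,
    redact_keys=["text", "query", "response", "messages"],
)

sample = {"text": "Reach me at alice@example.com or +86 13800138000."}
sample = op.process_single(sample)
# Expected shape: "Reach me at [EMAIL_REDACTED] or [PHONE_REDACTED]."
```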

process_single(sample: dict) dict[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.PunctuationNormalizationMapper(*args, **kwargs)[source]#

Bases: Mapper

Normalizes unicode punctuations to their English equivalents in text samples.

This operator processes a batch of text samples and replaces any unicode punctuation with its corresponding English punctuation. The mapping includes common substitutions such as ‘，’ to ‘,’, ‘。’ to ‘.’, and ‘“’ to ‘"’. It iterates over each character in the text, replacing it if it is found in the predefined punctuation map. The result is a set of text samples with consistent punctuation formatting.

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.PythonFileMapper(*args, **kwargs)[source]#

Bases: Mapper

Executes a Python function defined in a file on input data.

This operator loads a specified Python function from a given file and applies it to the input data. The function must take exactly one argument and return a dictionary. The operator can process data either sample by sample or in batches, depending on the batched parameter. If the file path is not provided, the operator acts as an identity function, returning the input sample unchanged. The function is loaded dynamically, and its name and file path are configurable. Important notes:

  • The file must be a valid Python file (.py).

  • The function must be callable and accept exactly one argument.

  • The function’s return value must be a dictionary.

__init__(file_path: str = '', function_name: str = 'process_single', batched: bool = False, **kwargs)[source]#

Initialization method.

Parameters:
  • file_path – The path to the Python file containing the function to be executed.

  • function_name – The name of the function defined in the file to be executed.

  • batched – A boolean indicating whether to process input data in batches.

  • kwargs – Additional keyword arguments passed to the parent class.
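
A minimal sketch (my_op.py is a hypothetical file):

```python
# Contents of the hypothetical my_op.py -- the function must accept
# exactly one argument and return a dict:
#
#     def clean_text(sample):
#         sample["text"] = sample.get("text", "").strip()
#         return sample

from data_juicer.ops.mapper import PythonFileMapper

op = PythonFileMapper(file_path="my_op.py", function_name="clean_text")
print(op.process_single({"text": "  hello  "}))  # {'text': 'hello'}
```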

process_single(sample)[source]#

Invoke the loaded function with the provided sample.

process_batched(samples)[source]#

Invoke the loaded function with the provided samples.

class data_juicer.ops.mapper.PythonLambdaMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper for applying a Python lambda function to data samples.

This operator allows users to define a custom transformation using a Python lambda function. The lambda function is applied to each sample, and the result must be a dictionary. If the batched parameter is set to True, the lambda function will process a batch of samples at once. If no lambda function is provided, the identity function is used, which returns the input sample unchanged. The operator validates the lambda function to ensure it has exactly one argument and compiles it safely.

__init__(lambda_str: str = '', batched: bool = False, **kwargs)[source]#

Initialization method.

Parameters:
  • lambda_str – A string representation of the lambda function to be executed on data samples. If empty, the identity function is used.

  • batched – A boolean indicating whether to process input data in batches.

  • kwargs – Additional keyword arguments passed to the parent class.
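
A minimal sketch:

```python
from data_juicer.ops.mapper import PythonLambdaMapper

# The lambda must take exactly one argument and return a dict.
op = PythonLambdaMapper(
    lambda_str='lambda sample: {**sample, "text": sample["text"].lower()}'
)
print(op.process_single({"text": "Hello World"}))  # {'text': 'hello world'}
```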

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

process_batched(samples)[source]#
class data_juicer.ops.mapper.QuerySentimentDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Predicts user’s sentiment label (‘negative’, ‘neutral’, ‘positive’) in a query.

This mapper takes input from the specified query key and outputs the predicted sentiment label and its corresponding score. The results are stored in the Data-Juicer meta field under ‘query_sentiment_label’ and ‘query_sentiment_label_score’. It uses a Hugging Face model for sentiment detection. If a Chinese-to-English translation model is provided, it first translates the query from Chinese to English before performing sentiment analysis.

__init__(hf_model: str = 'mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_sentiment_label', score_key: str = 'query_sentiment_label_score', **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID to predict sentiment label.

  • zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.

  • model_params – model params for hf_model.

  • zh_to_en_model_params – model params for zh_to_en_hf_model.

  • label_key – The key name in the meta field to store the output label. It is ‘query_sentiment_label’ by default.

  • score_key – The key name in the meta field to store the corresponding label score. It is ‘query_sentiment_label_score’ by default.

  • kwargs – Extra keyword arguments.
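
A minimal sketch (for English-only corpora the translation step can be disabled by passing None):

```python
from data_juicer.ops.mapper import QuerySentimentDetectionMapper

op = QuerySentimentDetectionMapper(zh_to_en_hf_model=None)

samples = {"query": ["The product broke after one day, very disappointing."]}
out = op.process_batched(samples)
# The label and score are written to the meta field under
# 'query_sentiment_label' and 'query_sentiment_label_score'.
```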

process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.QueryIntentDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Predicts the user’s intent label and corresponding score for a given query. The operator uses a Hugging Face model to classify the intent of the input query. If the query is in Chinese, it can optionally be translated to English using another Hugging Face translation model before classification. The predicted intent label and its confidence score are stored in the meta field with the keys ‘query_intent_label’ and ‘query_intent_score’, respectively. If these keys already exist in the meta field, the operator will skip processing for those samples.

__init__(hf_model: str = 'bespin-global/klue-roberta-small-3i4k-intent-classification', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_intent_label', score_key: str = 'query_intent_label_score', **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID to predict intent label.

  • zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.

  • model_params – model params for hf_model.

  • zh_to_en_model_params – model params for zh_to_en_hf_model.

  • label_key – The key name in the meta field to store the output label. It is ‘query_intent_label’ by default.

  • score_key – The key name in the meta field to store the corresponding label score. It is ‘query_intent_label_score’ by default.

  • kwargs – Extra keyword arguments.

process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.QueryTopicDetectionMapper(*args, **kwargs)[source]#

Bases: Mapper

Predicts the topic label and its corresponding score for a given query. The input is taken from the specified query key. The output, which includes the predicted topic label and its score, is stored in the ‘query_topic_label’ and ‘query_topic_label_score’ fields of the Data-Juicer meta field. This operator uses a Hugging Face model for topic classification. If a Chinese to English translation model is provided, it will first translate the query from Chinese to English before predicting the topic.

  • Uses a Hugging Face model for topic classification.

  • Optionally translates Chinese queries to English using another Hugging Face model.

  • Stores the predicted topic label in ‘query_topic_label’.

  • Stores the corresponding score in ‘query_topic_label_score’.

__init__(hf_model: str = 'dstefa/roberta-base_topic_classification_nyt_news', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_topic_label', score_key: str = 'query_topic_label_score', **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID to predict topic label.

  • zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.

  • model_params – Model parameters for hf_model.

  • zh_to_en_model_params – Model parameters for zh_to_en_hf_model.

  • label_key – The key name in the meta field to store the output label. It is ‘query_topic_label’ by default.

  • score_key – The key name in the meta field to store the corresponding label score. It is ‘query_topic_label_score’ by default.

  • kwargs – Extra keyword arguments.

process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.RelationIdentityMapper(*args, **kwargs)[source]#

Bases: Mapper

Identify the relation between two entities in a given text.

This operator uses an API model to analyze the relationship between two specified entities in the text. It constructs a prompt with the provided system and input templates, then sends it to the API model for analysis. The output is parsed using a regular expression to extract the relationship. If the two entities are the same, the relationship is identified as “another identity.” The result is stored in the meta field under the key ‘role_relation’ by default. The operator retries the API call up to a specified number of times in case of errors. If drop_text is set to True, the original text is removed from the sample after processing.

DEFAULT_SYSTEM_PROMPT_TEMPLATE = '给定关于{entity1}和{entity2}的文本信息。判断{entity1}和{entity2}之间的关系。\n要求：\n- 关系用一个或多个词语表示，必要时可以加一个形容词来描述这段关系\n- 输出关系时不要参杂任何标点符号\n- 需要你进行合理的推理才能得出结论\n- 如果两个人物身份是同一个人，输出关系为：另一个身份\n- 输出格式为：\n分析推理：...\n所以{entity2}是{entity1}的：...\n- 注意输出的是{entity2}是{entity1}的什么关系，而不是{entity1}是{entity2}的什么关系'#
DEFAULT_INPUT_TEMPLATE = '关于{entity1}和{entity2}的文本信息：\n```\n{text}\n```\n'#
DEFAULT_OUTPUT_PATTERN_TEMPLATE = '\n        \\s*分析推理：\\s*(.*?)\\s*\n        \\s*所以{entity2}是{entity1}的：\\s*(.*?)\\Z\n    '#
__init__(api_model: str = 'gpt-4o', source_entity: str = None, target_entity: str = None, *, output_key: str = 'role_relation', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, output_pattern_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]#

Initialization method.

Parameters:
  • api_model – API model name.

  • source_entity – The source entity of the relation to be identified.
  • target_entity – The target entity of the relation to be identified.

  • output_key – The output key in the meta field of the samples. It is ‘role_relation’ by default.

  • api_endpoint – URL endpoint for the API.

  • response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.

  • system_prompt_template – System prompt template for the task.

  • input_template – Template for building the model input.

  • output_pattern_template – Regular expression template for parsing model output.

  • try_num – The number of retry attempts when there is an API call error or output parsing error.

  • drop_text – Whether to drop the original text from the sample after processing.

  • model_params – Parameters for initializing the API model.

  • sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • kwargs – Extra keyword arguments.
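
A sketch of wiring the op for one entity pair; ‘Alice’ and ‘Bob’ are placeholder entities, and a working OpenAI-compatible API setup is assumed:

```python
from data_juicer.ops.mapper import RelationIdentityMapper

op = RelationIdentityMapper(
    api_model="gpt-4o",
    source_entity="Alice",   # fills {entity1} in the templates above
    target_entity="Bob",     # fills {entity2}
    output_key="role_relation",
    try_num=3,               # retry API/parsing failures up to 3 times
)
sample = {"text": "Alice raised Bob on her own after moving to the city."}
sample = op.process_single(sample)
# The parsed relation should land in the sample's meta field
# under 'role_relation'.
```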

parse_output(raw_output)[source]#
process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.RemoveBibliographyMapper(*args, **kwargs)[source]#

Bases: Mapper

Removes bibliography sections at the end of LaTeX documents.

This operator identifies and removes bibliography sections in LaTeX documents. It uses a regular expression to match common bibliography commands such as appendix, begin{references}, begin{thebibliography}, and bibliography. The matched sections are removed from the text. The operator processes samples in batch mode for efficiency.

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveCommentsMapper(*args, **kwargs)[source]#

Bases: Mapper

Removes comments from documents, currently supporting only ‘tex’ format.

This operator removes inline and multiline comments from text samples. It supports both inline and multiline comment removal, controlled by the inline and multiline parameters. Currently, it is designed to work with ‘tex’ documents. The operator processes each sample in the batch and applies regular expressions to remove comments. The processed text is then updated in the original samples.

  • Inline comments are removed using the pattern `[^\\]%.+$`.

  • Multiline comments are removed using the pattern `^%.*\n?`.

Important notes:

  • Only ‘tex’ document type is supported at present.

  • The operator processes the text in place and updates the original samples.

__init__(doc_type: str | List[str] = 'tex', inline: bool = True, multiline: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • doc_type – Type of document to remove comments from.

  • inline – Whether to remove inline comments.

  • multiline – Whether to remove multiline comments.

  • args – extra args

  • kwargs – extra args
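
A sketch of stripping TeX comments; the sample text is illustrative, and exact whitespace handling follows the regex patterns above:

```python
from data_juicer.ops.mapper import RemoveCommentsMapper

# Remove both inline and multiline (full-line) TeX comments.
op = RemoveCommentsMapper(doc_type="tex", inline=True, multiline=True)

samples = {"text": ["% preamble note\n\\section{Intro} body text % trailing remark\n"]}
result = op.process_batched(samples)
# The full-line comment and the trailing '% ...' remark should be gone.
```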

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveHeaderMapper(*args, **kwargs)[source]#

Bases: Mapper

Removes headers at the beginning of documents in LaTeX samples.

This operator identifies and removes headers such as chapter, part, section, subsection, subsubsection, paragraph, and subparagraph. It uses a regular expression to match these headers. If a sample does not contain any headers and drop_no_head is set to True, the sample text will be removed. Otherwise, the sample remains unchanged. The operator processes samples in batches for efficiency.

__init__(drop_no_head: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • drop_no_head – whether to drop sample texts without headers.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveLongWordsMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to remove long words within a specific range.

This operator filters out words in the text that are either shorter than the specified minimum length or longer than the specified maximum length. Words are first checked with their original length, and if they do not meet the criteria, they are stripped of special characters and re-evaluated. The key metric used is the character-based length of each word. The processed text retains only the words that fall within the defined length range. This operator processes text in batches for efficiency.

__init__(min_len: int = 1, max_len: int = 9223372036854775807, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • min_len – The minimum word length in this op; words shorter than this will be filtered out.

  • max_len – The maximum word length in this op; words longer than this will be filtered out.

  • args – extra args

  • kwargs – extra args
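
A minimal sketch, assuming the default text_key of ‘text’ and the dict-of-lists batch format:

```python
from data_juicer.ops.mapper import RemoveLongWordsMapper

# Keep only words whose length falls within [2, 15] characters.
op = RemoveLongWordsMapper(min_len=2, max_len=15)

samples = {"text": ["a pneumonoultramicroscopicsilicovolcanoconiosis case was recorded"]}
result = op.process_batched(samples)
# 'a' (too short) and the 45-letter word (too long) should be dropped.
```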

should_keep_long_word(word)[source]#
process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveNonChineseCharacterlMapper(*args, **kwargs)[source]#

Bases: Mapper

Removes non-Chinese characters from text samples.

This mapper removes all characters that are not part of the Chinese character set.

  • It can optionally keep alphabets, numbers, and punctuation based on the configuration.

  • The removal is done using a regular expression pattern.

  • The pattern is constructed to exclude or include alphabets, numbers, and punctuation as specified.

  • The key metric for this operation is the presence of non-Chinese characters, which are removed.

  • The operator processes samples in a batched manner.

__init__(keep_alphabet: bool = True, keep_number: bool = True, keep_punc: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • keep_alphabet – whether to keep alphabet

  • keep_number – whether to keep number

  • keep_punc – whether to keep punctuation

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveRepeatSentencesMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to remove repeat sentences in text samples.

This operator processes text samples to remove duplicate sentences. It splits the text into lines and then further splits each line into sentences. Sentences are considered duplicates if they are identical after optional case normalization and special character removal. The operator uses a hash set to track unique sentences. Sentences shorter than min_repeat_sentence_length are not deduplicated. If ignore_special_character is enabled, special characters (all except Chinese, letters, and numbers) are ignored when checking for duplicates. The resulting text is reassembled with unique sentences.

__init__(lowercase: bool = False, ignore_special_character: bool = True, min_repeat_sentence_length: int = 2, tokenizer: Callable[[str], list[str]] | str | None = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • lowercase – Whether to convert sample text to lower case

  • ignore_special_character – Whether to ignore special characters when judging repeated sentences. Special characters are all characters except Chinese characters, letters and numbers.

  • min_repeat_sentence_length – Sentences shorter than this length will not be deduplicated. If ignore_special_character is set to True, then special characters are not included in this length.

  • tokenizer – Custom sentence tokenizer. Can be a callable that takes a string and returns a list of sentence strings, or a lambda string for YAML configs (e.g. "lambda text: __import__('nltk').sent_tokenize(text)"). If None, uses the built-in regex-based splitter.

  • args – extra args

  • kwargs – extra args
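
Because tokenizer also accepts a callable, a custom splitter can be passed directly in Python; a sketch (the naive period-based splitter is illustrative, while the lambda-string form above is for YAML configs):

```python
from data_juicer.ops.mapper import RemoveRepeatSentencesMapper

op = RemoveRepeatSentencesMapper(
    lowercase=True,
    # Illustrative custom sentence tokenizer: split on '. '.
    tokenizer=lambda text: [s for s in text.split(". ") if s],
)
samples = {"text": ["It rains today. It rains today. We stay inside."]}
result = op.process_batched(samples)
# The duplicated 'It rains today' sentence should appear only once.
```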

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveSpecificCharsMapper(*args, **kwargs)[source]#

Bases: Mapper

Removes specific characters from text samples.

This operator removes specified characters from the text. The characters to be removed can be provided as a string or a list of strings. If no characters are specified, the default set includes special and non-alphanumeric characters. The operator processes the text using a regular expression pattern that matches any of the specified characters and replaces them with an empty string. This is done in a batched manner for efficiency.

__init__(chars_to_remove: str | List[str] = '◆●■►▼▲▴∆▻▷❖♡□', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • chars_to_remove – a list or a string including all characters that need to be removed from text.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveTableTextMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to remove table texts from text samples.

This operator uses regular expressions to identify and remove tables from the text. It targets tables with a specified range of columns, defined by the minimum and maximum number of columns. The operator iterates over each sample, applying the regex pattern to remove tables that match the column criteria. The processed text, with tables removed, is then stored back in the sample. This operation is batched for efficiency.

__init__(min_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 2, max_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 20, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • min_col – The min number of columns of table to remove.

  • max_col – The max number of columns of table to remove.

  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
class data_juicer.ops.mapper.RemoveWordsWithIncorrectSubstringsMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to remove words containing specified incorrect substrings.

This operator processes text by removing words that contain any of the specified incorrect substrings. By default, it removes words with substrings like “http”, “www”, “.com”, “href”, and “//”. The operator can operate in tokenized or non-tokenized mode. In tokenized mode, it uses a Hugging Face tokenizer to tokenize the text before processing. No stats are cached; this operator only filters out the matching words.

  • If tokenization is True, the text is tokenized using a Hugging Face tokenizer, and words are filtered based on the specified substrings.

  • If tokenization is False, the text is split into sentences and words, and words are filtered based on the specified substrings.

  • The filtered text is then merged back into a single string.

The operator processes samples in batches and updates the text in place.

__init__(lang: str = 'en', tokenization: bool = False, substrings: List[str] | None = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • lang – the language of the samples.

  • tokenization – whether to use a model to tokenize the documents.

  • substrings – The incorrect substrings in words.

  • args – extra args

  • kwargs – extra args

should_keep_word_with_incorrect_substrings(word, substrings)[source]#
process_batched(samples)[source]#
class data_juicer.ops.mapper.ReplaceContentMapper(*args, **kwargs)[source]#

Bases: Mapper

Replaces content in the text that matches a specific regular expression pattern with a designated replacement string.

This operator processes text by searching for patterns defined in pattern and replacing them with the corresponding repl string. If multiple patterns and replacements are provided, each pattern is replaced by its respective replacement. The operator supports both single and multiple patterns and replacements. The regular expressions are compiled with the re.DOTALL flag to match across multiple lines. If the numbers of patterns and replacements do not match, a ValueError is raised. This operation is batched, meaning it processes multiple samples at once.

__init__(pattern: str | List[str] | None = None, repl: str | List[str] = '', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • pattern – regular expression pattern(s) to search for within text

  • repl – replacement string(s), default is empty string

  • args – extra args

  • kwargs – extra args
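
A sketch with two patterns, each mapped to its own replacement (the sample strings are illustrative):

```python
from data_juicer.ops.mapper import ReplaceContentMapper

# Pattern and replacement lists must have equal length.
op = ReplaceContentMapper(
    pattern=[r"\d{3}-\d{4}", r"<[^>]+>"],  # phone-like digits, HTML tags
    repl=["[PHONE]", ""],
)
samples = {"text": ["Call 555-0199 or visit <a href='x'>our site</a>."]}
result = op.process_batched(samples)
# Expected: "Call [PHONE] or visit our site."
```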

process_batched(samples)[source]#
class data_juicer.ops.mapper.S3DownloadFileMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to download files from S3 to local files or load them into memory.

This operator downloads files from S3 URLs (s3://…) or handles local files. It supports:

  • Downloading multiple files concurrently

  • Saving files to a specified directory or loading content into memory

  • Resume download functionality

  • S3 authentication with access keys

  • Custom S3 endpoints (for S3-compatible services like MinIO)

The operator processes nested lists of URLs/paths, maintaining the original structure in the output.

__init__(download_field: str = None, save_dir: str = None, save_field: str = None, resume_download: bool = False, timeout: int = 30, max_concurrent: int = 10, aws_access_key_id: str = None, aws_secret_access_key: str = None, aws_session_token: str = None, aws_region: str = None, endpoint_url: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • download_field – The field name to get the URL/path to download.

  • save_dir – The directory to save downloaded files.

  • save_field – The field name to save the downloaded file content.

  • resume_download – Whether to resume downloads. If True, skip the download when the target file already exists.

  • timeout – (Deprecated) Kept for backward compatibility, not used for S3 downloads.

  • max_concurrent – Maximum concurrent downloads.

  • aws_access_key_id – AWS access key ID for S3.

  • aws_secret_access_key – AWS secret access key for S3.

  • aws_session_token – AWS session token for S3 (optional).

  • aws_region – AWS region for S3.

  • endpoint_url – Custom S3 endpoint URL (for S3-compatible services).

  • args – extra args

  • kwargs – extra args
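
A sketch of pointing the op at an S3-compatible MinIO endpoint; the field name, bucket, credentials, and URLs are placeholders:

```python
from data_juicer.ops.mapper import S3DownloadFileMapper

op = S3DownloadFileMapper(
    download_field="videos",               # field holding s3:// URLs
    save_dir="./downloads",                # save to disk instead of memory
    max_concurrent=5,
    aws_access_key_id="minio-user",
    aws_secret_access_key="minio-pass",
    endpoint_url="http://localhost:9000",  # S3-compatible service
)
# Nested URL lists keep their structure in the output.
samples = {"videos": [["s3://my-bucket/a.mp4", "s3://my-bucket/b.mp4"]]}
samples = op.process_batched(samples)
```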

property s3_client#

Lazy initialization of S3 client to avoid serialization issues with Ray.

async download_files_async(urls, return_contents, save_dir=None, **kwargs)[source]#

Download files asynchronously from S3.

async download_nested_urls(nested_urls: List[str | List[str]], save_dir=None, save_field_contents=None)[source]#

Download nested URLs with structure preservation.

process_batched(samples)[source]#

Process a batch of samples.

class data_juicer.ops.mapper.LLMExtractMapper(*args, **kwargs)[source]#

Bases: Mapper

Extract structured fields from text using an LLM; write results to meta.

Input: sample[input_keys] -> concatenated as input text. Output: meta[meta_output_key] (dict) or meta[out_key] per output_schema key. Uses user-provided output_schema (key -> instruction); supports knowledge_grounding via sample key or fixed string.

__init__(input_keys: List[str], output_schema: Dict[str, str], api_or_hf_model: str = 'gpt-4o', *, meta_output_key: str | None = 'llm_extract', knowledge_grounding_key: str | None = None, knowledge_grounding_fixed: str | None = None, is_hf_model: bool = False, enable_vllm: bool = False, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, strategy: InferenceStrategy | None = None, examples: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]#

Initialization method.

Parameters:
  • input_keys – Sample keys used to build the input text (e.g. [“text”] or [“query”, “response”]).

  • output_schema – {output_key: “extraction instruction”}.

  • api_or_hf_model – Model name for API or HuggingFace.

  • meta_output_key – If set, write the full result to meta[meta_output_key].

  • knowledge_grounding_key – Optional sample key for per-sample grounding.

  • knowledge_grounding_fixed – Optional fixed grounding string.

  • strategy – Prompt strategy for extraction (direct/cot/few_shot/cot_shot).

  • examples – Optional examples text used by few-shot strategies.

  • try_num – Retries on parse/API failure.
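
A sketch with a hypothetical two-key schema; a working API setup is assumed for the actual call:

```python
from data_juicer.ops.mapper import LLMExtractMapper

op = LLMExtractMapper(
    input_keys=["text"],
    output_schema={
        "person": "Name of the main person mentioned, or 'none'.",
        "location": "City or country the text is about, or 'none'.",
    },
    api_or_hf_model="gpt-4o",
    meta_output_key="llm_extract",
    try_num=3,
)
sample = {"text": "Marie Curie spent most of her working life in Paris."}
sample = op.process_single(sample)
# meta['llm_extract'] should hold a dict with 'person' and 'location' keys.
```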

process_single(sample: Dict, rank: int | None = None) Dict[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.S3UploadFileMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to upload local files to S3 and update paths to S3 URLs.

This operator uploads files from local paths to S3 storage. It supports:

  • Uploading multiple files concurrently

  • Updating file paths in the dataset to S3 URLs

  • Optional deletion of local files after successful upload

  • Custom S3 endpoints (for S3-compatible services like MinIO)

  • Skipping already uploaded files (based on S3 key)

The operator processes nested lists of paths, maintaining the original structure in the output.

__init__(upload_field: str = None, s3_bucket: str = None, s3_prefix: str = '', aws_access_key_id: str = None, aws_secret_access_key: str = None, aws_session_token: str = None, aws_region: str = None, endpoint_url: str = None, remove_local: bool = False, skip_existing: bool = True, max_concurrent: int = 10, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • upload_field – The field name containing file paths to upload.

  • s3_bucket – S3 bucket name to upload files to.

  • s3_prefix – Prefix (folder path) in S3 bucket. E.g., ‘videos/’ or ‘data/videos/’.

  • aws_access_key_id – AWS access key ID for S3.

  • aws_secret_access_key – AWS secret access key for S3.

  • aws_session_token – AWS session token for S3 (optional).

  • aws_region – AWS region for S3.

  • endpoint_url – Custom S3 endpoint URL (for S3-compatible services).

  • remove_local – Whether to delete local files after successful upload.

  • skip_existing – Whether to skip uploading if file already exists in S3.

  • max_concurrent – Maximum concurrent uploads.

  • args – extra args

  • kwargs – extra args

property s3_client#

Lazy initialization of S3 client to avoid serialization issues with Ray.

async upload_files_async(paths: List[str]) List[tuple][source]#

Upload multiple files asynchronously.

Parameters:

paths – List of local file paths

Returns:

List of (idx, status, s3_url, error_message) tuples

async upload_nested_paths(nested_paths: List[str | List[str]])[source]#

Upload nested paths with structure preservation.

Parameters:

nested_paths – Nested list of file paths

Returns:

(reconstructed_paths, failed_info)

process_batched(samples)[source]#

Process a batch of samples.

class data_juicer.ops.mapper.SDXLPrompt2PromptMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates pairs of similar images using the SDXL model.

This operator uses a Hugging Face diffusion model to generate image pairs based on two text prompts. The quality and similarity of the generated images are controlled by parameters such as num_inference_steps and guidance_scale. The first and second text prompts are specified using text_key and text_key_second, respectively. The generated images are saved in the specified output_dir with unique filenames. The operator requires both text keys to be set for processing.

__init__(hf_diffusion: str = 'stabilityai/stable-diffusion-xl-base-1.0', trust_remote_code=False, torch_dtype: str = 'fp32', num_inference_steps: float = 50, guidance_scale: float = 7.5, text_key=None, text_key_second=None, output_dir='/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_diffusion – diffusion model name on huggingface to generate the image.

  • trust_remote_code – whether to trust the remote code of HF models.

  • torch_dtype – the floating point type used to load the diffusion model.

  • num_inference_steps – The larger the value, the better the image generation quality; however, this also increases the time required for generation.

  • guidance_scale – A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
  • text_key – the key name used to store the first caption in the caption pair.

  • text_key_second – the key name used to store the second caption in the caption pair.

  • output_dir – the storage location of the generated images.

process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.SentenceAugmentationMapper(*args, **kwargs)[source]#

Bases: Mapper

Augments sentences by generating enhanced versions using a Hugging Face model. This operator enhances input sentences by generating new, augmented versions. It is designed to work best with individual sentences rather than full documents. For optimal results, ensure the input text is at the sentence level. The augmentation process uses a Hugging Face model, such as lmsys/vicuna-13b-v1.5 or Qwen/Qwen2-7B-Instruct. The operator requires specifying both the primary and secondary text keys, where the augmented sentence will be stored in the secondary key. The generation process can be customized with parameters like temperature, top-p sampling, and beam search size.

__init__(hf_model: str = 'Qwen/Qwen2-7B-Instruct', system_prompt: str = None, task_sentence: str = None, max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, text_key=None, text_key_second=None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID.

  • system_prompt – System prompt.

  • task_sentence – The instruction for the current task.

  • max_new_tokens – the maximum number of new tokens generated by the model.
  • temperature – used to control the randomness of generated text. The higher the temperature, the more random and creative the generated text will be.

  • top_p – randomly select the next word from the group of words whose cumulative probability reaches p.

  • num_beams – the larger the beam search size, the higher the quality of the generated text.

  • text_key – the key name used to store the first sentence in the text pair. (optional, default=’text’)

  • text_key_second – the key name used to store the second sentence in the text pair.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.SentenceSplitMapper(*args, **kwargs)[source]#

Bases: Mapper

Splits text samples into individual sentences based on the specified language.

This operator uses an NLTK-based tokenizer to split the input text into sentences. The language for the tokenizer is specified during initialization. The original text in each sample is replaced with a list of sentences. This operator processes samples in batches for efficiency. Ensure that the lang parameter is set to the appropriate language code (e.g., “en” for English) to achieve accurate sentence splitting.

__init__(lang: str = 'en', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • lang – split sentence of text in which language.

  • args – extra args

  • kwargs – extra args
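
A minimal sketch, assuming the default text_key and locally available NLTK resources:

```python
from data_juicer.ops.mapper import SentenceSplitMapper

op = SentenceSplitMapper(lang="en")  # NLTK-based English sentence splitting
samples = {"text": ["Dr. Smith arrived. He sat down."]}
result = op.process_batched(samples)
# Per the description above, each text is replaced by its sentence split;
# the NLTK tokenizer should not break on the abbreviation 'Dr.'.
```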

process_batched(samples)[source]#
class data_juicer.ops.mapper.TextChunkMapper(*args, **kwargs)[source]#

Bases: Mapper

Split input text into chunks based on specified criteria.

  • Splits the input text into multiple chunks using a specified maximum length and a split pattern.

  • If max_len is provided, the text is split into chunks with a maximum length of max_len.

  • If split_pattern is provided, the text is split at occurrences of the pattern. If the length exceeds max_len, it will force a cut.

  • The overlap_len parameter specifies the overlap length between consecutive chunks if the split does not occur at the pattern.

  • Uses a Hugging Face tokenizer to calculate the text length in tokens if a tokenizer name is provided; otherwise, it uses the string length.

  • Caches the following stats: ‘chunk_count’ (number of chunks generated for each sample).

  • Raises a ValueError if both max_len and split_pattern are None or if overlap_len is greater than or equal to max_len.

__init__(max_len: Annotated[int, Gt(gt=0)] | None = None, split_pattern: str | None = '\\n\\n', overlap_len: Annotated[int, Ge(ge=0)] = 0, tokenizer: str | None = None, trust_remote_code: bool = False, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • max_len – Split text into multi texts with this max len if not None.

  • split_pattern – Split at occurrences of this pattern if it is not None, and force a cut if a chunk’s length still exceeds max_len.

  • overlap_len – Overlap length between consecutive chunks when the split does not occur at the pattern.

  • tokenizer – The tokenizer name of Hugging Face tokenizers. The text length will be calculated as the token number if it is provided; otherwise, the text length equals the string length. Supports tiktoken tokenizers (such as gpt-4o), dashscope tokenizers (such as qwen2.5-72b-instruct) and huggingface tokenizers.

  • trust_remote_code – whether to trust the remote code of HF models.

  • args – extra args

  • kwargs – extra args
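
A sketch combining a split pattern with a forced cut; without a tokenizer, lengths are plain string lengths:

```python
from data_juicer.ops.mapper import TextChunkMapper

# Split on blank lines; force-cut chunks above 200 characters with a
# 20-character overlap between consecutive forced chunks.
op = TextChunkMapper(max_len=200, split_pattern="\n\n", overlap_len=20)

samples = {"text": ["first paragraph\n\nsecond paragraph\n\n" + "x" * 500]}
result = op.process_batched(samples)
# One input sample may expand into several chunked samples; the
# 'chunk_count' stat records how many were produced.
```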

recursively_chunk(text)[source]#
get_text_chunks(text, rank=None)[source]#
process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.TextTaggingByPromptMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to generate text tags for samples by prompting an LLM. Other open-sourced models with good instruction-following ability also work.

__init__(hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', trust_remote_code: bool = False, prompt: str = '\n请对下面的example文本回复的任务类别进行检测,并进行分类。\n备选的分类包括：{tag_list}。\n只回复对应的分类,不回复其他内容。\nexample文本:\n{text}\n', tag_list: List[str] = ['数学', '代码', '翻译', '角色扮演', '开放领域问答', '特定领域问答', '提取', '生成', '头脑风暴', '分类', '总结', '改写', '其他'], enable_vllm: bool = True, tensor_parallel_size: int = None, max_model_len: int = None, max_num_seqs: int = 256, model_params: Dict = None, sampling_params: Dict = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – Huggingface model ID.

  • trust_remote_code – passed to transformers.

  • prompt – the prompt used to generate text tags.

  • tag_list – the list of tagging output options.

  • enable_vllm – Whether to use vllm for inference acceleration.

  • tensor_parallel_size – It is only valid when enable_vllm is True. The number of GPUs to use for distributed execution with tensor parallelism.

  • max_model_len – It is only valid when enable_vllm is True. Model context length. If unspecified, will be automatically derived from the model config.

  • max_num_seqs – It is only valid when enable_vllm is True. Maximum number of sequences to be processed in a single iteration.

  • model_params – Parameters for model initialization.

  • sampling_params – Sampling parameters for text generation, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • args – extra args

  • kwargs – extra args

The default data format parsed by this interface is as follows:

Model Input:

请对下面的example文本回复的任务类别进行检测，并进行分类。备选的分类包括：["数学", "代码", "翻译", "角色扮演", "开放领域问答", "特定领域问答", "提取", "生成", "头脑风暴", "分类", "总结", "改写", "其他"]。只回复对应的分类，不回复其他内容。
example文本:
{
    "instruction": "找出方程 x2 - 3x = 0 的根。",
    "input": "",
    "output": "该方程可以写成 x(x-3)=0。根据乘法原理，x = 0或x - 3 = 0。因此，x1 = 0和x2 = 3是方程 x2 - 3x = 0 的两个根。"
}

Model Output:

数学

process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.ToolSuccessTaggerMapper(*args, **kwargs)[source]#

Bases: Mapper

Set meta tool_success_count, tool_fail_count, tool_success_ratio.

Scans messages for role=tool; configurable success/error patterns.

__init__(messages_key: str = 'messages', tool_role_names: List[str] | None = None, success_patterns: List[str] | None = None, error_patterns: List[str] | None = None, store_per_tool_results: bool = True, **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.UsageCounterMapper(*args, **kwargs)[source]#

Bases: Mapper

Write token usage to meta from choices/usage (OpenAI/Anthropic-style).

Collects every non-empty usage dict found (top-level usage_key, response_metadata, each choices[] entry, nested message usage). By default, identical usage snapshots are deduplicated before summing: the same (prompt_tokens, completion_tokens, total_tokens or prompt+completion) tuple counts only once (typical when the response usage mirrors choices[0].usage). Set dedupe_identical_usage: false to restore the legacy double-counting.

__init__(choices_key: str = 'choices', usage_key: str = 'usage', response_metadata_key: str = 'response_metadata', dedupe_identical_usage: bool = True, **kwargs)[source]#

Base class that conducts data editing.

Parameters:
  • text_key – the key name of field that stores sample texts to be processed.

  • image_key – the key name of field that stores sample image list to be processed

  • audio_key – the key name of field that stores sample audio list to be processed

  • video_key – the key name of field that stores sample video list to be processed

  • image_bytes_key – the key name of field that stores sample image bytes list to be processed

  • query_key – the key name of field that stores sample queries

  • response_key – the key name of field that stores responses

  • history_key – the key name of field that stores history of queries and responses
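
A sketch illustrating the deduplication behavior described above, with an OpenAI-style field layout:

```python
from data_juicer.ops.mapper import UsageCounterMapper

op = UsageCounterMapper()  # dedupe_identical_usage=True by default
usage = {"prompt_tokens": 120, "completion_tokens": 30, "total_tokens": 150}
sample = {
    "usage": usage,
    # choices[0].usage mirrors the top-level usage; with deduplication on,
    # this identical snapshot is counted once instead of doubling totals.
    "choices": [{"usage": dict(usage)}],
}
sample = op.process_single(sample)
# The summed meta usage should reflect 120/30/150, not 240/60/300.
```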

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VggtMapper(*args, **kwargs)[source]#

Bases: Mapper

Input a video of a single scene, and use VGGT to extract information including Camera Pose, Depth Maps, Point Maps, and 3D Point Tracks.

  • The operator processes a video and extracts frames based on the specified frame number and duration.

  • It uses the VGGT model to analyze the extracted frames and generate various outputs such as camera parameters, depth maps, point maps, and 3D point tracks.

  • If 3D point tracks are required, the user must provide query points in the format [x, y], relative to the top-left corner.

  • The results are stored in the sample’s metadata under the specified tag field name, which defaults to ‘vggt_tags’.

  • The operator can output camera parameters, depth maps, point maps from projection, point maps from unprojection, and 3D point tracks, depending on the configuration.

  • The VGGT model is loaded from the provided path, and the operator runs in CUDA mode if available.

__init__(vggt_model_path: str = 'facebook/VGGT-1B', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, tag_field_name: str = 'vggt_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_output_camera_parameters: bool = True, if_output_depth_maps: bool = True, if_output_point_maps_from_projection: bool = True, if_output_point_maps_from_unprojection: bool = True, if_output_point_tracks: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • vggt_model_path – The path to the VGGT model.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • tag_field_name – The field name to store the tags. It’s “vggt_tags” by default.

  • frame_dir – Output directory to save extracted frames.

  • if_output_camera_parameters – Determines whether to output camera parameters.

  • if_output_depth_maps – Determines whether to output depth maps.

  • if_output_point_maps_from_projection – Determines whether to output point maps directly inferred by VGGT.

  • if_output_point_maps_from_unprojection – Determines whether to output point maps constructed from depth maps and camera parameters.

  • if_output_point_tracks – Determines whether to output point tracks. If point tracks are required, the user should provide a list where each element consists of 2D point coordinates (list shape: (N, 2)). The point coordinates should be specified in the format [x, y], relative to the top-left corner, where x/y values are non-normalized.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoCameraCalibrationStaticDeepcalibMapper(*args, **kwargs)[source]#

Bases: Mapper

Compute the camera intrinsics and field of view (FOV) for a static camera using DeepCalib.

__init__(model_path: str = 'weights_10_0.02.h5', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, tag_field_name: str = 'static_camera_calibration_deepcalib_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_output_info: bool = True, output_info_dir: str = '/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • model_path – The path to the DeepCalib Regression model.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • tag_field_name – The field name to store the tags. It’s “static_camera_calibration_deepcalib_tags” by default.

  • frame_dir – Output directory to save extracted frames.

  • if_output_info – Whether to save the camera parameters results to a JSON file.

  • output_info_dir – Output directory for saving camera parameters.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoCameraCalibrationStaticMogeMapper(*args, **kwargs)[source]#

Bases: Mapper

Compute the camera intrinsics and field of view (FOV) for a static camera using Moge-2 (more accurate than DeepCalib).

__init__(model_path: str = 'Ruicheng/moge-2-vitl', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, tag_field_name: str = 'static_camera_calibration_moge_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_output_info: bool = True, output_info_dir: str = '/home/runner/.cache/data_juicer/assets', if_output_points_info: bool = True, if_output_depth_info: bool = True, if_output_mask_info: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • model_path – The path to the Moge-2 model.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • tag_field_name – The field name to store the tags. It’s “static_camera_calibration_moge_tags” by default.

  • frame_dir – Output directory to save extracted frames.

  • if_output_info – Whether to save the camera parameters results to a JSON file.

  • output_info_dir – Output directory for saving camera parameters.

  • if_output_points_info – Determines whether to output point map in OpenCV camera coordinate system (x right, y down, z forward). For MoGe-2, the point map is in metric scale.

  • if_output_depth_info – Determines whether to output depth maps.

  • if_output_mask_info – Determines whether to output a binary mask for valid pixels.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoCaptioningFromAudioMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to caption a video according to its audio streams based on Qwen-Audio model.

__init__(keep_original_sample: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only captioned samples in the final datasets and the original samples will be removed. It’s True by default.

  • args – extra args

  • kwargs – extra args

process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.VideoCaptioningFromFramesMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates video captions from sampled frames using an image-to-text model. Captions from different frames are concatenated into a single string.

  • Uses a Hugging Face image-to-text model to generate captions for sampled video frames.

  • Supports different frame sampling methods: ‘all_keyframes’ or ‘uniform’.

  • Can apply horizontal and vertical flips to the frames before captioning.

  • Offers multiple strategies for retaining generated captions: ‘random_any’, ‘similar_one_simhash’, or ‘all’.

  • Optionally keeps the original sample in the final dataset.

  • Allows setting a global prompt or per-sample prompts to guide caption generation.

  • Generates a specified number of candidate captions per video, which can be reduced based on the selected retention strategy.

  • The number of output samples depends on the retention strategy and whether original samples are kept.

__init__(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_field: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, text_update_strategy: str = 'rewrite', caption_field: str | None = None, legacy_split_by_text_token: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_img2seq – model name on huggingface to generate caption

  • trust_remote_code – whether to trust the remote code of HF models.

  • caption_num – how many candidate captions to generate for each video

  • keep_candidate_mode –

    retain strategy for the generated $caption_num$ candidates.

    ‘random_any’: Retain a random one from the generated captions.

    ‘similar_one_simhash’: Retain the generated one that is most similar to the original caption.

    ‘all’: Retain all generated captions by concatenation.

Note

This is a batched_OP, whose input and output type are both list. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ mode, the number of total samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False. For example, with $N=1$, $b=4$ and $M=3$, ‘all’ mode with keep_original_sample=True yields $(1+3) \cdot 1 \cdot 4 = 16$ samples.

Parameters:
  • keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True by default.

  • prompt – a string prompt to guide the generation of the image-to-text model for all samples globally. It’s None by default, which means no prompt is provided.

  • prompt_key – the key name of the field in samples that stores per-sample prompts. It’s used to set different prompts for different samples. If it’s None, the prompt in parameter “prompt” is used. It’s None by default.

  • frame_field – the field name of video frames to generate caption. If frame_field is None, extract frames from the video field.

  • frame_sampling_method – sampling method of extracting frame videos from the videos. Should be one of [“all_keyframes”, “uniform”]. Only works when “frame_field” is None. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.

  • frame_num – the number of frames to be extracted uniformly from the video frames. Only works when “frame_sampling_method” is “uniform” or “frame_field” is given. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.

  • horizontal_flip – flip frame video horizontally (left to right).

  • vertical_flip – flip frame video vertically (top to bottom).

  • text_update_strategy – strategy to update the text field after caption generation. Can be one of [‘keep_origin’, ‘rewrite’]. ‘keep_origin’: keep the original text unchanged. ‘rewrite’: rewrite the text field with the generated captions concatenated by special tokens.

  • caption_field – the field name to save the generated captions.

  • legacy_split_by_text_token – Whether to split by special tokens (e.g. <__dj__video>) in the text field and read videos in order, or use the ‘videos’ or ‘frames’ field directly.

  • args – extra args

  • kwargs – extra args

process_batched(samples, rank=None, context=False)[source]#
Parameters:

samples

Returns:

Note

This is a batched_OP, whose input and output type are both list. Suppose there are $N$ input sample lists with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for ‘random_any’ and ‘similar_one’ mode, and $(1+M)Nb$ for ‘all’ mode.

class data_juicer.ops.mapper.VideoCaptioningFromSummarizerMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to generate video captions by summarizing several kinds of generated texts (captions from video/audio/frames, tags from audio/frames, …).

__init__(hf_summarizer: str = None, trust_remote_code: bool = False, consider_video_caption_from_video: bool = True, consider_video_caption_from_audio: bool = True, consider_video_caption_from_frames: bool = True, consider_video_tags_from_audio: bool = True, consider_video_tags_from_frames: bool = True, vid_cap_from_vid_args: Dict | None = None, vid_cap_from_frm_args: Dict | None = None, vid_tag_from_aud_args: Dict | None = None, vid_tag_from_frm_args: Dict | None = None, keep_tag_num: Annotated[int, Gt(gt=0)] = 5, keep_original_sample: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_summarizer – the summarizer model used to summarize texts generated by other methods.

  • trust_remote_code – whether to trust the remote code of HF models.

  • consider_video_caption_from_video – whether to consider the video caption generated from video directly in the summarization process. Default: True.

  • consider_video_caption_from_audio – whether to consider the video caption generated from audio streams in the video in the summarization process. Default: True.

  • consider_video_caption_from_frames – whether to consider the video caption generated from sampled frames from the video in the summarization process. Default: True.

  • consider_video_tags_from_audio – whether to consider the video tags generated from audio streams in the video in the summarization process. Default: True.

  • consider_video_tags_from_frames – whether to consider the video tags generated from sampled frames from the video in the summarization process. Default: True.

  • vid_cap_from_vid_args – the arg dict for video captioning from video directly with keys are the arg names and values are the arg values. Default: None.

  • vid_cap_from_frm_args – the arg dict for video captioning from sampled frames from the video with keys are the arg names and values are the arg values. Default: None.

  • vid_tag_from_aud_args – the arg dict for video tagging from audio streams in the video with keys are the arg names and values are the arg values. Default: None.

  • vid_tag_from_frm_args – the arg dict for video tagging from sampled frames from the video with keys are the arg names and values are the arg values. Default: None.

  • keep_tag_num – max number N of tags from sampled frames to keep. Too many tags might bring negative influence to summarized text, so we consider to only keep the N most frequent tags. Default: 5.

  • keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only summarized captions in the final datasets and the original captions will be removed. It’s True in default.

  • args – extra args

  • kwargs – extra args

process_batched(samples, rank=None)[source]#
class data_juicer.ops.mapper.VideoCaptioningFromVideoMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates video captions using a Hugging Face video-to-text model and sampled video frames.

This operator processes video samples to generate captions based on the provided video frames. It uses a Hugging Face video-to-text model, such as ‘kpyu/video-blip-opt-2.7b-ego4d’, to generate multiple caption candidates for each video. The number of generated captions and the strategy to keep or filter these candidates can be configured. The operator supports different frame sampling methods, including extracting all keyframes or uniformly sampling a specified number of frames. Additionally, it allows for horizontal and vertical flipping of the frames. The final output can include both the original sample and the generated captions, depending on the configuration.

__init__(hf_video_blip: str = 'kpyu/video-blip-opt-2.7b-ego4d', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_field: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, text_update_strategy: str = 'rewrite', caption_field: str | None = None, legacy_split_by_text_token: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_video_blip – video-blip model name on huggingface to generate caption

  • trust_remote_code – whether to trust the remote code of HF models.

  • caption_num – how many candidate captions to generate for each video

  • keep_candidate_mode –

    retain strategy for the generated $caption_num$ candidates.

    ‘random_any’: Retain a random one from the generated captions.

    ‘similar_one_simhash’: Retain the generated one that is most similar to the original caption.

    ‘all’: Retain all generated captions by concatenation.

Note

This is a batched_OP, whose input and output type are both list. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ mode, the number of total samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.

Parameters:
  • keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True by default.

  • prompt – a string prompt to guide the generation of the video-blip model for all samples globally. It’s None by default, which means no prompt is provided.

  • prompt_key – the key name of the field in samples that stores per-sample prompts. It’s used to set different prompts for different samples. If it’s None, the prompt in parameter “prompt” is used. It’s None by default.

  • frame_field – the field name of video frames to generate caption. If frame_field is None, extract frames from the video field.

  • frame_sampling_method – sampling method of extracting frame videos from the videos. Should be one of [“all_keyframes”, “uniform”]. Only works when “frame_field” is None. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.

  • frame_num – the number of frames to be extracted uniformly from the video frames. Only works when “frame_sampling_method” is “uniform” or “frame_field” is given. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.

  • horizontal_flip – flip frame video horizontally (left to right).

  • vertical_flip – flip frame video vertically (top to bottom).

  • text_update_strategy – strategy to update the text field after caption generation. Can be one of [‘keep_origin’, ‘rewrite’]. ‘keep_origin’: keep the original text unchanged. ‘rewrite’: rewrite the text field with the generated captions concatenated by special tokens.

  • caption_field – the field name to save the generated captions.

  • legacy_split_by_text_token – Whether to split by special tokens (e.g. <__dj__video>) in the text field and read videos in order, or use the ‘videos’ or ‘frames’ field directly.

  • args – extra args

  • kwargs – extra args

process_batched(samples, rank=None, context=False)[source]#
Parameters:

samples

Returns:

Note

This is a batched_OP, whose input and output type are both list. Suppose there are $N$ input sample lists with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for ‘random_any’ and ‘similar_one’ mode, and $(1+M)Nb$ for ‘all’ mode.

class data_juicer.ops.mapper.VideoCaptioningFromVLMMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates video captions using a VLM that accepts videos as inputs.

This operator processes video samples to generate captions based on the provided video. It uses a VLM that accepts videos as inputs, such as ‘Qwen/Qwen3-VL-8B-Instruct’, to generate multiple caption candidates for each video. The number of generated captions and the strategy to keep or filter these candidates can be configured. The final output can include both the original sample and the generated captions, depending on the configuration.

__init__(hf_model: str = 'Qwen/Qwen3-VL-8B-Instruct', enable_vllm: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, model_params: Dict = None, sampling_params: Dict = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_model – VLM model name on huggingface to generate caption

  • enable_vllm – If true, use VLLM for loading hugging face or local llm.

  • caption_num – how many candidate captions to generate for each video

  • keep_candidate_mode –

    retain strategy for the generated $caption_num$ candidates.

    ‘random_any’: Retain a random one from the generated captions.

    ‘similar_one_simhash’: Retain the generated one that is most similar to the original caption.

    ‘all’: Retain all generated captions by concatenation.

Note

This is a batched_OP, whose input and output type are both list. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ mode, the number of total samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.

Parameters:
  • keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True by default.

  • prompt – a string prompt to guide the generation of the VLM for all samples globally. It’s None by default, which means using the DEFAULT_PROMPT.

  • prompt_key – the key name of fields in samples to store prompts for each sample. It’s used to set different prompts for different samples. If it’s None, the prompt in the “prompt” parameter is used. It’s None by default.

  • model_params – Parameters for initializing the model.

  • sampling_params – Extra parameters passed to the model call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.

  • args – extra args

  • kwargs – extra kwargs

process_batched(samples, rank=None, context=False)[source]#
Parameters:

samples

Returns:

Note

This is a batched_OP, whose input and output types are both lists. Suppose there are $N$ input sample lists, each with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for ‘random_any’ and ‘similar_one’ modes, and $(1+M)Nb$ for ‘all’ mode.
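
A minimal usage sketch of this operator; the batched sample schema (a dict of lists with ‘text’ and ‘videos’ columns) and the file path are assumptions for illustration:

    from data_juicer.ops.mapper import VideoCaptioningFromVLMMapper

    op = VideoCaptioningFromVLMMapper(
        caption_num=3,                # generate 3 candidate captions per video
        keep_candidate_mode='all',    # keep all candidates by concatenation
        keep_original_sample=True,    # also keep the original sample
    )
    # Assumed batched schema: a dict of lists with 'text' and 'videos' columns.
    samples = {'text': ['<__dj__video> a short clip'],
               'videos': [['/path/to/clip.mp4']]}
    out = op.process_batched(samples)  # total samples: (1+M)*N*b for 'all' mode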

class data_juicer.ops.mapper.VideoDepthEstimationMapper(*args, **kwargs)[source]#

Bases: Mapper

Perform depth estimation on the video.

__init__(video_depth_model_path: str = 'video_depth_anything_vitb.pth', point_cloud_dir_for_metric: str = '/home/runner/.cache/data_juicer/assets', max_res: int = 1280, torch_dtype: str = 'fp16', if_save_visualization: bool = False, save_visualization_dir: str = '/home/runner/.cache/data_juicer/assets', grayscale: bool = False, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • video_depth_model_path – The path to the Video-Depth-Anything model. If the model is a ‘metric’ model, the code will automatically switch to metric mode, and the user should input the path for storing point clouds.

  • point_cloud_dir_for_metric – The path for storing point clouds (for a ‘metric’ model).

  • max_res – The maximum resolution threshold for videos; videos exceeding this threshold will be resized.

  • torch_dtype – The floating point type used for model inference. Can be one of [‘fp32’, ‘fp16’]

  • if_save_visualization – Whether to save visualization results.

  • save_visualization_dir – The path for saving visualization results.

  • grayscale – If True, the colorful palette will not be applied.

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
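
A minimal usage sketch, assuming the default model checkpoint is available and that ‘sample’ is a sample dict referencing a video file:

    from data_juicer.ops.mapper import VideoDepthEstimationMapper

    op = VideoDepthEstimationMapper(
        torch_dtype='fp16',            # or 'fp32'
        if_save_visualization=True,    # also write visualization results
    )
    processed = op.process_single(sample)  # 'sample' assumed to reference a video file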

class data_juicer.ops.mapper.VideoExtractFramesMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to extract frames from video files according to specified methods.

Extracts frames from video files using either all keyframes or a uniform sampling method.

Supported output formats are: [“path”, “bytes”]. If format is “path”, the output is a list of lists, where each inner list contains the paths of the frames of a single video, e.g. [[video1_frame1_path, video1_frame2_path, …], [video2_frame1_path, video2_frame2_path, …], …] (in the order of the videos).

If format is “bytes”, the output is a list of lists, where each inner list contains the bytes of the frames of a single video, e.g. [[video1_byte1, video1_byte2, …], [video2_byte1, video2_byte2, …], …] (in the order of the videos).

  • Frame Sampling Methods:

  • “all_keyframes”: Extracts all keyframes from the video.

  • “uniform”: Extracts a specified number of frames uniformly from the video.

  • If duration is set, the video is segmented into multiple segments based on the duration, and frames are extracted from each segment.

  • The output directory for the frames can be specified when the output format is “path”; otherwise it should be left as None.

  • The field name in the sample’s metadata where the frame information is stored can be customized.

__init__(frame_sampling_method: str = 'all_keyframes', output_format: str = 'path', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, frame_dir: str = None, frame_key: str = None, frame_field: str = 'video_frames', legacy_split_by_text_token: bool = True, video_backend: str = 'av', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • frame_sampling_method – sampling method of extracting frames from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all keyframes (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. If “duration” > 0, frame_sampling_method acts on every segment. Default: “all_keyframes”.

  • output_format –

    The output format of the frames. Supported formats are: [“path”, “bytes”]. If format is “path”, the output is a list of lists, where each inner list contains the paths of the frames of a single video, e.g. [[video1_frame1_path, video1_frame2_path, …], [video2_frame1_path, video2_frame2_path, …], …] (in the order of the videos).

    If format is “bytes”, the output is a list of lists, where each inner list contains the bytes of the frames of a single video, e.g. [[video1_byte1, video1_byte2, …], [video2_byte1, video2_byte2, …], …] (in the order of the videos).

  • frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • frame_dir – Output directory to save extracted frames. If output_format is “path”, a directory must be specified.

  • frame_key – the name of the field to save generated frame info.

  • frame_field – the name of the field to save generated frame info.

  • legacy_split_by_text_token – Whether to split by special tokens (e.g. <__dj__video>) in the text field and read videos in order, or use the ‘videos’ or ‘frames’ field directly.

  • video_backend – video backend; can be one of [“ffmpeg”, “av”].

  • args – extra args

  • kwargs – extra args

extract_frames(video)[source]#
process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
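
A minimal usage sketch; the frame_dir path is hypothetical, and per the docs above it must be set when output_format is “path”:

    from data_juicer.ops.mapper import VideoExtractFramesMapper

    op = VideoExtractFramesMapper(
        frame_sampling_method='uniform',
        frame_num=5,             # first, last, and 3 evenly spaced frames
        output_format='path',
        frame_dir='./frames',    # hypothetical directory; required for 'path' output
    )
    processed = op.process_single(sample)
    # Frame info lands in the configured frame_field ('video_frames' by default).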

class data_juicer.ops.mapper.VideoFFmpegWrappedMapper(*args, **kwargs)[source]#

Bases: Mapper

Wraps FFmpeg video filters for processing video files in a dataset.

This operator applies a specified FFmpeg video filter to each video file in the dataset. It supports passing keyword arguments to the filter and global arguments to the FFmpeg command line. The processed videos are saved in a specified directory or the same directory as the input files. If no filter name is provided, the videos remain unmodified. The operator updates the source file paths in the dataset to reflect any changes.

__init__(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • filter_name – ffmpeg video filter name.

  • filter_kwargs – keyword-arguments passed to ffmpeg filter.

  • global_args – list-arguments passed to ffmpeg command-line.

  • capture_stderr – whether to capture stderr.

  • overwrite_output – whether to overwrite output file.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
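
A minimal usage sketch applying ffmpeg’s standard ‘scale’ filter; the save_dir value is a hypothetical path:

    from data_juicer.ops.mapper import VideoFFmpegWrappedMapper

    op = VideoFFmpegWrappedMapper(
        filter_name='scale',                           # standard ffmpeg scaling filter
        filter_kwargs={'width': 1280, 'height': 720},
        save_dir='./resized',                          # hypothetical output directory
    )
    processed = op.process_single(sample)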

class data_juicer.ops.mapper.VideoHandReconstructionHaworMapper(*args, **kwargs)[source]#

Bases: Mapper

Use HaWoR and MoGe-2 for hand reconstruction.

__init__(hawor_model_path: str = 'hawor.ckpt', hawor_config_path: str = 'model_config.yaml', hawor_detector_path: str = 'detector.pt', moge_model_path: str = 'Ruicheng/moge-2-vitl', mano_right_path: str = 'path_to_mano_right_pkl', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, thresh: float = 0.2, tag_field_name: str = 'hand_reconstruction_hawor_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_output_moge_info: bool = False, moge_output_info_dir: str = '/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hawor_model_path – The path to ‘hawor.ckpt’ for the HaWoR model.

  • hawor_config_path – The path to ‘model_config.yaml’ for the HaWoR model.

  • hawor_detector_path – The path to ‘detector.pt’ for the HaWoR model.

  • moge_model_path – The path to the Moge-2 model.

  • mano_right_path – The path to ‘MANO_RIGHT.pkl’. Users need to download this file from https://mano.is.tue.mpg.de/ and comply with the MANO license.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • thresh – Confidence threshold for hand detection.

  • tag_field_name – The field name to store the tags. It’s “hand_reconstruction_hawor_tags” in default.

  • frame_dir – Output directory to save extracted frames.

  • if_output_moge_info – Whether to save the results from MoGe-2 to a JSON file.

  • moge_output_info_dir – Output directory for saving camera parameters.

  • args – extra args

  • kwargs – extra args

detect_track(imgfiles: list, hand_det_model, thresh: float = 0.5) tuple[source]#

Detects and tracks hands across a sequence of images using YOLO.

Parameters:
  • imgfiles (list) – List of image frames.

  • hand_det_model (YOLO) – The initialized YOLO hand detection model.

  • thresh (float) – Confidence threshold for detection.

Returns:

(list of boxes (unused in original logic), dict of tracks)

Return type:

tuple

hawor_motion_estimation(imgfiles: list, tracks: dict, model, img_focal: float, img_paths: list, single_image: bool = False) dict[source]#

Performs HAWOR 3D hand reconstruction on detected and tracked hand regions.

Parameters:
  • imgfiles (list) – List of image frames.

  • tracks (dict) – Dictionary mapping track ID to a list of detection objects.

  • model (HAWOR) – The initialized HAWOR model.

  • img_focal (float) – Camera focal length.

  • img_paths (list) – List of image paths.

  • single_image (bool) – Flag for single-image processing mode.

Returns:

Reconstructed parameters (‘left’ and ‘right’ hand results).

Return type:

dict

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoHandReconstructionMapper(*args, **kwargs)[source]#

Bases: Mapper

Use the WiLoR model for hand localization and reconstruction.

__init__(wilor_model_path: str = 'wilor_final.ckpt', wilor_model_config: str = 'model_config.yaml', detector_model_path: str = 'detector.pt', mano_right_path: str = 'path_to_mano_right_pkl', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, batch_size: int = 16, tag_field_name: str = 'hand_reconstruction_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_save_visualization: bool = True, save_visualization_dir: str = '/home/runner/.cache/data_juicer/assets', if_save_mesh: bool = True, save_mesh_dir: str = '/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • wilor_model_path – The path to ‘wilor_final.ckpt’.

  • wilor_model_config – The path to ‘model_config.yaml’ for the WiLOR model.

  • detector_model_path – The path to ‘detector.pt’ for the WiLOR model.

  • mano_right_path – The path to ‘MANO_RIGHT.pkl’. Users need to download this file from https://mano.is.tue.mpg.de/ and comply with the MANO license.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • batch_size – Batch size for simultaneous hand inference.

  • tag_field_name – The field name to store the tags. It’s “hand_reconstruction_tags” in default.

  • frame_dir – Output directory to save extracted frames.

  • if_save_visualization – Whether to save overlay images.

  • save_visualization_dir – The path for saving overlay images.

  • if_save_mesh – Whether to save images of the hand mesh.

  • save_mesh_dir – The path for saving images of the hand mesh.

  • args – extra args

  • kwargs – extra args

project_full_img(points, cam_trans, focal_length, img_res)[source]#
process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoFaceBlurMapper(*args, **kwargs)[source]#

Bases: Mapper

Mapper to blur faces detected in videos.

This operator uses an OpenCV classifier for face detection and applies a specified blur type to the detected faces. The default classifier is ‘haarcascade_frontalface_alt.xml’. Supported blur types include ‘mean’, ‘box’, and ‘gaussian’. The radius of the blur kernel can be adjusted. If a save directory is not provided, the processed videos will be saved in the same directory as the input files. The DJ_PRODUCED_DATA_DIR environment variable can also be used to specify the save directory.

__init__(cv_classifier: str = '', blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • cv_classifier – OpenCV classifier path for face detection. By default, we will use ‘haarcascade_frontalface_alt.xml’.

  • blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].

  • radius – Radius of blur kernel.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
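
A minimal usage sketch relying on the bundled default classifier; ‘sample’ is assumed to reference a video file:

    from data_juicer.ops.mapper import VideoFaceBlurMapper

    op = VideoFaceBlurMapper(
        blur_type='gaussian',  # one of 'mean', 'box', 'gaussian'
        radius=5,              # larger radius -> stronger blur
    )
    processed = op.process_single(sample)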

class data_juicer.ops.mapper.VideoObjectSegmentingMapper(*args, **kwargs)[source]#

Bases: Mapper

Text-guided semantic segmentation of valid objects throughout the video (YOLOE + SAM2).

__init__(sam2_hf_model: str = 'facebook/sam2.1-hiera-tiny', yoloe_path: str = 'yoloe-11l-seg.pt', yoloe_conf: float = 0.5, torch_dtype: str = 'bf16', if_binarize: bool = True, if_save_visualization: bool = False, save_visualization_dir: str = '/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • sam2_hf_model – Hugging Face model ID of SAM2.

  • yoloe_path – The path to the YOLOE model.

  • yoloe_conf – Confidence threshold for YOLOE object detection.

  • torch_dtype – The floating point type used for model inference. Can be one of [‘fp32’, ‘fp16’, ‘bf16’].

  • if_binarize – Whether the final mask requires binarization. If ‘if_save_visualization’ is set to True, ‘if_binarize’ will automatically be adjusted to True.

  • if_save_visualization – Whether to save visualization results.

  • save_visualization_dir – The path for saving visualization results.

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoRemoveWatermarkMapper(*args, **kwargs)[source]#

Bases: Mapper

Remove watermarks from videos based on specified regions.

This operator removes watermarks from video frames by detecting and masking the watermark areas. It supports two detection methods: ‘pixel_value’ and ‘pixel_diversity’. The regions of interest (ROIs) for watermark detection can be specified as either pixel coordinates or ratios of the frame dimensions. The operator extracts a set number of frames uniformly from the video to detect watermark pixels. A pixel is considered part of a watermark if it meets the detection criteria in a minimum number of frames. The cleaned video is saved in the specified directory or the same directory as the input file if no save directory is provided.

__init__(roi_strings: List[str] = ['0,0,0.1,0.1'], roi_type: str = 'ratio', roi_key: str | None = None, frame_num: Annotated[int, Gt(gt=0)] = 10, min_frame_threshold: Annotated[int, Gt(gt=0)] = 7, detection_method: str = 'pixel_value', save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • roi_strings – a given list of regions the watermarks locate. The format of each can be “x1, y1, x2, y2”, “(x1, y1, x2, y2)”, or “[x1, y1, x2, y2]”.

  • roi_type – the roi string type. When the type is ‘pixel’, (x1, y1), (x2, y2) are the locations of pixels in the top left corner and the bottom right corner respectively. If the roi_type is ‘ratio’, the coordinates are normalized by widths and heights.

  • roi_key – the key name of fields in samples to store roi_strings for each sample. It’s used to set different ROIs for different samples. If it’s None, the ROIs in the “roi_strings” parameter are used. It’s None by default.

  • frame_num – the number of frames to be extracted uniformly from the video to detect the pixels of watermark.

  • min_frame_threshold – a coordinate is considered the location of a watermark pixel when it meets the detection criteria in no fewer than min_frame_threshold frames.

  • detection_method – the method to detect the pixels of the watermark. If it is ‘pixel_value’, we consider the distribution of pixel values in each frame. If it is ‘pixel_diversity’, we consider the pixel diversity across different frames. In ‘pixel_diversity’ mode, min_frame_threshold is ignored and frame_num must be greater than 1.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
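
A minimal usage sketch that masks a watermark in the bottom-right corner of the frame; the ROI values are illustrative:

    from data_juicer.ops.mapper import VideoRemoveWatermarkMapper

    op = VideoRemoveWatermarkMapper(
        roi_strings=['0.9,0.9,1.0,1.0'],  # bottom-right corner, as frame-size ratios
        roi_type='ratio',
        frame_num=10,                     # sample 10 frames for detection
        min_frame_threshold=7,            # a pixel must match in at least 7 of them
        detection_method='pixel_value',
    )
    processed = op.process_single(sample)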

class data_juicer.ops.mapper.VideoResizeAspectRatioMapper(*args, **kwargs)[source]#

Bases: Mapper

Resizes videos to fit within a specified aspect ratio range. This operator adjusts the dimensions of videos to ensure their aspect ratios fall within a defined range. It can either increase or decrease the video dimensions based on the specified strategy. The aspect ratio is calculated as width divided by height. If a video’s aspect ratio is outside the given range, it will be resized to match the closest boundary (either the minimum or maximum ratio). The min_ratio and max_ratio should be provided as strings in the format “9:21” or “9/21”. The resizing process uses the ffmpeg library to handle the actual video scaling. Videos that do not need resizing are left unchanged. The operator supports saving the modified videos to a specified directory or the same directory as the input files.

STRATEGY = ['decrease', 'increase']#
__init__(min_ratio: str = '9/21', max_ratio: str = '21/9', strategy: str = 'increase', save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • min_ratio – The minimum aspect ratio to enforce; videos with an aspect ratio below min_ratio will be resized to match this minimum ratio. The ratio should be provided as a string in the format “9:21” or “9/21”.

  • max_ratio – The maximum aspect ratio to enforce; videos with an aspect ratio above max_ratio will be resized to match this maximum ratio. The ratio should be provided as a string in the format “21:9” or “21/9”.

  • strategy – The resizing strategy to apply when adjusting the video dimensions. It can be either ‘decrease’ to reduce the dimension or ‘increase’ to enlarge it. Accepted values are [‘decrease’, ‘increase’].

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
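
A minimal usage sketch clamping aspect ratios to the 3:4–4:3 band; the ratio values are illustrative:

    from data_juicer.ops.mapper import VideoResizeAspectRatioMapper

    op = VideoResizeAspectRatioMapper(
        min_ratio='3/4',      # width/height below 3:4 triggers a resize
        max_ratio='4/3',
        strategy='increase',  # enlarge a dimension to reach the boundary ratio
    )
    processed = op.process_single(sample)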

class data_juicer.ops.mapper.VideoResizeResolutionMapper(*args, **kwargs)[source]#

Bases: Mapper

Resizes video resolution based on specified width and height constraints.

This operator resizes videos to fit within the provided minimum and maximum width and height limits. It can optionally maintain the original aspect ratio by adjusting the dimensions accordingly. The resized videos are saved in the specified directory or the same directory as the input if no save directory is provided. The key metric for resizing is the video’s width and height, which are adjusted to meet the constraints while maintaining the aspect ratio if configured. The force_divisible_by parameter ensures that the output dimensions are divisible by a specified integer, which must be a positive even number when used with aspect ratio adjustments.

__init__(min_width: int = 1, max_width: int = 9223372036854775807, min_height: int = 1, max_height: int = 9223372036854775807, force_original_aspect_ratio: str = 'disable', force_divisible_by: Annotated[int, Gt(gt=0)] = 2, save_dir: str = None, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • min_width – Videos with width less than ‘min_width’ will be mapped to videos with equal or bigger width.

  • max_width – Videos with width more than ‘max_width’ will be mapped to videos with equal or smaller width.

  • min_height – Videos with height less than ‘min_height’ will be mapped to videos with equal or bigger height.

  • max_height – Videos with height more than ‘max_height’ will be mapped to videos with equal or smaller height.

  • force_original_aspect_ratio – Enable decreasing or increasing output video width or height if necessary to keep the original aspect ratio, including [‘disable’, ‘decrease’, ‘increase’].

  • force_divisible_by – Ensures that both output dimensions, width and height, are divisible by the given integer when used together with force_original_aspect_ratio; must be a positive even number.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • args – extra args

  • kwargs – extra args

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoSplitByDurationMapper(*args, **kwargs)[source]#

Bases: Mapper

Splits videos into segments based on a specified duration.

This operator splits each video in the dataset into smaller segments, each with a fixed duration. The last segment is discarded if its duration is less than the specified minimum last split duration. The original sample can be kept or removed based on the keep_original_sample parameter. The generated video files are saved in the specified directory or, if not provided, in the same directory as the input files. The key metric for this operation is the duration of each segment, measured in seconds.

  • Splits videos into segments of a specified duration.

  • Discards the last segment if it is shorter than the minimum allowed duration.

  • Keeps or removes the original sample based on the keep_original_sample parameter.

  • Saves the generated video files in the specified directory or the input file’s directory.

  • Uses the duration in seconds to determine the segment boundaries.

__init__(split_duration: float = 10, min_last_split_duration: float = 0, keep_original_sample: bool = True, save_dir: str = None, video_backend: str = 'ffmpeg', ffmpeg_extra_args: str = '', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • split_duration – duration of each video split in seconds.

  • min_last_split_duration – The minimum allowable duration in seconds for the last video split. If the duration of the last split is less than this value, it will be discarded.

  • keep_original_sample – whether to keep the original sample. If it’s set to False, only the cut samples will remain in the final dataset and the original sample will be removed. It’s True by default.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • video_backend – video backend; can be one of [“ffmpeg”, “av”].

  • ffmpeg_extra_args – Extra ffmpeg args for splitting video, only valid when video_backend is ffmpeg.

  • args – extra args

  • kwargs – extra args

split_videos_by_duration(video_key, container)[source]#
process_batched(samples)[source]#
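
A minimal usage sketch cutting videos into 10-second segments; the batched dict-of-lists input ‘samples’ is an assumed schema:

    from data_juicer.ops.mapper import VideoSplitByDurationMapper

    op = VideoSplitByDurationMapper(
        split_duration=10,            # 10-second segments
        min_last_split_duration=2,    # discard a trailing segment shorter than 2 s
        keep_original_sample=False,   # keep only the cut segments
    )
    out = op.process_batched(samples)  # batched op: dict of lists in and out
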
class data_juicer.ops.mapper.VideoSplitByKeyFrameMapper(*args, **kwargs)[source]#

Bases: Mapper

Splits a video into segments based on key frames.

This operator processes video data by splitting it into multiple segments at key frame boundaries. It uses the key frames to determine where to make the splits. The original sample can be kept or discarded based on the keep_original_sample parameter. If save_dir is specified, the split video files will be saved in that directory; otherwise, they will be saved in the same directory as the input files. The operator processes each video in the sample and updates the sample with the new video keys and text placeholders. The Fields.source_file field is updated to reflect the new video segments. This operator works in batch mode, processing multiple samples at once.

__init__(keep_original_sample: bool = True, save_dir: str = None, video_backend: str = 'av', ffmpeg_extra_args: str = '', output_format: str = 'path', save_field: str = None, legacy_split_by_text_token: bool = True, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • keep_original_sample – whether to keep the original sample. If it’s set to False, only the split samples will remain in the final dataset and the original sample will be removed. It’s True by default.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • video_backend – video backend; can be one of [“ffmpeg”, “av”].

  • ffmpeg_extra_args – Extra ffmpeg args for splitting video, only valid when video_backend is ffmpeg.

  • output_format –

    The output format of the videos. Supported formats are: [“path”, “bytes”]. If format is “path”, the output is a list of lists, where each inner list contains the paths of the split videos, e.g. [[video1_split1_path, video1_split2_path, …], [video2_split1_path, video2_split2_path, …], …] (in the order of the videos).

    If format is “bytes”, the output is a list of lists, where each inner list contains the bytes of the split videos, e.g. [[video1_split1_byte, video1_split2_byte, …], [video2_split1_byte, video2_split2_byte, …], …] (in the order of the videos).

  • save_field – The new field name to save generated video files path. If not specified, will overwrite the original video field.

  • legacy_split_by_text_token – Whether to split by special tokens (e.g. <__dj__video>) in the text field and read videos in order, or use the ‘videos’ field directly.

  • args – extra args

  • kwargs – extra args

get_split_key_frame(container, video_key: str = None)[source]#
process_batched(samples)[source]#
class data_juicer.ops.mapper.VideoSplitBySceneMapper(*args, **kwargs)[source]#

Bases: Mapper

Splits videos into scene clips based on detected scene changes.

This operator uses a specified scene detector to identify and split video scenes. It supports three types of detectors: ContentDetector, ThresholdDetector, and AdaptiveDetector. The operator processes each video in the sample, detects scenes, and splits the video into individual clips. The minimum length of a scene can be set, and progress can be shown during processing. The resulting clips are saved in the specified directory or the same directory as the input files if no save directory is provided. The operator also updates the text field in the sample to reflect the new video clips. If a video does not contain any scenes, it remains unchanged.

available_detectors = {'AdaptiveDetector': ['window_width', 'min_content_val', 'weights', 'luma_only', 'kernel_size', 'video_manager', 'min_delta_hsv'], 'ContentDetector': ['weights', 'luma_only', 'kernel_size'], 'ThresholdDetector': ['fade_bias', 'add_final_scene', 'method', 'block_size']}#
__init__(detector: str = 'ContentDetector', threshold: Annotated[float, Ge(ge=0)] = 27.0, min_scene_len: Annotated[int, Ge(ge=0)] = 15, show_progress: bool = False, save_dir: str = None, save_field: str = None, ffmpeg_extra_args: str = '-movflags frag_keyframe+empty_moov', output_format: str = 'path', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • detector – Algorithm from scenedetect.detectors. Should be one of [‘ContentDetector’, ‘ThresholdDetector’, ‘AdaptiveDetector’].

  • threshold – Threshold passed to the detector.

  • min_scene_len – Minimum length of any scene.

  • show_progress – Whether to show progress from scenedetect.

  • save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.

  • save_field – The new field name to save generated video files path. If not specified, will overwrite the original video field.

  • ffmpeg_extra_args – Extra ffmpeg args for splitting video.

  • output_format –

    The output format of the videos. Supported formats are: [“path”, “bytes”]. If format is “path”, the output is a list of lists, where each inner list contains the paths of the split videos, e.g. [[video1_split1_path, video1_split2_path, …], [video2_split1_path, video2_split2_path, …], …] (in the order of the videos).

    If format is “bytes”, the output is a list of lists, where each inner list contains the bytes of the split videos, e.g. [[video1_split1_byte, video1_split2_byte, …], [video2_split1_byte, video2_split2_byte, …], …] (in the order of the videos).

  • args – extra args

  • kwargs – extra args

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
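
A minimal usage sketch using the default content-based detector; the threshold and scene-length values simply restate the defaults above:

    from data_juicer.ops.mapper import VideoSplitBySceneMapper

    op = VideoSplitBySceneMapper(
        detector='ContentDetector',
        threshold=27.0,     # passed through to the scenedetect detector
        min_scene_len=15,   # minimum scene length enforced by the detector
    )
    processed = op.process_single(sample)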

class data_juicer.ops.mapper.VideoTaggingFromAudioMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates video tags from audio streams using the Audio Spectrogram Transformer.

This operator extracts audio streams from videos and uses a Hugging Face Audio Spectrogram Transformer (AST) model to generate tags. The tags are stored in the specified metadata field, defaulting to ‘video_audio_tags’. If no valid audio stream is found, the tag is set to ‘EMPTY’. The operator resamples audio to match the model’s required sampling rate if necessary. The tags are inferred based on the highest logit value from the model’s output. If the tags are already present in the sample, the operator skips processing for that sample.

__init__(hf_ast: str = 'MIT/ast-finetuned-audioset-10-10-0.4593', trust_remote_code: bool = False, tag_field_name: str = 'video_audio_tags', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • hf_ast – path to the HF model to tag from audios.

  • trust_remote_code – whether to trust the remote code of HF models

  • tag_field_name – the field name to store the tags. It’s “video_audio_tags” in default.

  • args – extra args

  • kwargs – extra args

process_single(sample, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoTaggingFromFramesMapper(*args, **kwargs)[source]#

Bases: Mapper

Generates video tags from frames extracted from videos.

This operator extracts frames from videos and generates tags based on the content of these frames. The frame extraction method can be either “all_keyframes” or “uniform”. For “all_keyframes”, all keyframes are extracted, while for “uniform”, a specified number of frames are extracted uniformly across the video. The tags are generated using a pre-trained model and stored in the specified field name. If the tags are already present in the sample, the operator skips processing.

Important notes:

  • Uses a Hugging Face tokenizer and a pre-trained model for tag generation.

  • If no video is present in the sample, an empty tag array is stored.

  • Frame tensors are processed to generate tags, which are then sorted by frequency and stored.

__init__(frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, tag_field_name: str = 'video_frame_tags', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • frame_sampling_method – sampling method of extracting frame images from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all keyframes (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.

  • frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.

  • tag_field_name – the field name to store the tags. It’s “video_frame_tags” in default.

  • args – extra args

  • kwargs – extra args

process_single(sample, rank=None, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample
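
A minimal usage sketch; where the tags end up follows the description above:

    from data_juicer.ops.mapper import VideoTaggingFromFramesMapper

    op = VideoTaggingFromFramesMapper(
        frame_sampling_method='uniform',
        frame_num=3,
        tag_field_name='video_frame_tags',  # the default field name
    )
    processed = op.process_single(sample)
    # Tags, sorted by frequency, are stored under 'video_frame_tags'.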

class data_juicer.ops.mapper.VideoUndistortMapper(*args, **kwargs)[source]#

Bases: Mapper

Undistort raw videos with corresponding camera intrinsics and distortion coefficients.

__init__(output_video_dir: str = '/home/runner/.cache/data_juicer/assets', tag_field_name: str = 'video_undistortion_tags', batch_size_each_video: int = 1000, crf: int = 22, *args, **kwargs)[source]#

Initialization method.

Parameters:
  • output_video_dir – Output directory to save undistorted videos.

  • tag_field_name – The field name to store the tags. It’s “video_undistortion_tags” in default.

  • batch_size_each_video – Number of frames to process and save per temporary TS file batch.

  • crf – Constant Rate Factor (CRF) for FFmpeg encoding quality.

  • args – extra args

  • kwargs – extra args

concatenate_ts_files(folder, video_name, batch_counts)[source]#

Concatenate batch TS files into final mp4.

create_ffmpeg_writer(output_path, width, height, fps, crf)[source]#

Spawn an ffmpeg async encoding process for writing raw frames.

process_single(sample, context=False)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.VideoWholeBodyPoseEstimationMapper(*args, **kwargs)[source]#

Bases: Mapper

Input a video containing people, and use the DWPose model to extract the body, hand, feet, and face keypoints of the human subjects in the video, i.e., 2D Whole-body Pose Estimation.

__init__(onnx_det_model: str = 'yolox_l.onnx', onnx_pose_model: str = 'dw-ll_ucoco_384.onnx', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, tag_field_name: str = 'pose_estimation_tags', frame_dir: str = '/home/runner/.cache/data_juicer/assets', if_save_visualization: bool = False, save_visualization_dir: str = '/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]#

Initialization method.

Parameters:
  • onnx_det_model – The path to ‘yolox_l.onnx’.

  • onnx_pose_model – The path to ‘dw-ll_ucoco_384.onnx’.

  • frame_num – The number of frames to be extracted uniformly from the video. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.

  • duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.

  • tag_field_name – The field name to store the tags. It’s “pose_estimation_tags” in default.

  • frame_dir – Output directory to save extracted frames.

  • if_save_visualization – Whether to save visualization results.

  • save_visualization_dir – The path for saving visualization results.

  • args – extra args

  • kwargs – extra args

process_single(sample=None, rank=None)[source]#

For sample level, sample –> sample

Parameters:

sample – sample to process

Returns:

processed sample

class data_juicer.ops.mapper.WhitespaceNormalizationMapper(*args, **kwargs)[source]#

Bases: Mapper

Normalizes various types of whitespace characters to standard spaces in text samples.

This mapper converts all non-standard whitespace characters, such as tabs and newlines, to the standard space character (‘ ’, 0x20). It also trims leading and trailing whitespace from the text. This ensures consistent spacing across all text samples, improving readability and consistency. The normalization process is based on a comprehensive list of whitespace characters, which can be found at https://en.wikipedia.org/wiki/Whitespace_character.

__init__(*args, **kwargs)[source]#

Initialization method.

Parameters:
  • args – extra args

  • kwargs – extra args

process_batched(samples)[source]#
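
A minimal usage sketch; the dict-of-lists batched schema is an assumption for illustration:

    from data_juicer.ops.mapper import WhitespaceNormalizationMapper

    op = WhitespaceNormalizationMapper()
    # Assumed batched schema: a dict of lists with a 'text' column.
    samples = {'text': ['hello\tworld\u00a0now\n']}  # tab, no-break space, newline
    out = op.process_batched(samples)
    # Expected: non-standard whitespace becomes ' ' and the ends are trimmed,
    # i.e. 'hello world now'.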