client.clm_train_cloud(
    self,
    job_name: str,
    model_name: str,
    dataset_name: str = None,
    hf_token: str = '',
    dataset_config_name: str = None,
    data_from_hf: bool = True,
    do_split: bool = True,
    split_ratio: float = 0.2,
    use_peft: bool = False,
    lora_config: LoraConfig = None,
    train_args: TrainingArguments = None,
    data: dict = {},
    wandb_config: wandbConfig = None,
    use_ddp: bool = False,
    use_zero: bool = True,
    prompt_config: PromptConfig = None
)
job_name
string
required

The name of the training job.

model_name
string
required

The name or path of the pre-trained model to use.

dataset_name
string

The name of the dataset to use for training (a Hugging Face Hub dataset ID when data_from_hf is True). Defaults to None.

hf_token
string

The Hugging Face token required for accessing private datasets or models. Defaults to an empty string.

dataset_config_name
string

The configuration name of the dataset, if applicable. Defaults to None.

data_from_hf
boolean

Whether to load the dataset from the Hugging Face Hub. Defaults to True.

do_split
boolean

Whether to split the dataset into training and validation sets. Defaults to True.

split_ratio
float

The fraction of the dataset held out for validation. Defaults to 0.2.
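As an illustration of how split_ratio partitions the data (an editor's sketch of the typical behavior, not this library's internal code):

```python
# Sketch: a split_ratio of 0.2 holds out 20% of the examples
# for validation and trains on the remaining 80%.
examples = [f"example_{i}" for i in range(100)]

split_ratio = 0.2
n_val = int(len(examples) * split_ratio)

val_set, train_set = examples[:n_val], examples[n_val:]
print(len(train_set), len(val_set))  # 80 20
```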

use_peft
boolean

Whether to enable Parameter-Efficient Fine-Tuning (PEFT). Defaults to False.

lora_config
LoraConfig

The LoRA (Low-Rank Adaptation) configuration, used when use_peft is True. Defaults to None.

train_args
TrainingArguments

The training arguments to customize the training process. Defaults to None.

data
dict

A dictionary containing the training data. Defaults to an empty dictionary.

wandb_config
wandbConfig

The configuration for Weights & Biases (wandb) logging. Defaults to None.

use_ddp
boolean

Whether to enable Distributed Data Parallel (DDP) training. Defaults to False.

use_zero
boolean

Whether to enable ZeRO (Zero Redundancy Optimizer) memory optimization. Defaults to True.

prompt_config
PromptConfig

The configuration for prompts used in the training. Defaults to None.
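Putting the parameters together, a typical call might look like the following. The model and dataset identifiers are placeholders, and the construction of client is not covered by this reference:

```python
# Example keyword arguments for a cloud CLM training job; the model
# and dataset names are placeholder choices, not recommendations.
train_kwargs = dict(
    job_name="demo-clm-job",
    model_name="gpt2",
    dataset_name="wikitext",
    dataset_config_name="wikitext-2-raw-v1",
    data_from_hf=True,
    do_split=True,
    split_ratio=0.2,
    use_peft=False,
)

# client.clm_train_cloud(**train_kwargs)  # `client` setup is out of scope here
```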