Data Processor

class piepline.data_processor.data_processor.DataProcessor(model: torch.nn.modules.module.Module, device: torch.device = None)[source]

DataProcessor manages the model, data processing and device selection

Args:
model (Module): model that will be used to process data
device (torch.device): device on which data will be processed
model() → torch.nn.modules.module.Module[source]

Get the current model

predict(data: torch.Tensor) → object[source]

Run prediction on the given data

Parameters:data – data as a torch.Tensor or a dict with key 'data'
Returns:processed output
Return type:the model output type
set_data_preprocess(data_preprocess: callable) → piepline.data_processor.data_processor.DataProcessor[source]

Set a callback that takes the DataLoader output and returns preprocessed data. For example, it may be used to move data to the device.

Default mode:


_pass_data_to_device()

Args:
data_preprocess (callable): preprocess callable. This callback must accept one parameter: the dataset output
Returns:
self object

Examples:

from piepline.utils import dict_recursive_bypass
data_processor.set_data_preprocess(lambda data: dict_recursive_bypass(data, lambda v: v.cuda()))
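piepline's dict_recursive_bypass is not documented here; judging by its use above, it applies a function to every value in a (possibly nested) dict. A minimal sketch of such a helper, under that assumption (the name `dict_recursive_bypass_sketch` is hypothetical, not the library's code):

```python
def dict_recursive_bypass_sketch(data, fn):
    # Hypothetical stand-in for piepline.utils.dict_recursive_bypass:
    # apply fn to every leaf value, recursing into nested dicts.
    if isinstance(data, dict):
        return {k: dict_recursive_bypass_sketch(v, fn) for k, v in data.items()}
    return fn(data)

# With fn = lambda v: v.cuda(), every tensor in a nested dict would be
# moved to the GPU; here a plain increment stands in for that.
result = dict_recursive_bypass_sketch({'a': 1, 'b': {'c': 2}}, lambda v: v + 1)
```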
set_pick_model_input(pick_model_input: callable) → piepline.data_processor.data_processor.DataProcessor[source]

Set a callback that takes the DataLoader output and returns the model input.

Default mode:


lambda data: data['data']

Args:
pick_model_input (callable): pick model input callable. This callback must accept one parameter: the dataset output
Returns:
self object

Examples:

data_processor.set_pick_model_input(lambda data: data['data'])
data_processor.set_pick_model_input(lambda data: data[0])
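Taken together, the two setters above form a small callback pipeline: pick the model input from the DataLoader output, preprocess it, then feed the model. A hypothetical sketch of that flow (class and attribute names are assumptions for illustration, not piepline's actual implementation):

```python
class DataProcessorSketch:
    def __init__(self, model):
        self._model = model
        self._pick_model_input = lambda data: data['data']  # documented default
        self._data_preprocess = lambda data: data           # identity stand-in

    def set_pick_model_input(self, fn):
        self._pick_model_input = fn
        return self  # returning self lets the setters be chained

    def set_data_preprocess(self, fn):
        self._data_preprocess = fn
        return self

    def predict(self, data):
        # pick the input from the DataLoader output, preprocess it, run the model
        return self._model(self._data_preprocess(self._pick_model_input(data)))

dp = DataProcessorSketch(model=lambda x: x * 2)
dp.set_pick_model_input(lambda data: data[0]).set_data_preprocess(lambda v: v + 1)
out = dp.predict([3, 'target'])  # pick -> 3, preprocess -> 4, model -> 8
```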
class piepline.data_processor.data_processor.TrainDataProcessor(train_config: piepline.train_config.train_config.BaseTrainConfig, device: torch.device = None)[source]

TrainDataProcessor does everything DataProcessor does, but also drives the training process.

Parameters:train_config – train config
exception TDPException(msg)[source]
get_lr() → float[source]

Get the learning rate from the optimizer

get_state() → {}[source]

Get model and optimizer state dicts

Returns:dict with keys [weights, optimizer]
predict(data, is_train=False) → torch.Tensor[source]

Run prediction on the given data. If is_train is True, this operation computes gradients. If is_train is False, the model is run with model.eval() and under torch.no_grad()

Parameters:
  • data – data as a dict
  • is_train – whether the data processor should train on the data or just predict
Returns:

processed output

Return type:

model return type
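The is_train switch described above can be sketched as follows. This is a hypothetical re-implementation of the documented behavior (train mode tracks gradients; otherwise model.eval() plus torch.no_grad()), not piepline's actual code:

```python
import torch
import torch.nn as nn

def predict_sketch(model: nn.Module, data: torch.Tensor, is_train: bool = False):
    if is_train:
        model.train()          # training mode; gradients are tracked
        return model(data)
    model.eval()               # eval mode (affects dropout, batch norm, ...)
    with torch.no_grad():      # no gradient bookkeeping
        return model(data)

model = nn.Linear(3, 1)
x = torch.randn(2, 3)
out_train = predict_sketch(model, x, is_train=True)
out_eval = predict_sketch(model, x, is_train=False)
```

The output of the eval path carries no gradient history, so it cannot be backpropagated through; the train path can.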

process_batch(batch: {}, is_train: bool) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor][source]

Process one batch of data

Args:
batch (dict): contains 'data' and 'target' keys. The value for each key must be a torch.Tensor or a dict
is_train (bool): whether the batch is processed for training
Returns:
tuple of torch.Tensor losses, predictions and targets, each with shape (N, …) where N is the batch size
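A toy sketch of the contract above, using plain Python stand-ins for the model and loss function (hypothetical; the real method also handles train/eval mode and device placement):

```python
def process_batch_sketch(model, loss_fn, batch, is_train):
    # batch carries 'data' and 'target' keys, as documented above
    data, target = batch['data'], batch['target']
    predicts = model(data)              # forward pass
    losses = loss_fn(predicts, target)  # per-sample losses, length N
    return losses, predicts, target

# toy stand-ins: a "model" that doubles and an absolute-error "loss"
model = lambda xs: [2 * x for x in xs]
loss_fn = lambda ps, ts: [abs(p - t) for p, t in zip(ps, ts)]
losses, predicts, targets = process_batch_sketch(
    model, loss_fn, {'data': [1, 2], 'target': [2, 5]}, is_train=True)
```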
save_state(path: str) → None[source]

Save the state of the optimizer and the number of performed epochs

set_pick_target(pick_target: callable) → piepline.data_processor.data_processor.DataProcessor[source]

Set a callback that takes the DataLoader output and returns the target.

Default mode:


lambda data: data['target']

Args:
pick_target (callable): pick target callable. This callback must accept one parameter: the dataset output
Returns:
self object

Examples:

data_processor.set_pick_target(lambda data: data['target'])
data_processor.set_pick_target(lambda data: data[1])
set_target_preprocess(target_preprocess: callable) → piepline.data_processor.data_processor.DataProcessor[source]

Set a callback that takes the DataLoader output and returns the preprocessed target. For example, it may be used to move the target to the device.

Default mode:


_pass_target_to_device()

Args:
target_preprocess (callable): preprocess callable. This callback must accept one parameter: the dataset output
Returns:
self object

Examples:

from piepline.utils import dict_recursive_bypass
data_processor.set_target_preprocess(lambda target: dict_recursive_bypass(target, lambda v: v.cuda()))
update_lr(lr: float) → None[source]

Update the learning rate directly in the optimizer

Parameters:lr – target learning rate
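PyTorch optimizers expose their hyperparameters through optimizer.param_groups, so updating the rate "directly in the optimizer" plausibly means writing 'lr' into each group. A sketch under that assumption, with a stub optimizer standing in for torch.optim (function and class names here are hypothetical):

```python
def update_lr_sketch(optimizer, lr: float) -> None:
    # write the new rate into every parameter group;
    # it takes effect on the optimizer's next step()
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

def get_lr_sketch(optimizer) -> float:
    # read the rate back from the first group
    return optimizer.param_groups[0]['lr']

class _StubOptimizer:
    # minimal stand-in exposing the same param_groups structure
    def __init__(self):
        self.param_groups = [{'lr': 0.1}, {'lr': 0.1}]

opt = _StubOptimizer()
update_lr_sketch(opt, 0.001)
```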

Model