Base Modules for PySkOptimize
- class pyskoptimize.base.MLEstimator(*, model: pyskoptimize.steps.SklearnTransformerModel)
This is the ML estimator with no feature engineering.
- property parameter_space: Dict[str, Union[skopt.space.space.Categorical, skopt.space.space.Integer, skopt.space.space.Real]]
The tuning parameter space
- Returns
The mapping from parameter names to skopt search dimensions
- property pipeline: sklearn.pipeline.Pipeline
The ML Pipeline
- Returns
The scikit-learn Pipeline
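Each estimator class exposes both a scikit-learn `Pipeline` and a matching search space, so the keys of `parameter_space` follow scikit-learn's `<step>__<param>` naming convention. A minimal stdlib-only sketch of that shape (the step name `model` and the plain-tuple values are illustrative stand-ins for skopt `Real`/`Categorical` dimension objects, not pyskoptimize's actual output):

```python
# Hypothetical sketch of a parameter_space dictionary. Keys address a step of
# the sklearn Pipeline returned by `.pipeline` via scikit-learn's
# "<step>__<param>" convention; values stand in for skopt dimensions.
parameter_space = {
    "model__alpha": (1e-6, 1e2, "log-uniform"),  # stand-in for skopt Real
    "model__fit_intercept": [True, False],       # stand-in for skopt Categorical
}

# Because every key names a pipeline step, the space can be handed directly
# to a search object built over that same pipeline.
for name in parameter_space:
    step, _, param = name.partition("__")
    print(step, param)
```

This pairing of pipeline and namespaced search space is what lets `MLOptimizer` tune any of the estimator variants without special-casing their structure.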
- class pyskoptimize.base.MLEstimatorWithFeaturePostProcess(*, postProcess: pyskoptimize.steps.PostProcessingFeaturePodModel, model: pyskoptimize.steps.SklearnTransformerModel)
This is the ML estimator with post-processing feature engineering, assuming the features have already been processed.
- property parameter_space: Dict[str, Union[skopt.space.space.Categorical, skopt.space.space.Integer, skopt.space.space.Real]]
The tuning parameter space
- Returns
The mapping from parameter names to skopt search dimensions
- property pipeline: sklearn.pipeline.Pipeline
The ML Pipeline
- Returns
The scikit-learn Pipeline
- class pyskoptimize.base.MLEstimatorWithFeaturePrePostProcess(*, postProcess: pyskoptimize.steps.PostProcessingFeaturePodModel, preprocess: List[pyskoptimize.steps.PreprocessingFeaturePodModel], model: pyskoptimize.steps.SklearnTransformerModel)
This is the ML estimator with full feature engineering: preprocessing steps can be grouped per set of features, followed by a final post-processing step applied to the aggregated result.
- property parameter_space: Dict[str, Union[skopt.space.space.Categorical, skopt.space.space.Integer, skopt.space.space.Real]]
The tuning parameter space
- Returns
The mapping from parameter names to skopt search dimensions
- property pipeline: sklearn.pipeline.Pipeline
The ML Pipeline
- Returns
The scikit-learn Pipeline
- class pyskoptimize.base.MLEstimatorWithFeaturePreprocess(*, preprocess: List[pyskoptimize.steps.PreprocessingFeaturePodModel], model: pyskoptimize.steps.SklearnTransformerModel)
This is the ML estimator with grouped preprocessing feature engineering.
- property parameter_space: Dict[str, Union[skopt.space.space.Categorical, skopt.space.space.Integer, skopt.space.space.Real]]
The tuning parameter space
- Returns
The mapping from parameter names to skopt search dimensions
- property pipeline: sklearn.pipeline.Pipeline
The ML Pipeline
- Returns
The scikit-learn Pipeline
- class pyskoptimize.base.MLOptimizer(*, mlPipeline: Union[pyskoptimize.base.TargetTransformationMLPipeline, pyskoptimize.base.MLEstimatorWithFeaturePrePostProcess, pyskoptimize.base.MLEstimatorWithFeaturePostProcess, pyskoptimize.base.MLEstimatorWithFeaturePreprocess, pyskoptimize.base.MLEstimator], scoring: str, cv: int = 5)
This represents the full pipeline state.
Here we supply a scikit-learn model (e.g. Ridge, LogisticRegression) as the model parameter, a scoring metric supported by scikit-learn, the list of preprocessing steps across all of the features, the post-processing applied to the features those steps produce, and optionally a transformer model that converts the target variable into the desired form.
- Variables
mlPipeline – The pipeline we want to optimize
scoring – The scoring metric
cv – The cross validation number
- to_bayes_opt(verbose: int = 0, n_iter: int = 50) skopt.searchcv.BayesSearchCV
This creates the Bayesian search CV object from the preprocessing, post-processing, model, and target-transformer steps.
- Returns
The Bayesian search object with the base estimator and search space
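Conceptually, `to_bayes_opt` pairs the wrapped pipeline's estimator with its parameter space and the optimizer's scoring/CV settings. The stdlib-only sketch below illustrates that assembly; `SearchCV` and `_DemoPipeline` are hypothetical stand-ins (the real method returns `skopt.searchcv.BayesSearchCV` built from a pyskoptimize pipeline model):

```python
from dataclasses import dataclass


@dataclass
class SearchCV:
    """Stand-in for skopt.searchcv.BayesSearchCV (illustrative only)."""
    estimator: object
    search_spaces: dict
    scoring: str
    cv: int
    n_iter: int = 50


def to_bayes_opt(ml_pipeline, scoring, cv=5, n_iter=50):
    """Mirror of MLOptimizer.to_bayes_opt: pair a pipeline with its space."""
    return SearchCV(
        estimator=ml_pipeline.pipeline,            # the sklearn Pipeline
        search_spaces=ml_pipeline.parameter_space,  # "<step>__<param>" keys
        scoring=scoring,
        cv=cv,
        n_iter=n_iter,
    )


class _DemoPipeline:
    """Hypothetical pipeline model exposing the two documented properties."""
    pipeline = "pipe"  # would be a sklearn Pipeline in real use
    parameter_space = {"model__alpha": (1e-6, 1.0)}


search = to_bayes_opt(_DemoPipeline(), scoring="neg_mean_squared_error")
```

The returned object can then be fit like any scikit-learn search estimator, with `cv` and `n_iter` defaulting to the values shown in the signatures above.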
- class pyskoptimize.base.TargetTransformationMLPipeline(*, baseEstimator: Union[pyskoptimize.base.MLEstimator, pyskoptimize.base.MLEstimatorWithFeaturePreprocess, pyskoptimize.base.MLEstimatorWithFeaturePostProcess, pyskoptimize.base.MLEstimatorWithFeaturePrePostProcess], targetTransformer: pyskoptimize.steps.SklearnTransformerModel)
This is the target transformation pipeline, which wraps a required base estimator.
- static _parameter_space(param_space) Dict
A private helper that re-scopes the names in the parameter space so they target the wrapped regressor
- Parameters
param_space – The initial parameter space
- Returns
The parameter space with names re-scoped to the wrapped regressor
- property parameter_space: Dict[str, Union[skopt.space.space.Categorical, skopt.space.space.Integer, skopt.space.space.Real]]
The tuning parameter space
- Returns
The mapping from parameter names to skopt search dimensions
- property pipeline: sklearn.compose._target.TransformedTargetRegressor
The ML pipeline with the target transformation
- Returns
The TransformedTargetRegressor wrapping the base pipeline
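The namespace change performed by `_parameter_space` can be sketched as a key-prefixing step. When the base estimator's pipeline is wrapped in scikit-learn's `TransformedTargetRegressor`, the inner estimator lives under the `regressor` attribute, so each tuning key must gain a `regressor__` prefix to keep addressing the same parameter (the prefix follows scikit-learn's convention; the helper's exact implementation is an assumption):

```python
# Sketch of the re-scoping that _parameter_space performs: every key in the
# base estimator's search space is prefixed so it reaches through the
# TransformedTargetRegressor wrapper to the inner pipeline.
def prefix_parameter_space(param_space: dict, prefix: str = "regressor") -> dict:
    """Re-key a search space so it targets the wrapped regressor."""
    return {f"{prefix}__{name}": dim for name, dim in param_space.items()}


inner_space = {"model__alpha": (1e-6, 1e2, "log-uniform")}
outer_space = prefix_parameter_space(inner_space)
print(outer_space)  # {'regressor__model__alpha': (1e-06, 100.0, 'log-uniform')}
```

This is why `TargetTransformationMLPipeline.parameter_space` can reuse the base estimator's space unchanged apart from the renamed keys.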