€23.99
incl. VAT

Ready to ship in 6-10 days
  • Paperback

Product Description
Prediction models have reached a stage where a single model is no longer sufficient to make accurate predictions. Hence, to achieve better accuracy and performance, ensembles of models are used. The gradient boosting algorithm is a component of almost all such ensembles, and winners of Kaggle competitions swear by it. Extreme Gradient Boosting (XGBoost) takes this a step further by optimising the loss function. In this research work, a squared logistic loss function is used as the boosting objective, which is expected to reduce both bias and variance. The proposed model is applied to ten years of stock market data. The squared logistic loss function with XGBoost promises to be an effective approach in terms of accuracy and prediction quality.
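The book's exact formulation is not reproduced on this page; as a rough illustration, one common reading of "squared logistic loss" is L(y, f) = ½·[log(1 + e^(−yf))]² with labels y ∈ {−1, +1}. The sketch below (an assumption, not the author's code) plugs that loss into XGBoost as a custom objective; the function name squared_logistic_obj and all hyperparameters are illustrative.

    import numpy as np
    import xgboost as xgb

    def squared_logistic_obj(preds, dtrain):
        """Assumed loss: L(y, f) = 0.5 * log(1 + exp(-y*f))**2, y in {-1, +1}."""
        y = 2.0 * dtrain.get_label() - 1.0   # map {0, 1} labels to {-1, +1}
        margin = y * preds
        ell = np.logaddexp(0.0, -margin)     # plain logistic loss, numerically stable
        s = 1.0 / (1.0 + np.exp(margin))     # sigmoid(-margin); overflow saturates to 0, which is the correct limit
        grad = -y * s * ell                  # first derivative of L w.r.t. f
        hess = s * s + ell * s * (1.0 - s)   # second derivative of L w.r.t. f
        return grad, hess

    # Hypothetical usage with binary labels y_train in {0, 1}:
    # dtrain = xgb.DMatrix(X_train, label=y_train)
    # booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
    #                     num_boost_round=200, obj=squared_logistic_obj)

Squaring the logistic loss penalises confident mistakes more heavily than plain logistic loss, which is one way to read the book's claim of reduced bias; the actual derivation and evaluation are in the text itself.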
About the Author
Nonita Sharma is an Assistant Professor in the Department of Computer Science & Engineering at Dr. B. R. Ambedkar National Institute of Technology, Jalandhar. Her research interests include Wireless Sensor Networks, IoT, Big Data Analytics, and Data Mining.