"In this presentation, I will address the problem of learning from imbalanced data. I will consider the scenario where the number of negative examples is much larger than the number of positive ones (e.g., in bank fraud detection), which prevents us from using standard loss functions to learn well. I will present a theoretically founded method that learns a set of local ellipsoids centered at the minority-class examples while excluding the negative examples of the majority class. I will address this task from a Mahalanobis-like metric-learning point of view and present generalization guarantees on the learned metric using the uniform stability framework. The experimental evaluation on classic benchmarks and on a proprietary bank fraud detection dataset shows the effectiveness of the approach, particularly when the imbalance is severe."
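To make the idea concrete, here is a minimal toy sketch (not the speaker's actual algorithm) of the ellipsoid view: a single shared Mahalanobis matrix `M` is learned by hinge-loss subgradient descent so that, around every minority example used as a center, majority examples are pushed outside the unit ellipsoid. All data, the margin, the learning rate, and the per-center unit radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced data: a few positives, many negatives.
pos = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(5, 2))
neg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))

def mahalanobis_sq(X, c, M):
    """Squared Mahalanobis distance of each row of X to center c."""
    D = X - c
    return np.einsum("ij,jk,ik->i", D, M, D)

# Learn a shared PSD matrix M so that, for every minority center c,
# negatives satisfy d_M^2(x, c) >= 1 + margin (hinge loss on violations).
M = np.eye(2)
margin, lr = 0.5, 1e-3
for _ in range(200):
    grad = np.zeros_like(M)
    for c in pos:
        D = neg - c
        d2 = np.einsum("ij,jk,ik->i", D, M, D)
        viol = d2 < 1.0 + margin          # negatives still inside the margin
        # Subgradient of max(0, 1 + margin - d2) w.r.t. M is -D D^T.
        grad -= D[viol].T @ D[viol]
    M -= lr * grad / len(pos)
    # Project back onto the PSD cone by clipping negative eigenvalues.
    w, V = np.linalg.eigh(M)
    M = (V * np.clip(w, 0.0, None)) @ V.T

def predict(X):
    """Label a point positive iff it lies inside any learned ellipsoid."""
    inside = np.array([mahalanobis_sq(X, c, M) <= 1.0 for c in pos])
    return inside.any(axis=0)
```

Because the decision region is a union of small ellipsoids around the rare positives, the huge negative class never dominates the loss the way it would with a standard global classifier, which is the intuition behind the approach described in the abstract.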