
PROJECTS

2019 CVPR | Discovering Fair Representations in the Data Domain
with Novi Quadrianto and Oliver Thomas

 

Fairness and interpretability are critical in computer vision and machine learning applications that affect human outcomes, e.g., deciding whether to invite a candidate for a job interview based on application materials that may include photographs. In this work, we cast this problem as data-to-data translation, i.e. learning a mapping from an input domain to a fair target domain in which a chosen fairness definition is enforced. The data domain can be images or any tabular data representation.

arXiv  Code

2017 NIPS | Recycling Privileged Learning and Distribution Matching for Fairness
with Novi Quadrianto

In this work, we focus on ensuring that machine learning models deliver fair decisions. To achieve this goal, we recycle two well-established machine learning techniques, privileged learning and distribution matching, and harmonize them to satisfy multi-faceted fairness definitions.
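As an illustration only (not the formulation used in the paper), one common distribution-matching penalty is the squared maximum mean discrepancy (MMD) between the scores a model assigns to two protected groups; the function below, with an assumed RBF kernel and toy inputs, sketches the idea:

```python
import numpy as np

def mmd2_rbf(a, b, gamma=1.0):
    """Squared maximum mean discrepancy between two 1-D samples under an
    RBF kernel: zero (in the population limit) iff the distributions match."""
    def k(x, y):
        return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

# scores a model assigns to members of two protected groups (toy numbers)
scores_group_a = np.array([0.2, 0.4, 0.6])
scores_group_b = np.array([0.7, 0.8, 0.9])
print(mmd2_rbf(scores_group_a, scores_group_b))
```

Adding such a penalty to a training objective pushes the group-conditional score distributions together, which is one way of encoding a fairness constraint.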

2016 CVPR | Learning from the Mistakes of Others: Matching Errors in Cross-Dataset Learning (spotlight)
with Novi Quadrianto

 

In this work, we propose a framework for solving computer vision tasks in images by acquiring knowledge from the mistakes made on other data collections (videos, clip art, and 3D models) when learning the same concepts.

Supplementary  Code  Spotlight

2016 CVPR | Ambiguity Helps: Classification with Disagreements in Crowdsourced Annotations   
with Daniel Hernández-Lobato, José Miguel Hernández-Lobato 
and Novi Quadrianto

 

In this work, we focus on ambiguity in crowd-sourced annotations. We show that, given the annotations and a chosen aggregation strategy for defining the ground-truth label (such as majority voting), classification results can be improved by taking annotators' disagreements into account.
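For concreteness, the kind of aggregation strategy mentioned above, here plain majority voting over a toy label set (the paper's contribution is precisely to go beyond this by modelling the disagreements themselves), can be sketched as:

```python
from collections import Counter

def majority_vote(annotations):
    """Pick the most frequent label among each item's annotations."""
    return [Counter(labels).most_common(1)[0][0] for labels in annotations]

# each inner list = one image's labels from several annotators
print(majority_vote([["cat", "cat", "dog"], ["dog", "dog", "cat"]]))  # ['cat', 'dog']
```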

Supplementary  Code  Slides     

2016 IJCAI | Learning using Unselected Features (LUFe)   
with Joseph Taylor, Kristian Kersting, David Weir and Novi Quadrianto 

 

In this work, we revisit the classical task of feature selection and describe a strategy that allows selected and unselected features to serve different functions in classification (LUFe). In this framework, selected features are used directly to set the decision boundary, while unselected features are utilized as privileged information, at no additional cost at test time.

Code

2015 CVPR | Curriculum Learning of Multiple Tasks 
with Anastasia Pentina and Christoph Lampert

 

In this work, we propose an approach that processes multiple tasks in a sequence, sharing information between subsequent tasks, instead of solving all tasks jointly. We address the question of curriculum learning of tasks, i.e. finding the best order in which the tasks should be learned.

Supplementary Code

2015 THESIS | Learning with Attributes for Object Recognition: Parametric and Non-parametric Views

 

We address two key learning challenges in the context of object recognition in images: learning augmented attributes as a mid-level discriminative feature representation, and learning with attributes as privileged information. Our main contributions are parametric and non-parametric statistical learning models for these two settings.

2014 NIPS | Mind the Nuisance: Gaussian Process Classification using Privileged Noise
with Daniel Hernández-Lobato, Kristian Kersting, Christoph Lampert and Novi Quadrianto
 

In this work, we propose a method for learning with privileged information within the framework of Gaussian process classifiers (GPC). In our Bayesian treatment, the privileged data enters the model in the form of heteroscedastic (input-dependent) noise in the GPC.
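The heteroscedastic-noise idea can be illustrated in a simplified regression setting (the paper itself treats classification with approximate Bayesian inference; the function, RBF kernel, and toy numbers below are illustrative assumptions):

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, noise_var, lengthscale=1.0):
    """Posterior mean of GP regression with per-point (heteroscedastic)
    noise. In the privileged-noise view, each training point's noise_var
    entry would be predicted from its privileged features."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale ** 2)

    K = rbf(X, X) + np.diag(noise_var)   # high-noise points count for less
    return rbf(X_star, X) @ np.linalg.solve(K, y)
```

A training point assigned a large noise variance (e.g. one whose privileged features suggest it is unreliable) pulls the posterior only weakly, so the model effectively trusts it less.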

Supplementary Code

2014 arXiv | Learning to Transfer Privileged Information
with Novi Quadrianto and Christoph Lampert

 

In this work, we explore two maximum-margin techniques that can make use of privileged information for binary and multiclass object classification. We interpret them as learning the easiness and hardness of examples in the privileged space and transferring this knowledge to train a better classifier in the original space.
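A rough sketch of the easiness-transfer reading, using an illustrative logistic model trained by gradient descent and a simple confidence-based weighting rather than the paper's maximum-margin formulation:

```python
import numpy as np

def fit_logreg(X, y, sample_weight=None, lr=0.1, steps=500):
    """Weighted logistic regression by gradient descent (y in {-1, +1})."""
    n, d = X.shape
    w = np.zeros(d)
    sw = np.ones(n) if sample_weight is None else sample_weight
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-y * (X @ w)))          # P(correct label)
        grad = -(X * (sw * y * (1 - p))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X_priv = rng.normal(size=(200, 4))                      # rich privileged space
y = np.sign(X_priv[:, 0] + 0.1 * rng.normal(size=200))
X = X_priv[:, :2] + 0.5 * rng.normal(size=(200, 2))     # noisier original space

# 1) learn in the privileged space; its confidences grade example easiness
w_priv = fit_logreg(X_priv, y)
easiness = 1.0 / (1.0 + np.exp(-y * (X_priv @ w_priv)))

# 2) re-train in the original space, up-weighting easy examples;
#    privileged features are never needed at test time
w = fit_logreg(X, y, sample_weight=easiness)
```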

Data annotation Code

2013 ICCV | Learning to Rank Using Privileged Information
with Novi Quadrianto and Christoph Lampert

 

In this work, we study the setting in which we are given additional information about the training data that will not be available at test time. This is known as learning using privileged information (LUPI). We introduce a maximum-margin approach to ranking that makes use of this privileged source of information.

Supplementary Code

2013 UAI | The Supervised IBP: Neighbourhood Preserving Infinite Latent Feature Models

with Novi Quadrianto, David A. Knowles and Zoubin Ghahramani

 

In this work, we look at the problem of learning a discriminative attribute-based representation from the probabilistic modeling perspective. We take advantage of a non-parametric approach and allow the augmented representation to grow with the data, without pre-specifying the dimensionality of the attribute space.

ECCV 2012: Augmented Attribute Representations
2012 ECCV | Augmented Attribute Representations
with Novi Quadrianto and Christoph Lampert

 

In this work, we propose a parametric model that augments the semantic attributes with a discriminative attribute part, such that the inferred augmented attribute representation can be used directly for nearest neighbor classification.

To see more or to discuss possible collaborations, let's talk >>