Presentations

We will have presentations of research papers in the second half of the course. Presentations will be done in groups of 4 students¹; use Piazza to find teammates. Each group can indicate a ranked preference for the topics they’d like to present a paper in, and a paper from the preferred topic² will then be randomly assigned to the group.

Each group will do only one presentation. All members of the group will receive the same grade, except in special/extenuating circumstances.

Each group should collaborate to create an 8-minute presentation video of the assigned paper, based on which they will participate in a 5-6 minute live Q&A session. Research papers typically make a scientific contribution: they propose or claim something that holds and that matters. The overall goal of the presentation is to convey the contribution made in the paper. To that end, the presentation should cover:

  1. Very briefly, what is the paper generally about?
  2. Background and motivation.
  3. What is proposed or claimed in the paper?
  4. What supporting evidence is provided?
  5. Why does the proposal/claim matter?

Only include as much mathematics as needed to convey the key message of the paper. Feel free to use diagrams and equations from the paper in your presentation (with proper acknowledgement).

Important Dates

Fri 5 Feb, noon (wk 4)
Each group should fill in the paper presentation details form with the following information:
  • Student numbers and names
  • Ranked topics in decreasing order of preference
  • Any date or time constraints (e.g. an unsuitable timezone) under which the group cannot participate in the live Q&A.
Mon/Wed, Week 8…, 9am
Each group should submit their presentation video by 9am (UK time) the day before they are scheduled to present, using the presentation slides form.

Presentation Schedule

Paper assignments and presentation schedules: https://edin.ac/3jXQRGN

Presentation Details

For the presentations, we expect that:

Please upload the videos using the presentation slides form, naming your file appropriately as requested on the form.

We do not require any specific method for recording your presentations; record them however you see fit. However, some suggestions for recording presentations with audio are:

It’s entirely up to you whether you record each member’s part individually and concatenate/stitch the recordings into one video, coordinate to record all members together in one go, or use any other arrangement that suits the group.

Marking

The final grade will be based on the presentation, performance at the Q&A session, and (weighted) audience feedback on how well the presentation characterised the paper.

The marking criteria for the presentation will include:

Furthermore, as part of evaluating engagement, every student will also be required to do the following for a different paper from the one they will present (but within the same topic):

(Note: Each student only needs to attend the session where their chosen paper is being presented. However, you are of course welcome to also attend other sessions!)

The submission deadline for the Engagement forms depends on when your chosen paper is being presented: both forms must be submitted on that day.

Suppose your chosen paper is being presented on Tue, 9th March 2021 (Class #1 in the schedule), then

Note that in most cases this will be the same class in which you present your team’s assigned paper.

Adherence to these deadlines will be monitored by the course engagement tracker.

Papers

The following is the broad list of topics the groups will need to choose from. Once a topic is determined for a group, a paper from that topic will be randomly assigned.

PCA and its extensions

  1. Independent Component Analysis: Algorithms and Applications
    A. Hyvärinen and E. Oja
    Neural Networks 2000

  2. Robust Principal Component Analysis
    E. Candès, X. Li, Y. Ma, and J. Wright
    Journal of the ACM 2009

  3. Heterogeneous Component Analysis
    S. Oba, M. Kawanabe, et al
    Advances in Neural Information Processing Systems 21, 2008

  4. Optimal Sparse Linear Encoders and Sparse PCA
    M. Magdon-Ismail and C. Boutsidis
    Advances in Neural Information Processing Systems 29, 2016

  5. On Consistency and Sparsity for Principal Components Analysis in High Dimensions
    I.M. Johnstone and A.Y. Lu
    Journal of the American Statistical Association 2009

  6. Single Pass PCA of Matrix Products
    S. Wu, S. Bhojanapalli, et al
    Advances in Neural Information Processing Systems 29, 2016

  7. Provable Non-convex Robust PCA
    P. Netrapalli, U.N. Niranjan, et al
    Advances in Neural Information Processing Systems 27, 2014

  8. A Generalization of Principal Components Analysis to the Exponential Family
    M. Collins, S. Dasgupta, and R. E. Schapire
    Advances in Neural Information Processing Systems 15, 2002

  9. Semi-parametric Exponential Family PCA
    S. Sajama and A. Orlitsky
    Advances in Neural Information Processing Systems 18, 2005

  10. Demixed Principal Component Analysis
    W. Brendel, R. Romo, and C. K. Machens
    Advances in Neural Information Processing Systems 24, 2011

  11. Memory Limited, Streaming PCA
    I. Mitliagkas, C. Caramanis, and P. Jain
    Advances in Neural Information Processing Systems 26, 2013

  12. Iterative Supervised Principal Components
    J. Piironen and A. Vehtari
    Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (PMLR), 2018

  13. Random Consensus Robust PCA
    D. Pimentel-Alarcon and R. Nowak
    Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (PMLR), 2017

  14. Demixed Principal Component Analysis of Neural Population Data
    D. Kobak, W. Brendel, et al
    eLife, 2016

Dimensionality reduction and data visualisation

  1. Reducing the Dimensionality of Data with Neural Networks
    G. Hinton and R. Salakhutdinov
    Science 2006

  2. On a Connection between Kernel PCA and Metric Multidimensional Scaling
    C. Williams
    Advances in Neural Information Processing Systems 14, 2001

  3. Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data
    M. Radovanovic et al
    Journal of Machine Learning Research 2010

  4. Nonlinear Dimensionality Reduction by Locally Linear Embedding (longer version)
    L. Saul and S. Roweis
    Science 2000

  5. Visualizing Data using t-SNE
    L. van der Maaten and G. Hinton
    Journal of Machine Learning Research 2008

  6. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
    L. McInnes, J. Healy, and J. Melville
    arXiv:1802.03426, 2018

  7. Learning the Parts of Objects by Non-negative Matrix Factorization
    D.D. Lee and H.S. Seung
    Nature 1999

  8. Static and Dynamic Source Separation using Nonnegative Factorizations: A Unified View
    P. Smaragdis, C. Fevotte, et al
    IEEE Signal Processing Magazine 31, 2014

  9. PARAFAC. Tutorial and Applications
    R. Bro
    Chemometrics and Intelligent Laboratory Systems 1997

  10. Dimensionality Reduction for Data in Multiple Feature Representations
    Y.Y. Lin, T.L. Liu, and C.S. Fuh
    Advances in Neural Information Processing Systems 22, 2009

  11. Denoising and Dimension Reduction in Feature Space
    M.L. Braun, K.R. Müller, and J.M. Buhmann
    Advances in Neural Information Processing Systems 20, 2007

  12. Temporal Regularized Matrix Factorization for High-dimensional Time Series Prediction
    H.F. Yu, N. Rao, and I.S. Dhillon
    Advances in Neural Information Processing Systems 29, 2016

  13. Dimensionality Reduction of Massive Sparse Datasets Using Coresets
    D. Feldman, M. Volkov, and D. Rus
    Advances in Neural Information Processing Systems 29, 2016

  14. Isomap Out-of-sample Extension for Noisy Time Series Data
    H. Dadkhahi, M. F. Duarte, and Benjamin Marlin
    IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), 2015

  15. Dimensionality Reduction Using the Sparse Linear Model
    I. A. Gkioulekas and T. Zickler
    Advances in Neural Information Processing Systems 24, 2011

Performance evaluation, hyperparameter selection

  1. Image Quality Assessment: From Error Visibility to Structural Similarity
    Z. Wang, A. Bovik, et al
    IEEE Transactions on Image Processing 2004

  2. Random Search for Hyper-Parameter Optimization
    J. Bergstra and Y. Bengio
    Journal of Machine Learning Research 2012

  3. Practical Bayesian Optimization of Machine Learning Algorithms
    J. Snoek, H. Larochelle, and R. Adams
    Advances in Neural Information Processing Systems 25, 2012

  4. “Why Should I Trust You?” Explaining the Predictions of Any Classifier
    M.T. Ribeiro, S. Singh, and C. Guestrin
    Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016

  5. Minimax-optimal Semi-supervised Regression on Unknown Manifolds
    A. Moscovich, A. Jaffe, and B. Nadler
    Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (PMLR), 2017

Missing data, outliers, and anomaly detection

  1. Isolation Forest
    Liu et al
    Eighth IEEE International Conference on Data Mining 2008

  2. Removing Electroencephalographic Artifacts: Comparison between ICA and PCA
    T.P. Jung et al
    Proceedings of the 1998 IEEE Signal Processing Society Workshop 1998

  3. Recommender Systems: Missing Data and Statistical Model Estimation
    B. Marlin et al
    Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011)

  4. LOF: Identifying Density-Based Local Outliers
    M. Breunig et al
    Proceedings of the ACM SIGMOD International Conference on Management of Data 2000

  5. Support Vector Data Description
    D. Tax and R. Duin
    Machine Learning 2004

  6. Efficient Direct Density Ratio Estimation for Non-stationarity Adaptation and Outlier Detection
    T. Kanamori, S. Hido, and M. Sugiyama
    Advances in Neural Information Processing Systems 22, 2009

  7. Generalized Outlier Detection with Flexible Kernel Density Estimates
    E. Schubert, A. Zimek, and H. Kriegel
    Proceedings of the 2014 SIAM International Conference on Data Mining, 2014

  8. Fast Memory Efficient Local Outlier Detection in Data Streams
    M. Salehi, C. Leckie, J. C. Bezdek, T. Vaithianathan, and X. Zhang
    IEEE Transactions on Knowledge and Data Engineering 28, 2016

  9. Outlier Detection and Trend Detection: Two Sides of the Same Coin
    E. Schubert, M. Weiler, and A. Zimek
    IEEE International Conference on Data Mining Workshop (ICDMW), 2015

  10. In-network PCA and Anomaly Detection
    L. Huang, X. Nguyen, M. Garofalakis, M. I. Jordan, A. Joseph, and N. Taft
    Advances in Neural Information Processing Systems 20, 2007

  11. Feature Set Embedding for Incomplete Data
    D. Grangier and I. Melvin
    Advances in Neural Information Processing Systems 23, 2010

  12. A Denoising View of Matrix Completion
    W. Wang, M. A. Carreira-Perpinan, and Z. Lu
    Advances in Neural Information Processing Systems 24, 2011

  13. Robust Variational Autoencoders for Outlier Detection in Mixed-Type Data
    S. Eduardo, A. Nazabal, C. K. I. Williams, and C. Sutton
    Advances in Neural Information Processing Systems 32, 2019

  14. Plant Functional Trait Change Across a Warming Tundra Biome
    A. D. Bjorkman, I. H. Myers-Smith, et al
    Nature 562, 2018

  15. Why Normalizing Flows Fail to Detect Out-of-Distribution Data
    Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
    Neural Information Processing Systems (NeurIPS) 2020

  16. GAIN: Missing Data Imputation using Generative Adversarial Nets
    Jinsung Yoon, James Jordon, and Mihaela van der Schaar
    International Conference on Machine Learning, 2018

  17. Missing Data Imputation using Optimal Transport
    Boris Muzellec, Julie Josse, Claire Boyer, Marco Cuturi
    International Conference on Machine Learning, 2020

Fairness and Bias

  1. Private traits and attributes are predictable from digital records of human behavior
    M. Kosinski et al
    Proceedings of the National Academy of Sciences 2013

  2. Learning Fair Representations
    R. Zemel et al
    Proceedings of the 30th International Conference on Machine Learning 2013

  3. Why Is My Classifier Discriminatory?
    Irene Y. Chen, Fredrik D. Johansson, and David Sontag
    Neural Information Processing Systems (NeurIPS) 2018

  4. Improving fairness in machine learning systems: What do industry practitioners do?
    K. Holstein, J. W. Vaughan, H. Daumé III, M. Dudík, and H. Wallach
    arXiv:1812.05239, 2018

  5. Equality of Opportunity in Supervised Learning
    Moritz Hardt, Eric Price, Nathan Srebro
    Neural Information Processing Systems (NeurIPS) 2016

  6. REPAIR: Removing Representation Bias by Dataset Resampling
    Yi Li, Nuno Vasconcelos
    Computer Vision and Pattern Recognition 2019

  7. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
    Zhao et al
    Empirical Methods in Natural Language Processing 2017

  8. Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases
    Pierre Stock, and Moustapha Cisse
    European Conference on Computer Vision (ECCV) 2018

  9. Women also snowboard: Overcoming bias in captioning models
    Hendricks et al.
    European Conference on Computer Vision (ECCV) 2018

  10. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
    Hila Gonen, and Yoav Goldberg
    North American Chapter of the Association for Computational Linguistics 2019

  11. Compositional fairness constraints for graph embeddings
    Avishek Bose, and William Hamilton
    International Conference on Machine Learning 2019

  12. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations
    Wang et al
    International Conference on Computer Vision 2019

  13. Fairness Constraints: Mechanisms for Fair Classification
    Zafar et al
    Artificial Intelligence and Statistics (AISTATS) 2017

  14. Mitigating unwanted biases with adversarial learning
    Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell
    Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018

  1. Excepting special/extenuating circumstances. 

  2. We cannot guarantee assignment of the most preferred topic.