Machine learning.

Toward the end of my PhD in psychology, I trained myself in various machine learning techniques, especially artificial neural networks. To consolidate and advance these skills, I expanded my research focus and added a postdoc on computational modeling to my career path. During my time at the Neuro-Cognitive Modeling lab of Prof. Martin Butz, I investigated how artificial neural networks can be improved by including inductive biases inspired by human cognition, and I applied the empirical rigor I had acquired during my PhD to investigate the models' inner workings.

During winter 2021/22, I was a visiting scholar at Prof. Virginia de Sa's lab at the Halıcıoğlu Data Science Institute at UC San Diego. Her lab works precisely at the intersection of machine learning and psychological research, which made it the perfect place to continue studying topics I had investigated empirically during my PhD, this time with the help of machine learning. A fellowship from the Science department of the University of Tübingen allowed me to take this unique opportunity to work at UCSD, applying a computer vision model to investigate a psychological research question.

Peer-reviewed papers:

• Fabi, S., & Hagendorff, T. (under review). Why we need biased AI – How including ethical and cognitive machine biases can enhance AI systems.

• Fabi, S., Xu, X., & de Sa, V.R. (2022). Exploring the racial bias in pain detection with a computer vision model. Proceedings of the Annual Meeting of the Cognitive Science Society, 44.

• Fabi, S., Holzwarth, L., & Butz, M.V. (2022). Efficient learning through compositionality in a CNN-RNN model consisting of a bottom-up and a top-down pathway. Proceedings of the Annual Meeting of the Cognitive Science Society, 44.

• Fabi, S., Otte, S., Scholz, F., Wührer, J., Karlbauer, M., & Butz, M.V. (2022). Extending the Omniglot Challenge: Imitating handwriting styles on a new sequential data set. IEEE Transactions on Cognitive and Developmental Systems.

• Fabi, S., Otte, S., & Butz, M.V. (2021). Compositionality as learning bias in generative RNNs solves the Omniglot challenge. In International Conference on Learning Representations (ICLR) – Workshop Learning to Learn.

• Fabi, S., Otte, S., & Butz, M.V. (2021). Fostering compositionality in latent, generative encodings to solve the Omniglot challenge. In I. Farkas, P. Masulli, S. Otte, & S. Wermter (Eds.), Proceedings of Artificial Neural Networks and Machine Learning – ICANN 2021, Part II, 525-536.

• Hobbhahn, M., Butz, M.V., Fabi, S., & Otte, S. (2020). Sequence classification using ensembles of recurrent generative expert modules. In Proceedings of the 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning – ESANN 2020, 333-338.

• Fabi, S., Otte, S., Wiese, J.G., & Butz, M.V. (2020). Investigating efficient learning and compositionality in generative LSTM networks. In I. Farkas, P. Masulli, & S. Wermter (Eds.), Proceedings of Artificial Neural Networks and Machine Learning – ICANN 2020, 143-154.

Presentations:

• Fabi, S. (2022). Applying Cognitive Science to Machine Learning and vice versa. Research Talk, DeepMind, London.

• Fabi, S. (2022). Efficient learning in generative RNNs: Solving one-shot tasks by including compositionality as an inductive bias. Tech Talk, Amazon, Tübingen.

• Fabi, S. (2022). Machine learning for psychological research: Using the example of the racial bias in pain recognition. Machine Learning in Science: Postdoc Symposium of the Cluster of Excellence, Tübingen.

• Fabi, S., Otte, S. & Butz, M.V. (2021). Fostering compositionality in generative RNNs to solve the Omniglot challenge. Oral presentation at the Computational Cognition Workshop.

• Fabi, S., Otte, S., & Butz, M.V. (2021). Fostering compositionality in latent, generative encodings to solve the Omniglot challenge. Oral presentation at the 30th International Conference on Artificial Neural Networks (ICANN).

• Fabi, S., Otte, S., & Butz, M.V. (2021). Does compositionality as a prior in generative RNNs lead to efficient learning of temporal predictions? Oral presentation at the ICDL Workshop Spatio-temporal Aspects of Embodied Predictive Processing.

• Fabi, S., Otte, S., & Butz, M.V. (2021). Compositionality as learning bias in generative RNNs solves the Omniglot challenge. Poster presented at the International Conference on Learning Representations (ICLR) – Workshop Learning to Learn.

You can find my Google Scholar profile here.