Jason Chow

Computational Cognitive Psychologist

About Me

Hi, my name is Jason Chow. I’m a PhD student in Psychological Sciences at Vanderbilt University deeply interested in the intersection of deep neural networks and individual differences in high-level perception. I want to use deep neural networks to study why human individual differences occur and to take those insights back to build better models of human perception.

I enjoy expanding my technical toolkit to solve new and exciting problems. I believe that having the right tool at hand is half the battle. Over the years, I have developed a wide range of skills in multivariate statistics, Bayesian analysis, deep learning, online data collection, and simulation techniques, implemented in R, Python, and JavaScript.

See my CV for more details!


Vanderbilt University

PhD Psychological Sciences

2018 - Present

I came to Vanderbilt University to work with Dr. Thomas Palmeri and Dr. Isabel Gauthier. Being co-advised let me develop two lines of research studying individual differences in high-level perception, specifically object recognition.

In my behavioral work, I developed new haptic and auditory object recognition tests and leveraged a variety of multivariate and Bayesian statistical techniques, implemented in JavaScript and R. My dissertation focuses on my modeling work, where I study how best to manipulate and measure deep neural network models of individual differences, implemented in Python with TensorFlow.

University of Toronto

BS Honors Psychology

2014 - 2018

From my tiny hometown in rural Alberta, I went to the University of Toronto where my first lecture had more people than my entire town. As a first-generation student with first-generation parents, I had no idea what I wanted to do.

Early on, I was fortunate enough to get involved in research with Dr. Michael Mack and Dr. Lynn Hasher where I cut my teeth on my own projects in Psychology. I built a wide range of technical skills in programming, statistics, and 3D printing to apply to my research.

Research Interests

Individual differences in object recognition ability

In this line of research, I have developed completely new tests of object recognition ability, applying psychometric techniques to confidently measure this ability in haptic and auditory perception. Using multivariate statistical techniques like confirmatory factor analysis, I found that visual and haptic object recognition share about 25% of their variance. Interestingly, between visual and auditory object recognition ability, there was an almost perfect correlation across modalities. These robust relationships remain even when controlling for possible third factors like general intelligence, working memory, and low-level visual ability. These findings suggest that object recognition ability taps into common perceptual mechanisms that extend across modalities.
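As a back-of-the-envelope illustration, the shared variance between two latent abilities is the squared factor correlation, so the ~25% figure corresponds to a correlation of about 0.5 (the exact value below is hypothetical, not a reported estimate):

```python
# Illustrative only: the 0.5 factor correlation is a hypothetical value
# implied by the ~25% shared-variance figure, not a reported estimate.
r_visual_haptic = 0.5
shared_variance = r_visual_haptic ** 2  # squared correlation = shared variance
print(f"{shared_variance:.0%}")  # about 25% of their variance
```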

Representative Publications:

  • Chow, J.K., Palmeri, T.J. & Gauthier, I. (2024). Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev. https://doi.org/10.3758/s13423-024-02471-x
  • Chow, J.K., Palmeri, T. J., Pluck, G., & Gauthier, I. (2023). Evidence for an amodal domain general object recognition ability. Cognition, 238, 105542.

Modeling Individual Differences in Perception with DNNs

We know that there are reliable individual differences in perceptual abilities, but why they exist remains elusive. I am interested in using the latest deep neural networks to model individual differences and to ask which factors best account for those differences. Because modeling individual differences with deep neural networks is a relatively new approach, I have worked to determine the best ways to measure differences between deep neural networks, directly comparing similarity metrics used in machine learning, psychology, and neuroscience. In parallel, I have begun determining which factors in deep neural networks are the best to manipulate when the goal is to model individual differences. To do this, I sampled from a large space of deep neural networks to efficiently test which combinations of factors produce the most consistent variation.
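One widely used metric of this kind is linear centered kernel alignment (CKA), which compares two networks' activation matrices over the same stimuli. A minimal NumPy sketch, with random arrays standing in for real network activations (the shapes and names here are assumptions for illustration, not details from my studies):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation
    matrices (n_stimuli x n_units); 1 means identical representations
    up to rotation and scaling."""
    X = X - X.mean(axis=0)  # center each unit's activations
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 64))                  # stand-in activations, network A
acts_b = acts_a + 0.1 * rng.normal(size=(100, 64))   # a slightly perturbed "network B"
print(linear_cka(acts_a, acts_a))  # ~1.0 for identical representations
print(linear_cka(acts_a, acts_b))  # high, but below 1.0
```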

Representative Publications:

  • Chow, J. & Palmeri, T. (2022). Manipulating and Measuring Variation in DNN Representations. Poster presented at: 2022 Conference on Cognitive Computational Neuroscience; Aug 2022; San Francisco, CA

Personal Projects


Guild Wars 2 Data Tool

Demo link

The game Guild Wars 2 has rich combat logs that can be parsed on a per-fight basis. These logs include information on teams of players and their performance in individual fights. I built a Discord bot in Python that connects to my backend server, which collects and parses logs and stores the data in a MySQL database. The backend server also acts as an API for my frontend web app, built with D3.js in Observable Notebooks, to provide an interactive way to explore the data.
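A minimal sketch of the storage step, with the standard-library sqlite3 module standing in for the real MySQL backend, and hypothetical field names (fight_id, player, dps) in place of the actual log schema:

```python
import sqlite3

# sqlite3 stands in for MySQL so this sketch is self-contained;
# the field names below are hypothetical, not the real log schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fights (fight_id TEXT, player TEXT, dps REAL)")

parsed_log = [  # records a log parser might emit for one fight
    {"fight_id": "keep-siege-01", "player": "A", "dps": 3120.5},
    {"fight_id": "keep-siege-01", "player": "B", "dps": 2890.0},
]
conn.executemany(
    "INSERT INTO fights VALUES (:fight_id, :player, :dps)", parsed_log)

# The kind of summary query the Discord bot or web API could serve:
top = conn.execute(
    "SELECT player, MAX(dps) FROM fights GROUP BY player").fetchall()
```

Parameterized queries like the `executemany` call above keep user-supplied log contents from being interpreted as SQL.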

While log parsers are common in the Guild Wars 2 community, there is no central repository to track performance week over week or to compare people’s performance in the same fights. This project aimed to fill that gap at a small scale for my friends. I focused on providing a clean and intuitive interface, both from the Discord bot and the online web app, that is easy to understand while still offering plenty of options.


Crowfall Log Parser

Demo link

The short-lived game Crowfall could produce combat logs, but they were poorly formatted and there was no convenient way to parse and organize the information. Further, the community had a culture of keeping strategies secret, so privacy was a concern. I built this tool with D3.js in Observable Notebooks to parse and organize the data entirely locally in the browser.

With this project, I focused on making the tool display as much information as possible while still being useful at a glance. I wanted users to be able to quickly glean summary information and then dig into exact breakdowns of the data.