Jason Chow

Computational Cognitive Scientist

About Me

I am a computational cognitive scientist who leverages machine learning and AI to understand human perception and decision-making. I develop machine learning models inspired by principles from cognitive psychology to extract insights from high-dimensional data, and my skill set lets me distill complex human data into actionable takeaways.

I am proficient in Python, R, and JavaScript. I have extensive experience with machine learning libraries such as TensorFlow, PyTorch, and scikit-learn. My expertise also includes data visualization with D3.js and statistical analysis with the tidyverse, lavaan, and BayesFactor. In my side projects, I've applied my broader technical skills to build end-to-end consumer-facing products: ETL pipelines, efficient data APIs, and interactive data visualization dashboards handling data with millions of rows. Across these projects, I've served 25K+ users.

See my resume for more details!

Education

Vanderbilt University

PhD Psychological Sciences

2018 - 2024

  • Developed a scalable framework to reliably measure how representations vary across 700+ deep neural networks and 9 datasets as a function of model attributes (e.g., architecture, training dataset, and training regime); a sketch of one such similarity metric follows this list.
  • Distilled insights from large-scale, high-dimensional analyses (e.g., INDSCAL, hierarchical clustering) of deep neural network representations to efficiently instantiate parametric manipulations across 100 new networks, systematically varying representations to model individual differences in object recognition ability.
  • Implemented a psychologically inspired transfer-learning DNN architecture, improving multi-task classification accuracy by 3%.
  • Designed, optimized, and validated new measures of object recognition ability in vision, haptics, and audition using hand-designed trials and data-driven automated techniques, achieving high reliability with 25% shorter tests.
  • Created an internal R package for statistical analysis and visualization of multivariate individual-differences data using confirmatory factor analysis and Bayesian hypothesis testing, supporting 7 first-author publications.
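
To give a flavor of the representational comparisons above, here is a minimal sketch of linear centered kernel alignment (CKA), one metric commonly used to compare network representations. The random activation matrices are stand-ins for features extracted from real models, not the framework's actual pipeline.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X, Y: (n_stimuli, n_features) activations from two networks on the
    same stimuli; feature dimensions may differ between networks.
    Returns a similarity in [0, 1].
    """
    # Center each feature column so the comparison ignores mean offsets.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # ||Y^T X||_F^2 normalized by ||X^T X||_F * ||Y^T Y||_F.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Toy example: two random "networks" viewing the same 50 stimuli.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(50, 128))  # activations from network A
acts_b = rng.normal(size=(50, 256))  # activations from network B
print(linear_cka(acts_a, acts_b))
```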

Portfolio

RaidedLoA

Lost Ark data analysis website

Demo link

Website analyzing global trends in performance across classes in the game. Developed a data-scraping Python CLI for use with GitHub Actions. Displayed the data in an interactive dashboard built with D3.js and Observable Framework. Used user feedback and analytics to shape refinement of the user experience and development of new features. Focused on a strong experience for a global audience while efficiently using available resources to collect, process, and serve data. Visited by 22K+ users.
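
As a hedged sketch of the scrape-and-append pattern such a CLI can follow when run from a GitHub Actions job: the endpoint, field names, and output path below are hypothetical stand-ins, not the site's actual data source.

```python
# Hypothetical sketch: fetch today's figures and append them to a CSV
# kept in the repository. Endpoint and fields are illustrative only.
import argparse
import csv
import datetime

import requests

API_URL = "https://example.com/api/performance"  # hypothetical endpoint

def scrape(out_path: str) -> None:
    rows = requests.get(API_URL, timeout=30).json()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        stamp = datetime.date.today().isoformat()
        for row in rows:
            writer.writerow([stamp, row["class"], row["dps"]])

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Append today's data.")
    parser.add_argument("--out", default="data/performance.csv")
    args = parser.parse_args()
    scrape(args.out)
```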

RaidedGW2

Guild Wars 2 data toolset

Demo link

Set of tools to collect, analyze, and display historical team data. Created a Discord bot for ETL into a MySQL database and a Flask web API to serve data to an interactive dashboard. Added novel statistical methods to avoid misleading conclusions and make results easier to understand. Focused on a closely tailored experience to best serve the teams using the tools.
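
A minimal sketch of what the serving layer can look like, under stated assumptions: the route, table, and column names below are illustrative, not the project's actual schema.

```python
# Hypothetical sketch of a small Flask read API over MySQL; route and
# schema names are stand-ins for the project's real ones.
import os

import pymysql
from flask import Flask, jsonify

app = Flask(__name__)

def get_conn():
    return pymysql.connect(
        host="localhost",
        user="raided",
        password=os.environ["DB_PASSWORD"],
        database="gw2",
        cursorclass=pymysql.cursors.DictCursor,
    )

@app.route("/api/encounters/<int:team_id>")
def encounters(team_id):
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            # Most recent 50 encounters for one team.
            cur.execute(
                "SELECT boss, success, duration_s FROM encounters "
                "WHERE team_id = %s ORDER BY fought_at DESC LIMIT 50",
                (team_id,),
            )
            rows = cur.fetchall()
    finally:
        conn.close()
    return jsonify(rows)
```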

RaidedCF

Crowfall log parser

Demo link

A tool to parse and organize combat logs from poorly structured data. Focused on providing high-level information at a glance while preserving high information density through interactive breakdowns. Built entirely with JavaScript and D3.js, requiring no server-side processing; this efficiency- and privacy-first design lets users trust that their data stays on their machine.

Research Interests

Individual differences in object recognition ability

In this line of research, I have developed completely new tests of object recognition ability, applying psychometric techniques to confidently measure this ability in haptic and auditory perception. Using multivariate statistical techniques like confirmatory factor analysis, I found that visual and haptic object recognition share about 25% of their variance (a latent correlation of about .5). Interestingly, visual and auditory object recognition abilities were almost perfectly correlated. These robust relationships remain even when controlling for possible third factors like general intelligence, working memory, and low-level visual ability. These findings suggest that object recognition ability taps into common perceptual mechanisms that extend across modalities.
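
To illustrate the shared-variance logic on simulated scores, here is a minimal sketch using a reliability-corrected (disattenuated) correlation, a standard psychometric ingredient; the published analyses fit full confirmatory factor models (e.g., in lavaan), and every number below is made up.

```python
# Illustrative only: disattenuated correlation between two ability
# measures, squared to give shared variance. Scores are simulated.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(size=300)                             # latent ability
visual = ability + rng.normal(scale=1.0, size=300)         # noisy visual test
haptic = 0.5 * ability + rng.normal(scale=1.0, size=300)   # noisy haptic test

r_obs = np.corrcoef(visual, haptic)[0, 1]

# Correct for measurement error using each test's reliability
# (made-up values here; in practice, e.g., split-half estimates).
rel_visual, rel_haptic = 0.85, 0.80
r_true = r_obs / np.sqrt(rel_visual * rel_haptic)

print(f"observed r = {r_obs:.2f}, corrected r = {r_true:.2f}, "
      f"shared variance = {r_true**2:.0%}")
```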

Representative Publications:

  • Chow, J.K., Palmeri, T.J., & Gauthier, I. (2024). Distinct but related abilities for visual and haptic object recognition. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-024-02471-x
  • Chow, J.K., Palmeri, T. J., Pluck, G., & Gauthier, I. (2023). Evidence for an amodal domain general object recognition ability. Cognition, 238, 105542.

Modeling Individual Differences in Perception with DNNs

We know that there are reliable individual differences in perceptual abilities, but why they exist remains elusive. I am interested in using the latest deep neural networks to model individual differences and to ask which factors best account for them. Because modeling individual differences with deep neural networks is a relatively new approach, I have worked to determine the best ways to measure differences between networks, directly comparing similarity metrics used in machine learning, psychology, and neuroscience. In parallel, I have begun determining which factors in deep neural networks are best to manipulate when the goal is to model individual differences, sampling from a large space of networks to efficiently test which combinations of factors produce the most consistent variation.
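
Complementing the CKA sketch earlier (a machine-learning metric), here is a minimal sketch of representational similarity analysis (RSA), the comparison common in psychology and neuroscience: build each network's representational dissimilarity matrix (RDM) over a shared stimulus set, then rank-correlate the RDMs. The random activations are stand-ins for real model features.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_similarity(acts_a, acts_b):
    """Spearman correlation between two networks' RDMs.

    acts_*: (n_stimuli, n_features) activations for the same stimuli.
    """
    # pdist returns the condensed (upper-triangle) pairwise distances,
    # i.e., each network's representational dissimilarity matrix.
    rdm_a = pdist(acts_a, metric="correlation")
    rdm_b = pdist(acts_b, metric="correlation")
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

rng = np.random.default_rng(0)
print(rsa_similarity(rng.normal(size=(40, 512)),
                     rng.normal(size=(40, 512))))
```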

Representative Publications:

  • Chow, J. & Palmeri, T. (2024). Manipulating and Measuring Variation in Deep Neural Network (DNN) Representations of Objects. Cognition. https://osf.io/preprints/psyarxiv/yw49e