Discovering Salient Neurons in deep NLP models

Nadir Durrani, Fahim Dalvi, Hassan Sajjad

Research output: Contribution to journal › Article › peer-review

Abstract

While a lot of work has been done in understanding representations learned within deep NLP models and what knowledge they capture, work done towards analyzing individual neurons is relatively sparse. We present a technique called Linguistic Correlation Analysis to extract salient neurons in the model, with respect to any extrinsic property, with the goal of understanding how such knowledge is preserved within neurons. We carry out a fine-grained analysis to answer the following questions: (i) can we identify subsets of neurons in the network that learn a specific linguistic property? (ii) is a certain linguistic phenomenon in a given model localized (encoded in few individual neurons) or distributed across many neurons? (iii) how redundantly is the information preserved? (iv) how does fine-tuning pre-trained models towards downstream NLP tasks impact the learned linguistic knowledge? (v) how do models vary in learning different linguistic properties? Our data-driven, quantitative analysis illuminates interesting findings: (i) we found small subsets of neurons that can predict different linguistic tasks; (ii) neurons capturing basic lexical information, such as suffixation, are localized in the lowermost layers; (iii) neurons learning complex concepts, such as syntactic role, are predominantly found in middle and higher layers; (iv) salient linguistic neurons are relocated from higher to lower layers during transfer learning, as the network preserves the higher layers for task-specific information; (v) we found interesting differences across pre-trained models regarding how linguistic information is preserved within them; and (vi) we found that concepts exhibit similar neuron distribution across different languages in the multilingual transformer models. Our code is publicly available as part of the NeuroX toolkit (Dalvi et al., 2023).
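The core recipe behind this style of neuron analysis is to train a sparsity-regularized linear probe from neuron activations to the extrinsic property, then rank neurons by the magnitude of their probe weights. The sketch below illustrates that idea on synthetic data; the activation matrix, the "salient" neuron indices, and all hyperparameters are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for neuron activations: 500 tokens x 64 "neurons".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))

# Hypothetical binary linguistic property driven by only a few neurons,
# so a sparse probe should be able to recover them.
salient = [3, 17, 42]
y = (X[:, salient].sum(axis=1) > 0).astype(int)

# Elastic-net-regularized linear probe: the L1 component pushes the
# weights of uninformative neurons toward zero.
probe = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.1, max_iter=5000
)
probe.fit(X, y)

# Rank neurons by absolute probe weight; the top-ranked neurons are the
# ones most predictive of the property.
ranking = np.argsort(-np.abs(probe.coef_[0]))
top_neurons = ranking[:5]
```

A localized property would concentrate probe weight on a handful of neurons, while a distributed one would spread it thinly; comparing the size of the top-ranked set needed to reach probe accuracy is one way to operationalize that distinction.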
Original language: English
Article number: 13288
Number of pages: 40
Journal: Journal of Machine Learning Research
Volume: 24
Publication status: Published - 2023

Keywords

  • Explainable AI
  • Interpretability
  • Neuron Analysis
  • Representation Analysis
