Paper Title
Information-theoretic analyses of neural data to minimize the effect of researchers' assumptions in predictive coding studies
Paper Authors
Paper Abstract
Studies investigating neural information processing often implicitly ask two questions at once: which processing strategy, out of several alternatives, is used, and how that strategy is implemented in neural dynamics. A prime example is the study of predictive coding. Such studies often ask whether confirmed predictions about inputs, or prediction errors between internal predictions and inputs, are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions. If we do not know exactly what a neural system predicts at any given moment, this results in a circular analysis, as has rightly been criticized. To avoid such circularity, we propose expressing information processing strategies (such as predictive coding) in terms of local information-theoretic quantities, so that they can be estimated directly from neural data. We demonstrate our approach by investigating two opposing accounts of predictive-coding-like processing strategies: we quantify the building blocks of predictive coding, namely the predictability of inputs and the transfer of information, by local active information storage and local transfer entropy, respectively. We define testable hypotheses on the relationship between the two quantities to identify which of the assumed strategies was used. We demonstrate our approach on spiking data from the retinogeniculate synapse of the cat. Applying our local information dynamics framework, we show that the synapse codes for predictable rather than surprising input. To support this finding, we apply measures from partial information decomposition, which allow us to differentiate whether the transferred information is primarily bottom-up sensory input or information transferred conditionally on the current state of the synapse. Consistent with our local information-theoretic results, we find that the synapse preferentially transfers bottom-up information.
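The two quantities named in the abstract, local active information storage and local transfer entropy, can be illustrated for discretized spike trains with simple plug-in (frequency) probability estimates. The sketch below is not the authors' analysis pipeline, which would rely on dedicated estimators with bias correction; the function names and the history length `k` are illustrative assumptions, and real spike data would first need binning into a discrete time series.

```python
from collections import Counter
import math

def local_ais(x, k=1):
    """Local active information storage for a discrete series x:
    a(t) = log2[ p(x_t | x_{t-k..t-1}) / p(x_t) ],
    with probabilities estimated by plug-in (relative frequency) counts."""
    joint, past_c, marg = Counter(), Counter(), Counter()
    samples = []
    for t in range(k, len(x)):
        past = tuple(x[t - k:t])
        joint[(past, x[t])] += 1
        past_c[past] += 1
        marg[x[t]] += 1
        samples.append((past, x[t]))
    total = len(x) - k
    # Positive values: the past made this symbol predictable (informative storage);
    # negative values: the symbol was surprising given the past.
    return [math.log2((joint[(p, s)] / past_c[p]) / (marg[s] / total))
            for p, s in samples]

def local_te(y, x, k=1):
    """Local transfer entropy from source y to target x:
    t(t) = log2[ p(x_t | x_{t-k..t-1}, y_{t-1}) / p(x_t | x_{t-k..t-1}) ]."""
    joint_src, cond_src, joint, cond = Counter(), Counter(), Counter(), Counter()
    samples = []
    for t in range(k, min(len(x), len(y))):
        past, src = tuple(x[t - k:t]), y[t - 1]
        joint_src[(past, src, x[t])] += 1
        cond_src[(past, src)] += 1
        joint[(past, x[t])] += 1
        cond[past] += 1
        samples.append((past, src, x[t]))
    # Positive values: the source improved prediction of the target beyond
    # the target's own past (local information transfer).
    return [math.log2((joint_src[(p, s, xt)] / cond_src[(p, s)]) /
                      (joint[(p, xt)] / cond[p]))
            for p, s, xt in samples]
```

As a sanity check of the definitions, a perfectly periodic series yields local storage near 1 bit per sample, and a target that simply copies its source with one step of delay yields local transfer near 1 bit per sample, matching the intuition that storage captures self-predictability while transfer captures what the source adds.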