Paper Title
Towards Group Learning: Distributed Weighting of Experts
Paper Authors
Paper Abstract
Aggregating signals from a collection of noisy sources is a fundamental problem in many domains, including crowdsourcing, multi-agent planning, sensor networks, signal processing, voting, ensemble learning, and federated learning. The core question is how to aggregate signals from multiple sources (e.g., experts) in order to reveal an underlying ground truth. While a full answer depends on the type of signal, the correlation of signals, and the desired output, a problem common to all of these applications is that of differentiating sources based on their quality and weighting them accordingly. It is often assumed that this differentiation and aggregation is done by a single, accurate central mechanism or agent (e.g., a judge). We complicate this model in two ways. First, we investigate both the setting with a single judge and the setting with multiple judges. Second, given this multi-agent interaction of judges, we investigate various constraints on the judges' reporting space. We build on known results for the optimal weighting of experts and prove that an ensemble of sub-optimal mechanisms can perform optimally under certain conditions. We then show empirically that the ensemble approximates the performance of the optimal mechanism under a broader range of conditions.
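As a rough illustration of the quantities the abstract refers to, the sketch below simulates conditionally independent binary experts and compares three aggregators: the classical log-odds weighting of experts (optimal for independent experts with known accuracies), a single sub-optimal unweighted majority vote, and a simple ensemble in which several "judges" each take an unweighted majority over a subset of experts and are then majority-voted themselves. The simulation setup, the subset-based judges, and all names (simulate_experts, weighted_majority, accuracies, etc.) are illustrative assumptions for this sketch, not the paper's actual mechanisms or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_experts(n_experts, n_items, accuracies, truth):
    """Each expert reports the true binary label independently with their own accuracy."""
    correct = rng.random((n_experts, n_items)) < accuracies[:, None]
    return np.where(correct, truth, 1 - truth)

def weighted_majority(votes, weights):
    """Aggregate {0,1} votes with per-expert weights; ties broken toward 1."""
    scores = weights @ (2 * votes - 1)  # map votes to {-1, +1} before weighting
    return (scores >= 0).astype(int)

n_experts, n_items = 15, 10_000
accuracies = rng.uniform(0.55, 0.9, size=n_experts)
truth = rng.integers(0, 2, size=n_items)
votes = simulate_experts(n_experts, n_items, accuracies, truth)

# Known result for conditionally independent experts:
# optimal weights are the log-odds of each expert's accuracy, w_i = log(p_i / (1 - p_i)).
log_odds = np.log(accuracies / (1 - accuracies))
optimal = weighted_majority(votes, log_odds)

# A deliberately sub-optimal mechanism: a single unweighted majority vote.
uniform = weighted_majority(votes, np.ones(n_experts))

# An illustrative "ensemble of judges" (assumed grouping, not the paper's construction):
# each judge takes an unweighted majority over a random subset of experts,
# and the judges' outputs are then majority-voted.
n_judges, subset_size = 7, 5
judge_outputs = np.stack([
    weighted_majority(votes[rng.choice(n_experts, subset_size, replace=False)],
                      np.ones(subset_size))
    for _ in range(n_judges)
])
ensemble = weighted_majority(judge_outputs, np.ones(n_judges))

for name, pred in [("optimal weighting", optimal),
                   ("single unweighted judge", uniform),
                   ("ensemble of judges", ensemble)]:
    print(f"{name}: accuracy = {(pred == truth).mean():.3f}")
```

Under these assumptions, the ensemble of sub-optimal judges typically lands between the single unweighted vote and the optimally weighted aggregator, which is the kind of gap the paper's theoretical and empirical results address.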