Paper Title
Dynamic Pricing for Client Recruitment in Federated Learning
Paper Authors
Paper Abstract
Though federated learning (FL) preserves clients' data privacy well, many clients are still reluctant to join FL given the communication cost and energy consumption on their mobile devices. It is important to design pricing compensations that motivate enough clients to join FL and distributively train the global model. Prior pricing mechanisms for FL are static and cannot adapt to clients' random arrival patterns over time. We propose a new dynamic pricing solution in closed form, derived by constructing the Hamiltonian function, that optimally balances the client recruitment time and the model training time without knowing clients' actual arrivals or training costs. During the client recruitment phase, we offer time-dependent monetary rewards per client arrival to trade off the total payment against the FL model's accuracy loss. The reward gradually increases as we approach the recruitment deadline or as data aging grows, and we also extend the deadline if the clients' training time per iteration becomes shorter. Further, we extend our model to heterogeneous client types that differ in training data size and training time per iteration. We successfully extend our dynamic pricing solution and develop an optimal algorithm of linear complexity that monotonically selects client types for FL. Finally, we show the robustness of our solution against estimation errors in clients' data sizes and run numerical experiments to validate our conclusions.
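To make the recruitment idea concrete, the following is a minimal toy sketch (not the paper's actual closed-form solution): a hypothetical reward schedule that rises linearly toward the recruitment deadline, and a Poisson-arrival simulation in which each arriving client has a private training cost and joins only if the posted reward covers it. The functional form of `reward`, the parameters `base` and `alpha`, and the cost distribution are all illustrative assumptions.

```python
import random


def reward(t, deadline, base=1.0, alpha=2.0):
    """Illustrative time-dependent reward that grows as t approaches the
    deadline. `base` and `alpha` are hypothetical tuning parameters, not
    quantities derived in the paper."""
    return base * (1.0 + alpha * t / deadline)


def simulate_recruitment(deadline=10.0, arrival_rate=1.5, target=5, seed=0):
    """Toy recruitment phase: clients arrive as a Poisson process; each has a
    private cost drawn uniformly at random and accepts the posted reward only
    if it covers that cost. Returns (clients recruited, total payment)."""
    rng = random.Random(seed)
    t, recruited, total_paid = 0.0, 0, 0.0
    while t < deadline and recruited < target:
        t += rng.expovariate(arrival_rate)   # time until next client arrival
        if t >= deadline:
            break
        cost = rng.uniform(0.5, 3.0)         # client's private training cost
        r = reward(t, deadline)
        if r >= cost:                        # client accepts the posted price
            recruited += 1
            total_paid += r
    return recruited, total_paid
```

Posting a low reward early and raising it toward the deadline is what lets the mechanism trade off total payment against the risk of recruiting too few clients before time runs out, which is the tension the abstract describes.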