Korean Title: Federated Distillation with Adaptive Stochastic Optimization for On-Device Learning
English Title: Federated Distillation with Adaptive Stochastic Optimization for On-Device Learning
Authors: Huy Q. Le, Minh N. H. Nguyen, Choong Seon Hong
Citation: Vol. 48, No. 2, pp. 608-610 (Dec. 2021)

English Abstract:
Federated learning is a distributed machine learning technique in which multiple devices collaborate to build a powerful model without exchanging raw data. One of its most significant challenges is handling the statistical heterogeneity caused by the non-i.i.d. data of local devices. It also suffers high communication costs when the devices exchange large model parameters. To resolve these bottlenecks, we proposed federated learning with a distillation mechanism in previous work. That work, however, faced scalability limitations, as it was deployed on only a small number of devices. In this paper, we propose FedABD: adaptive stochastic optimization for federated learning with bi-level distillation across a large number of devices. Experiments on two federated datasets show that our proposed scheme achieves significant speedups and better per-device personalized and generalized performance than other baselines while retaining the communication-cost advantages.
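The abstract names the ingredients (federated learning, a distillation mechanism to cut communication cost, robustness to non-i.i.d. local data) but does not spell out the FedABD algorithm itself. The sketch below illustrates only the generic federated-distillation idea those ingredients rest on: clients exchange per-sample logits on a shared public set instead of full model parameters. It is a minimal toy under stated assumptions, not the paper's method; the `Client` class, `fed_distill_round`, the L2 logit matching, and the Gaussian-blob data are all hypothetical choices, and NumPy is assumed.

```python
import numpy as np

# Hypothetical toy setup, NOT the paper's FedABD algorithm.
rng = np.random.default_rng(0)
n_classes, dim = 3, 2
centers = rng.normal(size=(n_classes, dim)) * 3.0

def make_data(n, home_class=None):
    """Gaussian-blob data; a home_class skews labels to mimic non-i.i.d. clients."""
    y = rng.integers(n_classes, size=n)
    if home_class is not None:
        y = np.where(rng.random(n) < 0.7, home_class, y)
    x = centers[y] + rng.normal(size=(n, dim))
    return x, y

class Client:
    """Linear softmax classifier trained on private, skewed local data."""
    def __init__(self, x, y, lr=0.1):
        self.x, self.y, self.lr = x, y, lr
        self.w = np.zeros((x.shape[1], n_classes))

    def logits(self, x):
        return x @ self.w

    def local_step(self):
        # One gradient step of softmax cross-entropy on private data.
        z = self.logits(self.x)
        z -= z.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(self.y)), self.y] -= 1.0
        self.w -= self.lr * self.x.T @ p / len(self.y)

    def distill_step(self, x_pub, teacher_logits):
        # Pull local logits toward the averaged "teacher" logits on the
        # shared public set (plain L2 matching here; real schemes often
        # use temperature-scaled KL divergence instead).
        diff = self.logits(x_pub) - teacher_logits
        self.w -= self.lr * x_pub.T @ diff / len(x_pub)

def fed_distill_round(clients, x_pub):
    for c in clients:
        c.local_step()
    # Communication: per-sample logits only, whose size is independent
    # of the model's parameter count.
    avg_logits = np.mean([c.logits(x_pub) for c in clients], axis=0)
    for c in clients:
        c.distill_step(x_pub, avg_logits)

# Toy run: three skewed clients plus a shared (unlabeled) public set.
clients = [Client(*make_data(60, home_class=k)) for k in range(n_classes)]
x_pub, _ = make_data(200)
for _ in range(100):
    fed_distill_round(clients, x_pub)
x_test, y_test = make_data(500)
acc = np.mean(clients[0].logits(x_test).argmax(axis=1) == y_test)
print(f"client-0 test accuracy: {acc:.2f}")
```

In this sketch the per-round payload is the logit matrix on the public set, so communication scales with the public-set size rather than the model's parameter count, which is the property the abstract's communication-cost claim relies on.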