Conference Proceedings


2016 Korea Computer Congress (KIISE)


Title: Implementation of Large Scale Deep Neural Networks in Cluster Computing
Authors: Akhmedov Khumoyun, Yun Cui, Hanku Lee, Myoungjin Kim
Citation: Vol. 43, No. 1, pp. 1541-1543 (June 2016)
Abstract:
Deep Learning models can efficiently infer complex relationships from huge amounts of data and can be used for image classification, pattern recognition, and online marketing problems. However, the high degree of complexity in deep networks makes them computationally expensive to train. To overcome this issue, most DNNs have recently been trained on GPUs, and their use has led to improvements that made the training of modestly sized models practical. However, GPUs have limits: most can hold only a relatively small amount of data in memory, which makes them hard to leverage for large models. A good alternative to GPUs is a cluster of computers, i.e., a collection of commodity servers. Clusters have advantages over GPUs: they are relatively inexpensive to construct and can be scaled out to thousands of machines, which makes them a very good environment for training large-scale DNN models. In this paper, we propose a naïve implementation of a DNN training framework designed to run on cluster computers using the Akka framework. The implementation is mostly based on the Downpour SGD algorithm of Google's DistBelief framework.
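The core idea behind Downpour SGD, as described in the abstract, is that many model replicas each train on their own data shard and asynchronously exchange parameters and gradients with a shared parameter server, with no synchronization barrier between replicas. The sketch below illustrates that protocol in plain Python on a toy least-squares problem; the names (`ParameterServer`, `pull`, `push`) and the single-process round-robin scheduling are illustrative assumptions, not the paper's Akka-based implementation.

```python
# Illustrative sketch of the Downpour SGD protocol (hypothetical names,
# not the paper's Akka implementation): model replicas train on shards
# and asynchronously push gradients to a shared parameter server.
import random

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.lr = lr

    def pull(self):
        # a replica fetches the current (possibly stale) parameters
        return list(self.w)

    def push(self, grad):
        # gradients are applied as they arrive; no barrier across replicas
        for i, g in enumerate(grad):
            self.w[i] -= self.lr * g

def gradient(w, shard):
    # mean-squared-error gradient for a linear model on this replica's shard
    g = [0.0] * len(w)
    for x, y in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += 2 * err * xi / len(shard)
    return g

# toy noiseless data: y = 3*x0 - 2*x1, partitioned into one shard per replica
random.seed(0)
xs = [[random.random(), random.random()] for _ in range(200)]
data = [(x, 3 * x[0] - 2 * x[1]) for x in xs]
shards = [data[i::4] for i in range(4)]  # 4 model replicas

ps = ParameterServer(dim=2, lr=0.1)
for step in range(500):
    replica = step % 4        # replicas proceed independently of one another
    w = ps.pull()             # pull parameters, compute locally, push gradient
    ps.push(gradient(w, shards[replica]))

print([round(wi, 2) for wi in ps.w])  # approaches [3.0, -2.0]
```

Because each replica computes gradients against parameters that other replicas may have already updated, the method trades gradient staleness for throughput; on this noiseless toy problem every shard shares the same minimizer, so the asynchronous updates still converge.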
Keywords: