Journal of KIISE (정보과학회논문지)


Korean Title (한글제목): GPU 클러스터 기반 대용량 온톨로지 추론
English Title: Scalable Ontology Reasoning Using GPU Cluster Approach
Authors (저자): 홍진영 (JinYung Hong), 전명중 (MyungJoong Jeon), 박영택 (YoungTack Park)
Citation: Vol. 43, No. 1, pp. 61-70 (Jan. 2016)
Korean Abstract (한글내용):
In recent years, large-scale ontology reasoning techniques that can rapidly infer new knowledge from existing knowledge have been in demand for a variety of semantic services. Following this trend, ontology inference engines based on the Hadoop and Spark frameworks, which exploit large clusters, have been studied. In addition, parallel programming with GPGPUs, which have far more cores than conventional CPUs, has been applied to ontology reasoning. Combining the advantages of these two approaches, this paper proposes a method that distributes large-scale RDFS ontology data through Spark, an in-memory framework, and reasons over the distributed data at high speed using GPGPUs. Ontology reasoning with GPGPUs can be performed faster and at lower cost than conventional reasoning methods, and the load imposed by the large ontology data can be spread across the nodes of the Spark cluster. To evaluate the proposed inference engine, reasoning speed was measured on LUBM10, 50, 100, and 120; on the largest dataset, LUBM120 (about 1,700,000 triples, 2.1 GB), the engine ran 7 times faster than an in-memory (Spark) inference engine.
English Abstract (영문내용):
In recent years, there has been a growing need for large-scale ontology reasoning techniques that can infer new knowledge from existing knowledge at high speed in order to support a variety of semantic services. With recent advances in distributed computing, ontology inference engines built on the Hadoop or Spark frameworks over large clusters have been widely studied. Parallel programming techniques using GPGPUs, which provide many more cores than CPUs, are also used for ontology inference. In this paper, by combining the advantages of both techniques, we propose a new method that distributes large RDFS ontology data with the Spark in-memory framework and reasons over the distributed data at high speed using GPGPUs. With GPGPUs, ontology reasoning over high-volume data can be performed at lower cost and with higher efficiency than conventional inference methods. In addition, the Spark cluster reduces the data workload on each node. To evaluate our approach, we used LUBM datasets ranging from LUBM10 to LUBM120. Our experimental results show that the proposed reasoning engine performs 7 times faster than a conventional Spark in-memory inference engine.
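As a rough illustration of the pipeline both abstracts describe, the sketch below (a minimal sketch under stated assumptions, not the authors' engine: the object name, integer IDs, and the single-rule scope are all illustrative) partitions dictionary-encoded RDFS triples across a Spark cluster and applies one entailment rule, rdfs9 (type propagation through rdfs:subClassOf), to each partition. The native GPGPU kernel is stubbed with a CPU fallback so the sketch runs anywhere; a real engine would iterate the full RDFS rule set to a fixed point.

// Hypothetical sketch, not the paper's engine: all identifiers and IDs are illustrative.
import org.apache.spark.sql.SparkSession

object RdfsGpuReasonerSketch {
  // Triples are dictionary-encoded as integers, as is common in large-scale RDFS reasoners.
  type Triple = (Int, Int, Int)            // (subject, predicate, object)
  val RDF_TYPE = 0                         // assumed dictionary ID for rdf:type

  // Stand-in for the native GPGPU kernel (e.g. invoked over JNI in a real engine).
  // It applies rule rdfs9: (x rdf:type C), (C rdfs:subClassOf D) => (x rdf:type D).
  // A CPU fallback keeps the sketch runnable anywhere.
  def gpuRdfs9(part: Iterator[Triple], subClassOf: Map[Int, Set[Int]]): Iterator[Triple] =
    part.flatMap {
      case (s, RDF_TYPE, c) =>
        subClassOf.getOrElse(c, Set.empty).iterator.map(d => (s, RDF_TYPE, d))
      case _ => Iterator.empty
    }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("rdfs-gpu-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // The schema (TBox) is small, so its transitive subClassOf closure is broadcast
    // to every node, while the large instance data (ABox) is partitioned.
    val subClassOf = sc.broadcast(Map(10 -> Set(11, 12), 11 -> Set(12))) // toy closure
    val abox = sc.parallelize(Seq((1, RDF_TYPE, 10), (2, RDF_TYPE, 11)), numSlices = 4)

    // Each partition goes to the (stubbed) GPU kernel; duplicates are removed cluster-wide.
    // A full reasoner would repeat this step until no new triples appear (a fixed point).
    val inferred = abox.mapPartitions(gpuRdfs9(_, subClassOf.value)).distinct()
    inferred.collect().foreach(println)
    spark.stop()
  }
}

In the paper's setting, the per-partition step would presumably copy the encoded triple arrays to GPU memory and evaluate the rule as a GPGPU kernel; the Spark layer contributes the partitioning and the cluster-wide deduplication of inferred triples.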
Keywords (키워드): scalable RDFS reasoning (대용량 RDFS 추론), GPGPU, Spark, LUBM