-
Publication Number: US20230313940A1
Publication Date: 2023-10-05
Application Number: US18024332
Application Date: 2020-09-03
Applicant: LG ELECTRONICS INC.
Inventor: Yunjoo KIM , Jemin WOO , Juyoung CHOI , Chulyoon JUNG , Jaewoo KIM , Jusik HEO
CPC classification number: F16M11/425 , F16M11/046 , F16M11/18 , F16M11/2014 , F16M11/2092 , F16M11/26 , F16M13/022
Abstract: Disclosed is a display device. The display device of the present disclosure includes: an elongated rail; an elongated moving wall which extends in a direction intersecting the rail and carries a display panel; and a rod positioned between the rail and the moving wall. One end of the rod is coupled to the rail so as to be movable along a length direction of the rail, and the other end of the rod is fixed to the moving wall, so that the moving wall is movable in the length direction of the rail.
-
Publication Number: US20210319311A1
Publication Date: 2021-10-14
Application Number: US16924951
Application Date: 2020-07-09
Applicant: LG ELECTRONICS INC.
Inventor: Dongyeon SHIN , Sungmin PARK , Jemin WOO , Kiyoung LEE
Abstract: The present disclosure discloses an artificial intelligence apparatus including an input interface configured to obtain input data, a sensing interface configured to obtain environment information, and one or more processors configured to classify an object by inputting the input data obtained from the input interface to an artificial intelligence model. The artificial intelligence model uses a first learning model and a second learning model which is connected with the first learning model and includes a plurality of output layers. Weights are assigned to the respective result values output by the plurality of output layers, and the weighted result values are combined to derive the final result.
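The weighted combination of per-output-layer results described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name `combine_outputs`, the example layer outputs, and the specific weight values are all assumptions for demonstration only.

```python
import numpy as np

def combine_outputs(outputs, weights):
    """Combine result values from multiple output layers.

    Each output layer's result vector is scaled by its assigned
    weight, and the weighted results are summed to derive the
    final result.
    """
    assert len(outputs) == len(weights)
    final = np.zeros(len(outputs[0]), dtype=float)
    for out, w in zip(outputs, weights):
        final += w * np.asarray(out, dtype=float)
    return final

# Hypothetical example: two output layers scoring three classes.
layer_a = [0.7, 0.2, 0.1]
layer_b = [0.5, 0.3, 0.2]
result = combine_outputs([layer_a, layer_b], weights=[0.6, 0.4])
# result ≈ [0.62, 0.24, 0.14]
```

How the weights themselves are assigned (e.g. learned or fixed) is not specified in the abstract; here they are simply given as inputs.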
-
Publication Number: US20200250509A1
Publication Date: 2020-08-06
Application Number: US16777729
Application Date: 2020-01-30
Applicant: LG ELECTRONICS INC.
Inventor: Byoungjoo LEE , Jemin WOO , Jinjong LEE , Jungsig JUN
Abstract: The present disclosure relates to an artificial intelligence chip for processing computations for machine learning models, providing a compute node and a method of processing a computational model using a plurality of compute nodes in parallel. In some embodiments, the compute node comprises: a communication interface configured to communicate with one or more other compute nodes; a memory configured to store shared data that is shared with the one or more other compute nodes; and a processor configured to: determine an expected computational load for processing a computational model for input data; obtain a contributable computational load of the compute node and the one or more other compute nodes; and select a master node to distribute the determined expected computational load based on the obtained contributable computational load. Consequently, learning and inference can be performed efficiently on-device.
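One plausible reading of the master-node selection and load distribution described in the abstract can be sketched as below. Everything here is an assumption for illustration: the node dictionary schema, the rule of picking the node with the largest contributable load as master, and the proportional split are not specified by the abstract.

```python
def select_master(nodes):
    """Select the master node: here, assumed to be the node with the
    largest contributable computational load."""
    return max(nodes, key=lambda n: n["contributable_load"])

def distribute(expected_load, nodes):
    """Split the expected computational load across nodes in
    proportion to each node's contributable computational load."""
    total = sum(n["contributable_load"] for n in nodes)
    return {n["id"]: expected_load * n["contributable_load"] / total
            for n in nodes}

# Hypothetical cluster of three compute nodes.
nodes = [
    {"id": "A", "contributable_load": 30},
    {"id": "B", "contributable_load": 50},
    {"id": "C", "contributable_load": 20},
]
master = select_master(nodes)    # node "B" has the largest load
shares = distribute(100, nodes)  # {"A": 30.0, "B": 50.0, "C": 20.0}
```

In this sketch the master would then use `shares` to assign work over the communication interface; the actual selection criterion in the patent may differ.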
-