Abstract:
Disclosed is a vehicle user interface apparatus including a display unit, an interface, and a processor configured to receive a vehicle external image via the interface, specify a first area corresponding to a preset Point of Interest (POI) in the vehicle external image, control the display unit to display an augmented reality graphic object on the vehicle external image to indicate the first area, and control the display unit to display the graphic object along a travel lane so as to point toward the first area.
Abstract:
The present invention relates to a head-up display for a vehicle, and a control method thereof, configured to change the display positions of a plurality of virtual images displayed through a windshield of the vehicle or the like to implement augmented reality. A head-up display for a vehicle according to an embodiment of the present disclosure may include: a mirror unit comprising a first mirror for reflecting first and second image lights toward the windshield of the vehicle; a display layer located at the windshield of the vehicle to display a first virtual image corresponding to the first image light in a first region and a second virtual image corresponding to the second image light in a second region; and a controller configured to change an inclination of the first mirror to change the display positions of the first and second virtual images.
Abstract:
A user interface apparatus for a vehicle includes: an interface unit; a display unit configured to project an augmented reality (AR) graphic object onto a screen; at least one processor; and a computer-readable medium coupled to the at least one processor having stored thereon instructions which, when executed by the at least one processor, cause the at least one processor to perform operations including: acquiring, through the interface unit, front view image information and vehicle motion information; based on the front view image information, generating the AR graphic object; and based on the vehicle motion information, warping the AR graphic object.
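A minimal sketch of the warping step, assuming a simple pinhole model with a hypothetical focal length and principal point: it only illustrates how pitch and roll from the vehicle motion information could shift and rotate an AR graphic object's screen position, and is not the disclosed implementation.

```python
# Sketch only (not the patented implementation): compensating an AR overlay's
# screen position for vehicle pitch and roll so it stays registered with the
# front-view image. Camera parameters and the small-angle pinhole model are
# illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class ARGraphicObject:
    x: float  # horizontal screen coordinate in pixels
    y: float  # vertical screen coordinate in pixels

def warp_for_vehicle_motion(obj: ARGraphicObject,
                            pitch_rad: float,
                            roll_rad: float,
                            focal_px: float = 1000.0,
                            cx: float = 960.0,
                            cy: float = 540.0) -> ARGraphicObject:
    """Return a new AR object position compensated for vehicle pitch/roll.

    Pitching moves the projected point vertically by roughly f * tan(pitch);
    roll rotates the image plane about the principal point (cx, cy).
    """
    # Vertical shift caused by pitching (nose up -> scene appears to move down).
    dy = focal_px * math.tan(pitch_rad)

    # Rotate the point about the principal point to undo vehicle roll.
    dx0, dy0 = obj.x - cx, obj.y - cy
    cos_r, sin_r = math.cos(-roll_rad), math.sin(-roll_rad)
    x_rot = cx + cos_r * dx0 - sin_r * dy0
    y_rot = cy + sin_r * dx0 + cos_r * dy0

    return ARGraphicObject(x=x_rot, y=y_rot - dy)

# Example: a lane marker anchored at (960, 700) while the vehicle pitches up 1 degree.
marker = ARGraphicObject(x=960.0, y=700.0)
print(warp_for_vehicle_motion(marker, pitch_rad=math.radians(1.0), roll_rad=0.0))
```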
Abstract:
The present invention relates to a method and apparatus for calculating a location of an electronic device. The method comprises: receiving a common packet from a host device, the common packet including at least one of time information or frequency-related information through which a data packet is transmitted; receiving the data packet from the host device based on the information included in the common packet, the data packet including at least one of location-related information or antenna-related information of the host device; obtaining angle information indicating a positional relation with the host device using at least one of the location-related information or the antenna-related information included in the received data packet; and calculating the location of the electronic device based on the angle information.
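As a rough illustration of the final calculation step, the sketch below triangulates a 2-D position by intersecting two angle-of-arrival bearings measured at known host positions. The two-anchor geometry, coordinate convention, and function names are assumptions for illustration, not the claimed single-host packet exchange.

```python
# Sketch only: estimating a device location from angle information by intersecting
# two bearing rays measured at two known host positions.
import math
from typing import Tuple

def locate_from_angles(host_a: Tuple[float, float], angle_a_rad: float,
                       host_b: Tuple[float, float], angle_b_rad: float) -> Tuple[float, float]:
    """Intersect two bearing rays (angles measured from the +x axis) to estimate a position."""
    ax, ay = host_a
    bx, by = host_b
    # Direction vectors of the two bearings.
    dax, day = math.cos(angle_a_rad), math.sin(angle_a_rad)
    dbx, dby = math.cos(angle_b_rad), math.sin(angle_b_rad)
    # Solve ax + t*dax = bx + s*dbx and ay + t*day = by + s*dby for t.
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        raise ValueError("Bearings are parallel; position is not observable.")
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return ax + t * dax, ay + t * day

# Example: hosts at (0, 0) and (10, 0) with bearings of 45 and 135 degrees meet at (5, 5).
print(locate_from_angles((0.0, 0.0), math.radians(45.0),
                         (10.0, 0.0), math.radians(135.0)))
```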
Abstract:
Disclosed is a user interface apparatus for a vehicle, including: an interface unit; a display unit configured to implement multiple display layers, each having a different virtual distance; and a processor configured to receive driving situation information of the vehicle through the interface unit and control the display unit to vary the virtual distance of each of the multiple display layers.
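A hypothetical sketch of varying virtual distance with the driving situation, using vehicle speed as the driving-situation input; the layer names, distance ranges, and linear mapping are illustrative assumptions rather than the disclosed control logic.

```python
# Sketch only (hypothetical mapping): pushing the virtual image of each display
# layer farther out as vehicle speed increases, so graphics sit nearer the
# driver's natural gaze distance.
from typing import Dict

def virtual_distances_for_speed(speed_kph: float) -> Dict[str, float]:
    """Return an assumed virtual distance in meters for each display layer."""
    speed = max(0.0, min(speed_kph, 150.0))   # clamp to a plausible range
    scale = speed / 150.0
    return {
        "near_layer": 3.0 + 2.0 * scale,     # e.g. speed / warning icons: 3-5 m
        "mid_layer": 7.0 + 8.0 * scale,      # e.g. turn-by-turn arrows: 7-15 m
        "far_layer": 15.0 + 35.0 * scale,    # e.g. AR lane guidance: 15-50 m
    }

print(virtual_distances_for_speed(0.0))
print(virtual_distances_for_speed(120.0))
```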
Abstract:
A vehicle control device provided in a vehicle includes a communication unit, a sensing unit, a display unit, and a processor configured to output, on the display unit, driving-related information of an adjacent vehicle determined using at least one of the communication unit and the sensing unit, when a preset condition is satisfied.
Abstract:
An image decoding method according to the present document includes obtaining motion prediction information for a current block from a bitstream, generating an affine motion vector predictor (MVP) candidate list for the current block, deriving control point motion vector predictors (CPMVPs) for control points (CPs) of the current block based on the affine MVP candidate list, deriving control point motion vector differences (CPMVDs) for the CPs of the current block based on the motion prediction information, deriving control point motion vectors (CPMVs) for the CPs of the current block based on the CPMVPs and the CPMVDs, and deriving prediction samples for the current block based on the CPMVs.
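The CPMV derivation step reduces to adding each signalled difference to its predictor. The sketch below shows that addition for a two-control-point (4-parameter affine) case; the list-of-tuples representation and 1/16-pel units are assumptions for illustration, not the normative decoding process.

```python
# Sketch only: recovering control point motion vectors (CPMVs) by adding the
# signalled differences (CPMVDs) to the predictors (CPMVPs) taken from the
# affine MVP candidate list.
from typing import List, Tuple

MV = Tuple[int, int]  # motion vector as (x, y), e.g. in 1/16-pel units

def derive_cpmvs(cpmvps: List[MV], cpmvds: List[MV]) -> List[MV]:
    """CPMV_i = CPMVP_i + CPMVD_i for each control point i (2 or 3 CPs)."""
    assert len(cpmvps) == len(cpmvds)
    return [(p[0] + d[0], p[1] + d[1]) for p, d in zip(cpmvps, cpmvds)]

# Example with two control points (top-left, top-right).
cpmvps = [(16, -4), (20, -4)]
cpmvds = [(2, 1), (-1, 0)]
print(derive_cpmvs(cpmvps, cpmvds))   # [(18, -3), (19, -4)]
```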
Abstract:
The present disclosure relates to a method by which a decoding apparatus performs video coding, comprising the steps of: generating a motion information candidate list for a current block; selecting one candidate from among those included in the motion information candidate list; deriving control point motion vectors (CPMVs) of the current block based on the selected candidate; deriving sub-block-unit or sample-unit motion vectors of the current block based on the CPMVs; deriving a predicted block based on the motion vectors; and reconstructing a current picture based on the predicted block, wherein the motion information candidate list includes an inherited affine candidate, the inherited affine candidate is derived based on candidate blocks coded by affine prediction, from among spatial neighboring blocks of the current block, and the inherited affine candidate is generated up to a pre-defined maximum number.
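Once the CPMVs are known, the sub-block-unit motion vectors follow from the affine motion model. The sketch below evaluates the standard 4-parameter model at each 4x4 sub-block centre in floating point; the normative process uses integer arithmetic and additional clamping, so this is only an illustrative approximation.

```python
# Sketch only: deriving per-sub-block motion vectors from two control point
# motion vectors with the 4-parameter affine model.
from typing import List, Tuple

MV = Tuple[float, float]

def affine_subblock_mvs(cpmv0: MV, cpmv1: MV, width: int, height: int,
                        sub: int = 4) -> List[List[MV]]:
    """cpmv0 / cpmv1 are the top-left / top-right control point MVs of a width x height block."""
    a = (cpmv1[0] - cpmv0[0]) / width   # horizontal gradient of mv_x
    b = (cpmv1[1] - cpmv0[1]) / width   # horizontal gradient of mv_y
    mvs = []
    for y in range(0, height, sub):
        row = []
        for x in range(0, width, sub):
            cx, cy = x + sub / 2.0, y + sub / 2.0   # sub-block centre
            mv_x = cpmv0[0] + a * cx - b * cy
            mv_y = cpmv0[1] + b * cx + a * cy
            row.append((mv_x, mv_y))
        mvs.append(row)
    return mvs

# Example: a 16x16 block whose content rotates slightly between frames.
for row in affine_subblock_mvs(cpmv0=(1.0, 0.0), cpmv1=(1.0, 0.5), width=16, height=16):
    print(row)
```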
Abstract:
Disclosed is a user interface apparatus for a vehicle, including: a first camera configured to capture a forward view image including an object; an interface unit configured to receive information about the object from a second camera; a display; and a processor configured to convert the information about the object from the coordinate system of the second camera into the coordinate system of the first camera, generate an augmented reality (AR) graphic object corresponding to the object, and control the display so as to overlay the AR graphic object on the forward view image.
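A minimal sketch of the coordinate conversion and overlay placement, assuming hypothetical extrinsics between the two cameras and hypothetical intrinsics for the first camera: it shows a rigid transform followed by a pinhole projection, not the apparatus's actual calibration or rendering pipeline.

```python
# Sketch only: converting an object position reported in the second camera's
# coordinate system into the first (front-view) camera's coordinate system with
# a known rigid transform, then projecting it to pixel coordinates so the AR
# graphic object can be overlaid on the forward view image.
import numpy as np

# Assumed extrinsics: rotation and translation from camera-2 frame to camera-1 frame.
R_21 = np.eye(3)                     # identical orientation for simplicity
t_21 = np.array([0.2, 0.0, 0.0])     # camera 2 assumed mounted 20 cm to the side

# Assumed intrinsics of the first (front-view) camera.
K_1 = np.array([[1000.0,    0.0, 960.0],
                [   0.0, 1000.0, 540.0],
                [   0.0,    0.0,   1.0]])

def to_first_camera(p_cam2: np.ndarray) -> np.ndarray:
    """Rigidly transform a 3-D point from camera-2 coordinates to camera-1 coordinates."""
    return R_21 @ p_cam2 + t_21

def project(p_cam1: np.ndarray) -> np.ndarray:
    """Pinhole projection of a camera-1 point to pixel coordinates (u, v)."""
    uvw = K_1 @ p_cam1
    return uvw[:2] / uvw[2]

# Example: an object 20 m ahead and 1 m to the left of camera 2.
obj_cam2 = np.array([-1.0, 0.0, 20.0])
print(project(to_first_camera(obj_cam2)))   # pixel location for the AR overlay
```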