1. GENERATION OF GHOST-FREE HIGH DYNAMIC RANGE IMAGES
    Invention Application (Pending, Published)

    Publication Number: WO2014172060A1

    Publication Date: 2014-10-23

    Application Number: PCT/US2014/031355

    Application Date: 2014-03-20

    Abstract: Apparatuses and methods for reading a set of images and merging them into a high dynamic range (HDR) output image are described. Each image has a respective HDR weight and a respective ghost-free weight. The images are merged by taking a weighted average of the set of input images with the ghost-free weights. A difference image is determined from the difference between each pixel within the HDR output image and the corresponding pixel within a reference image used to create the HDR output image.

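    Below is a minimal sketch of the merge the abstract describes, assuming floating-point images normalized to [0, 1] and already radiometrically aligned. The hat-shaped HDR weight, the Gaussian consistency weight, and the sigma value are illustrative assumptions, not the patent's exact formulation.

    import numpy as np

    def hdr_weight(img):
        """Hat function: trust mid-range pixels, distrust under/over-exposure."""
        return 1.0 - 2.0 * np.abs(img - 0.5)

    def ghost_free_weight(img, reference, sigma=0.1):
        """Down-weight pixels that disagree with the reference exposure."""
        return np.exp(-((img - reference) ** 2) / (2.0 * sigma ** 2))

    def merge_ghost_free(images, ref_index=0):
        """Merge exposures with a weighted average using the ghost-free weights."""
        reference = images[ref_index]
        num = np.zeros_like(reference)
        den = np.zeros_like(reference)
        for img in images:
            w = hdr_weight(img) * ghost_free_weight(img, reference)
            num += w * img
            den += w
        hdr = num / np.maximum(den, 1e-6)
        # Difference image: HDR output minus the reference used to create it.
        difference = hdr - reference
        return hdr, difference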

2. RENDERING AUGMENTED REALITY BASED ON FOREGROUND OBJECT
    Invention Application (Pending, Published)

    Publication Number: WO2014107261A1

    Publication Date: 2014-07-10

    Application Number: PCT/US2013/073510

    Application Date: 2013-12-06

    Abstract: A mobile device detects a movable foreground object in captured images, e.g., a series of video frames without depth information. The object may be one or more of the user's fingers. The object may be detected by warping either a captured image of a scene that includes the object or a reference image of the scene without the object so that the two images have the same view, and then comparing the captured image and the reference image after warping. A mask may be used to segment the object from the captured image. Pixels are detected in the extracted image of the object, and those pixels are used to detect a point of interest on the foreground object. The object may then be tracked in subsequent images. Augmentations may be rendered and interacted with, or temporal gestures may be detected and the desired actions performed accordingly.

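    A hedged sketch of the warp-and-compare step, assuming a planar scene so that a single homography H (estimated elsewhere) relates the two views; the threshold and morphology settings are illustrative choices.

    import cv2
    import numpy as np

    def detect_foreground(captured, reference, H, thresh=30):
        """Warp the object-free reference into the captured view, then
        compare the two images to obtain a foreground mask."""
        h, w = captured.shape[:2]
        warped_ref = cv2.warpPerspective(reference, H, (w, h))
        diff = cv2.absdiff(captured, warped_ref)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # Remove speckle so the mask segments the object cleanly.
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # Apply the mask to extract the foreground object from the capture.
        foreground = cv2.bitwise_and(captured, captured, mask=mask)
        return mask, foreground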

3. SENSOR API FRAMEWORK FOR CLOUD BASED APPLICATIONS
    Invention Application (Pending, Published)

    Publication Number: WO2013070420A1

    Publication Date: 2013-05-16

    Application Number: PCT/US2012/061142

    Application Date: 2012-10-19

    Abstract: An apparatus and method are presented for a framework that exposes an API (application programming interface) to web-based server applications on the internet or in the cloud. The API allows server applications to retrieve sensor data from a mobile device via a low-power sensor core processor on the device. This API eliminates the effort and cost associated with developing and promoting a new mobile device client application. The API framework includes APIs that web-based applications may use to fetch sensor data from one or more particular sensors on the mobile device.

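    The patent does not specify a concrete API, but a server-side endpoint in such a framework might look like the following sketch; Flask, the route path, and the JSON shape are all hypothetical illustration.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in for the channel to the device's low-power sensor core; a real
    # framework would proxy this request down to the mobile device.
    FAKE_SENSOR_CORE = {
        "accelerometer": {"x": 0.01, "y": -0.02, "z": 9.81},
        "gps": {"lat": 37.39, "lon": -121.96},
    }

    @app.route("/api/v1/devices/<device_id>/sensors/<sensor>")
    def get_sensor(device_id, sensor):
        """Let a web-based application fetch data from one named sensor."""
        reading = FAKE_SENSOR_CORE.get(sensor)
        if reading is None:
            return jsonify(error="unknown sensor"), 404
        return jsonify(device=device_id, sensor=sensor, reading=reading)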

4. AN ADAPTABLE FRAMEWORK FOR CLOUD ASSISTED AUGMENTED REALITY
    Invention Application (Pending, Published)

    Publication Number: WO2012040099A1

    Publication Date: 2012-03-29

    Application Number: PCT/US2011/052135

    Application Date: 2011-09-19

    CPC classification number: G06T7/246 G06K9/00671 G06T7/73 G06T2207/10004

    Abstract: A mobile platform efficiently processes sensor data, including image data, using distributed processing in which latency-sensitive operations are performed on the mobile platform, while latency-insensitive but computationally intensive operations are performed on a remote server. The mobile platform acquires sensor data, such as image data, and determines whether there is a trigger event to transmit the sensor data to the server. The trigger event may be a change in the sensor data relative to previously acquired sensor data, e.g., a scene change in an image. When a change is present, the sensor data may be transmitted to the server for processing. The server processes the sensor data and returns information related to the sensor data, such as identification of an object in an image or a reference image or model. The mobile platform may then perform reference-based tracking using the identified object or reference image or model.

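    A minimal sketch of the trigger logic: only transmit a frame when the scene has changed enough relative to the last frame sent to the server. The mean-absolute-difference metric and the threshold are illustrative assumptions.

    import numpy as np

    class SceneChangeTrigger:
        def __init__(self, threshold=12.0):
            self.threshold = threshold
            self.last_sent = None

        def should_send(self, frame):
            """True when the frame differs enough from the last one transmitted."""
            if self.last_sent is None:
                return True  # nothing sent yet, so always a trigger event
            change = np.mean(np.abs(frame.astype(np.float32)
                                    - self.last_sent.astype(np.float32)))
            return change > self.threshold

        def mark_sent(self, frame):
            """Record the frame that was transmitted to the server."""
            self.last_sent = frame.copy()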

7. METHOD AND APPARATUS FOR GENERATING AN ALL-IN-FOCUS IMAGE
    Invention Application (Pending, Published)

    Publication Number: WO2015031856A1

    Publication Date: 2015-03-05

    Application Number: PCT/US2014/053583

    Application Date: 2014-08-29

    Abstract: Techniques are described for generating an all-in-focus image with a capability to refocus. One example includes obtaining a first depth map associated with a plurality of captured images of a scene. The plurality of captured images may include images having different focal lengths. The method further includes obtaining a second depth map associated with the plurality of captured images, generating a composite image showing different portions of the scene in focus (based on the plurality of captured images and the first depth map), and generating a refocused image showing a selected portion of the scene in focus (based on the composite image and the second depth map).

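    A hedged sketch of the two-step pipeline for a grayscale focal stack: build the composite from the first depth map (read here as a per-pixel index of the sharpest image), then refocus using the second depth map. Treating the depth maps as stack indices is an illustrative assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def composite_all_in_focus(stack, depth_map1):
        """stack: (N, H, W) focal stack; depth_map1: (H, W) integer index
        of the image in which each pixel is sharpest."""
        rows, cols = np.indices(depth_map1.shape)
        return stack[depth_map1, rows, cols]

    def refocus(composite, depth_map2, selected_depth, blur_sigma=3.0):
        """Keep pixels near the selected depth sharp; blur the rest."""
        blurred = gaussian_filter(composite, sigma=blur_sigma)
        in_focus = np.abs(depth_map2 - selected_depth) <= 1
        return np.where(in_focus, composite, blurred)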

9. HEAD POSE ESTIMATION USING RGBD CAMERA
    Invention Application (Pending, Published)

    Publication Number: WO2012158361A1

    Publication Date: 2012-11-22

    Application Number: PCT/US2012/036362

    Application Date: 2012-05-03

    Abstract: A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose are determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided, along with corresponding depth data, to an Extended Kalman filter whose states include a rotation matrix and a translation vector associated with the reference pose for the head, as well as a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.

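    The patent estimates the pose with an Extended Kalman filter; as a simpler, self-contained illustration of the quantity being estimated, this sketch recovers the rotation matrix R and translation vector t that rigidly align depth-backprojected feature points in the current frame with the same points in the reference pose (the standard Kabsch solution, not the patent's filter).

    import numpy as np

    def rigid_pose(ref_pts, cur_pts):
        """ref_pts, cur_pts: (N, 3) corresponding 3D feature points from the
        RGBD camera. Returns R (3x3) and t (3,) with cur ~= R @ ref + t."""
        ref_c = ref_pts.mean(axis=0)
        cur_c = cur_pts.mean(axis=0)
        H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cur_c - R @ ref_c
        return R, t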

10. HEAD POSE ESTIMATION USING RGBD CAMERA
    Invention Publication

    Publication Number: EP3627445A1

    Publication Date: 2020-03-25

    Application Number: EP19209823.4

    Application Date: 2012-05-03

    Abstract: A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose are determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided, along with corresponding depth data, to an Extended Kalman filter whose states include a rotation matrix and a translation vector associated with the reference pose for the head, as well as a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
