PRIORITIZING OBJECTS FOR OBJECT RECOGNITION
    1.
    Invention Application

    Publication Number: WO2019046077A1

    Publication Date: 2019-03-07

    Application Number: PCT/US2018/047611

    Filing Date: 2018-08-22

    Abstract: Techniques and systems are provided for prioritizing objects for object recognition in one or more video frames. For example, a current video frame is obtained, and objects are detected in the current video frame. State information associated with the objects is determined. Priorities for the objects can also be determined. For example, a priority can be determined for an object based on state information associated with the object. Object recognition is performed for at least one object from the objects based on priorities determined for the at least one object. For instance, object recognition can be performed for objects having higher priorities before objects having lower priorities.
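
    The following is a minimal Python sketch of the priority-driven scheduling the abstract describes; the state fields (whether an object already has a recognition result, how stale that result is), the scoring rule, and all names are illustrative assumptions rather than the patent's specific criteria.

        from dataclasses import dataclass

        @dataclass
        class ObjectState:
            object_id: int
            recognized: bool = False            # has a recognition result been obtained yet?
            frames_since_recognition: int = 0   # staleness of the last recognition result

        def compute_priority(state: ObjectState) -> float:
            # Higher value -> run object recognition sooner (illustrative rule).
            base = 100.0 if not state.recognized else 0.0
            return base + state.frames_since_recognition

        def select_for_recognition(states, budget):
            # Pick up to `budget` objects this frame, highest priority first.
            ranked = sorted(states, key=compute_priority, reverse=True)
            return ranked[:budget]

        # Example: three tracked objects, two recognition slots available this frame.
        states = [ObjectState(1, recognized=True, frames_since_recognition=30),
                  ObjectState(2, recognized=False),
                  ObjectState(3, recognized=True, frames_since_recognition=5)]
        for s in select_for_recognition(states, budget=2):
            print("recognize object", s.object_id)   # object 2, then object 1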

    METHODS AND SYSTEMS OF UPDATING MOTION MODELS FOR OBJECT TRACKERS IN VIDEO ANALYTICS
    2.
    Invention Application (Under Examination, Published)

    Publication Number: WO2018031106A1

    Publication Date: 2018-02-15

    Application Number: PCT/US2017/035485

    Filing Date: 2017-06-01

    Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing context-aware object or blob tracker updates (e.g., by updating a motion model of a blob tracker). In some cases, to perform a context-aware blob tracker update, a blob tracker is associated with a first blob. The first blob includes pixels of at least a portion of one or more foreground objects in one or more video frames. A split of the first blob and a second blob in a current video frame can be detected, and a motion model of the blob tracker is reset in response to detecting the split of the first blob and the second blob. In some cases, a motion model of a blob tracker associated with a merged blob is updated to include a predicted location of the blob tracker in a next video frame. The motion model can be updated by using a previously predicted location of the blob tracker as the predicted location of the blob tracker in the next video frame in response to the blob tracker being associated with the merged blob. The previously predicted location of the blob tracker can be determined using a blob location of a blob from a previous video frame.
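
    Below is a minimal Python sketch of the context-aware update rules in the abstract, assuming a simple constant-velocity motion model; class, method, and field names are illustrative, and a real tracker might use a Kalman filter instead.

        class MotionModel:
            # Simple constant-velocity model (illustrative stand-in for e.g. a Kalman filter).
            def __init__(self, location):
                self.location = location        # (x, y) of the associated blob
                self.velocity = (0.0, 0.0)
                self.predicted = location       # predicted location in the next frame

            def update(self, new_location):
                self.velocity = (new_location[0] - self.location[0],
                                 new_location[1] - self.location[1])
                self.location = new_location
                self.predicted = (new_location[0] + self.velocity[0],
                                  new_location[1] + self.velocity[1])

        class BlobTracker:
            def __init__(self, blob_location):
                self.model = MotionModel(blob_location)

            def on_split(self, blob_location):
                # Split of the first and second blob detected: reset the motion model
                # so velocity accumulated while the blobs were merged does not corrupt
                # future predictions.
                self.model = MotionModel(blob_location)

            def on_merge(self):
                # Tracker associated with a merged blob: keep the previously predicted
                # location as the predicted location in the next frame instead of
                # updating the model from the merged blob's location.
                return self.model.predicted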

    METHODS AND SYSTEMS OF MAINTAINING LOST OBJECT TRACKERS IN VIDEO ANALYTICS
    3.
    Invention Application (Under Examination, Published)

    Publication Number: WO2018031105A1

    Publication Date: 2018-02-15

    Application Number: PCT/US2017/035483

    Filing Date: 2017-06-01

    Abstract: Techniques and systems are provided for maintaining lost blob trackers for one or more video frames. In some examples, one or more blob trackers maintained for a sequence of video frames are identified. The one or more blob trackers are associated with one or more blobs of the sequence of video frames. A transition of a blob tracker from a first type of tracker to a lost tracker is detected at a first video frame. For example, the blob tracker can be transitioned from the first type of tracker to the lost tracker when a blob with which the blob tracker was associated in a previous frame is not detected in the first video frame. A recovery duration is determined for the lost tracker at the first video frame. For one or more subsequent video frames obtained after the first video frame, the lost tracker is removed from the one or more blob trackers maintained for the sequence of video frames when a lost duration for the lost tracker is greater than the recovery duration. The blob tracker can be transitioned back to the first type of tracker if the lost tracker is associated with a blob in a subsequent video frame prior to expiration of the recovery duration. Trackers and associated blobs are output as identified blob tracker-blob pairs when the trackers are converted from new trackers to trackers of the first type.
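
    A minimal Python sketch of the lost-tracker bookkeeping described above follows; the fixed recovery duration and the names used are illustrative (the abstract allows the recovery duration to be determined per tracker at the frame where it becomes lost).

        NORMAL, LOST = "normal", "lost"

        class Tracker:
            def __init__(self, tracker_id, recovery_duration=30):
                self.tracker_id = tracker_id
                self.state = NORMAL
                self.recovery_duration = recovery_duration   # frames the tracker may stay lost
                self.lost_duration = 0

        def advance_frame(trackers, associated_ids):
            # `associated_ids` holds the ids of trackers that matched a blob in this frame.
            # Returns the trackers kept for the next frame.
            kept = []
            for t in trackers:
                if t.tracker_id in associated_ids:
                    # Re-associated with a blob before the recovery duration expired:
                    # transition back to the first (normal) tracker type.
                    t.state, t.lost_duration = NORMAL, 0
                    kept.append(t)
                else:
                    t.state = LOST
                    t.lost_duration += 1
                    if t.lost_duration <= t.recovery_duration:
                        kept.append(t)
                    # otherwise the lost duration exceeds the recovery duration and the
                    # lost tracker is removed from the maintained trackers
            return kept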

    METHODS AND SYSTEMS OF PERFORMING CONTENT-ADAPTIVE OBJECT TRACKING IN VIDEO ANALYTICS
    4.
    Invention Application (Under Examination, Published)

    Publication Number: WO2018031102A1

    Publication Date: 2018-02-15

    Application Number: PCT/US2017/035418

    Filing Date: 2017-06-01

    Abstract: Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive object or blob tracking. To perform the content-adaptive object tracking, a blob tracker is associated with a blob generated for a video frame. The blob includes pixels of at least a portion of a foreground object in a video frame. A size of the blob can be determined to be greater than a blob size threshold. The blob tracker can be converted to a normal tracker based on the size of the blob being greater than the size threshold. The associated blob tracker and blob are output as an identified blob tracker-blob pair when the blob tracker is converted to the normal tracker.
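
    The size-based promotion can be pictured with the short Python sketch below; the threshold value, dictionary layout, and tracker type names are illustrative assumptions.

        BLOB_SIZE_THRESHOLD = 400   # illustrative, e.g. minimum bounding-box area in pixels

        def maybe_promote(tracker, blob):
            # Convert the tracker to a normal tracker once the size of its associated
            # blob exceeds the threshold, and output the identified pair at that point.
            blob_size = blob["width"] * blob["height"]
            if tracker["type"] != "normal" and blob_size > BLOB_SIZE_THRESHOLD:
                tracker["type"] = "normal"
                return (tracker, blob)          # identified blob tracker-blob pair
            return None                         # nothing to output for this pair yet

        # Example
        tracker = {"id": 7, "type": "new"}
        blob = {"width": 25, "height": 20}      # area 500 > threshold
        print(maybe_promote(tracker, blob))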

    METHODS AND SYSTEMS OF CODING A PREDICTIVE RANDOM ACCESS PICTURE USING A BACKGROUND PICTURE
    5.
    Invention Application (Under Examination, Published)

    Publication Number: WO2017062373A1

    Publication Date: 2017-04-13

    Application Number: PCT/US2016/055360

    Filing Date: 2016-10-04

    Abstract: Techniques and systems are provided for encoding video data. For example, a method of encoding video data includes obtaining a background picture that is generated based on a plurality of pictures captured by an image sensor. The background picture is generated to include background portions identified in each of the captured pictures. The method further includes encoding, into a video bitstream, a group of pictures captured by the image sensor. The group of pictures includes at least one random access picture. Encoding the group of pictures includes encoding at least a portion of the at least one random access picture using inter-prediction based on the background picture.
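
    As a rough illustration only, the NumPy sketch below builds a background picture from several captured pictures (here with a per-pixel median, one of many possible choices) and forms a prediction residual for a random access picture; a real encoder would perform the inter-prediction block by block and code the residual into the bitstream.

        import numpy as np

        def build_background(pictures):
            # pictures: list of HxW grayscale frames captured by the image sensor.
            # The per-pixel median keeps the parts of the scene that stay constant.
            return np.median(np.stack(pictures), axis=0).astype(np.uint8)

        def predict_from_background(random_access_picture, background):
            # Inter-prediction of the random access picture from the background picture;
            # in an encoder this residual would be transformed, quantized and entropy coded.
            return random_access_picture.astype(np.int16) - background.astype(np.int16)

        def reconstruct(residual, background):
            return np.clip(background.astype(np.int16) + residual, 0, 255).astype(np.uint8)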

    MOTION INFORMATION DERIVATION MODE DETERMINATION IN VIDEO CODING
    6.
    Invention Application (Under Examination, Published)

    Publication Number: WO2016160609A1

    Publication Date: 2016-10-06

    Application Number: PCT/US2016/024334

    Filing Date: 2016-03-25

    Abstract: In an example, a method of decoding video data includes selecting a motion information derivation mode from a plurality of motion information derivation modes for determining motion information for a current block, where each motion information derivation mode of the plurality comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of the current block, and where the motion information indicates motion of the current block relative to reference video data. The method also includes determining the motion information for the current block using the selected motion information derivation mode. The method also includes decoding the current block using the determined motion information and without decoding syntax elements representative of the motion information.
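
    The mode selection can be sketched in Python as below; the candidate modes and toy cost functions are illustrative stand-ins for real derivation modes such as template matching or bilateral matching, and are not taken from the application.

        def derive_motion(modes, candidate_vectors):
            # `modes` maps a derivation-mode name to a cost function that evaluates a
            # candidate motion vector on reference data outside the current block
            # (e.g. a template of reconstructed neighbours), so no motion syntax
            # elements are needed in the bitstream.
            best = None
            for name, cost_fn in modes.items():
                for mv in candidate_vectors:
                    cost = cost_fn(mv)
                    if best is None or cost < best[0]:
                        best = (cost, name, mv)
            return best[1], best[2]   # selected mode and derived motion vector

        # Example with toy cost functions.
        modes = {"template_matching": lambda mv: abs(mv[0]) + abs(mv[1]),
                 "bilateral_matching": lambda mv: abs(mv[0] - 1) + abs(mv[1])}
        print(derive_motion(modes, [(0, 0), (1, 0), (0, 1)]))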

    OVERLAPPED MOTION COMPENSATION FOR VIDEO CODING
    7.
    Invention Application (Under Examination, Published)

    Publication Number: WO2016123068A1

    Publication Date: 2016-08-04

    Application Number: PCT/US2016/014857

    Filing Date: 2016-01-26

    Abstract: In an example, a method of decoding video data may include receiving a first block of video data. The first block of video data may be a sub-block of a prediction unit. The method may include receiving one or more blocks of video data that neighbor the first block of video data. The method may include determining motion information of at least one of the one or more blocks of video data that neighbor the first block of video data. The method may include decoding, using overlapped block motion compensation, the first block of video data based at least in part on the motion information of the at least one of the one or more blocks that neighbor the first block of video data.
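
    The blending step of overlapped block motion compensation can be sketched as below in NumPy; the fixed weight is illustrative and not the weighting of any particular codec.

        import numpy as np

        def obmc_blend(own_pred, neighbor_preds, neighbor_weight=0.25):
            # own_pred and each entry of neighbor_preds are HxW predictions of the same
            # sub-block, each motion-compensated with different motion information
            # (the sub-block's own, or that of a neighbouring block).
            blended = own_pred.astype(np.float32)
            for pred in neighbor_preds:
                blended = (1.0 - neighbor_weight) * blended + neighbor_weight * pred.astype(np.float32)
            return np.clip(np.round(blended), 0, 255).astype(np.uint8)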

    TRANSPORT STREAM FOR CARRIAGE OF VIDEO CODING EXTENSIONS
    8.
    Invention Application (Under Examination, Published)

    Publication Number: WO2016011237A1

    Publication Date: 2016-01-21

    Application Number: PCT/US2015/040721

    Filing Date: 2015-07-16

    Abstract: A video processing device may obtain, from a descriptor for a program comprising one or more elementary streams, a plurality of profile, tier, level (PTL) syntax element sets. The video processing device may obtain, from the descriptor, a plurality of operation point syntax element sets. For each respective operation point syntax element set of the plurality of operation point syntax element sets, the video processing device may determine, for each respective layer of the respective operation point specified by the respective operation point syntax element set, based on a respective syntax element in the respective operation point syntax element set, which of the PTL syntax element sets specifies the PTL information assigned to the respective layer, the respective operation point having a plurality of layers.
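
    The mapping the abstract describes, from the layers of an operation point to PTL syntax element sets, can be pictured with the Python sketch below; plain dictionaries stand in for the actual MPEG-2 transport stream descriptor fields, and all field names and values are illustrative.

        descriptor = {
            # profile/tier/level syntax element sets carried once in the descriptor
            "ptl_sets": [
                {"profile": "Main", "tier": "Main", "level": "4.1"},
                {"profile": "Multiview Main", "tier": "Main", "level": "5.0"},
            ],
            # each operation point lists its layers and, per layer, an index into ptl_sets
            "operation_points": [
                {"layers": [0],    "ptl_index": [0]},
                {"layers": [0, 1], "ptl_index": [0, 1]},
            ],
        }

        def ptl_for_operation_point(descriptor, op):
            # Map each layer of the operation point to the PTL information assigned to it.
            return {layer: descriptor["ptl_sets"][idx]
                    for layer, idx in zip(op["layers"], op["ptl_index"])}

        for op in descriptor["operation_points"]:
            print(ptl_for_operation_point(descriptor, op))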

    SIMPLIFIED SUB-PREDICTION UNIT (SUB-PU) MOTION PARAMETER INHERITENCE (MPI)
    9.
    Invention Application (Under Examination, Published)

    Publication Number: WO2015131387A1

    Publication Date: 2015-09-11

    Application Number: PCT/CN2014/073039

    Filing Date: 2014-03-07

    CPC classification number: H04N19/597 H04N19/52 H04N19/70 H04N2213/005

    Abstract: This disclosure describes techniques for simplifying depth inter mode coding in a three-dimensional (3D) video coding process, such as 3D-HEVC. The techniques include generating a motion parameter candidate list, e.g., merging candidate list, for a current depth prediction unit (PU). In some examples, the described techniques include determining that a sub-PU motion parameter inheritance (MPI) motion parameter candidate is unavailable for inclusion in the motion parameter candidate list for the current depth PU if motion parameters of a co-located texture block to a representative block of the current depth PU are unavailable. In some examples, the described techniques include deriving a sub-PU MPI candidate for inclusion in the motion parameter candidate list for the current depth PU only if a partition mode of the current depth PU is 2Nx2N.
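
    The availability rules can be summarised with the Python sketch below; the data structures are illustrative placeholders for real 3D-HEVC coder state, and only the sub-PU MPI candidate is shown.

        def build_depth_merge_candidates(depth_pu, colocated_texture_motion):
            # depth_pu: dict with "partition_mode" and "representative_block";
            # colocated_texture_motion: maps a texture block id to its motion
            # parameters, or None when they are unavailable.
            candidates = []

            # The sub-PU MPI candidate is derived only for 2Nx2N depth PUs, and only if
            # the co-located texture block of the representative block has motion
            # parameters; otherwise it is treated as unavailable.
            if depth_pu["partition_mode"] == "2Nx2N":
                motion = colocated_texture_motion.get(depth_pu["representative_block"])
                if motion is not None:
                    candidates.append(("sub_pu_mpi", motion))

            # ... spatial, temporal and other merge candidates would follow here
            return candidates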

    DISPARITY VECTOR AND/OR ADVANCED RESIDUAL PREDICTION FOR VIDEO CODING
    10.
    Invention Application (Under Examination, Published)

    Publication Number: WO2015103502A1

    Publication Date: 2015-07-09

    Application Number: PCT/US2015/010073

    Filing Date: 2015-01-03

    Inventor: CHEN, Ying

    CPC classification number: H04N19/597 H04N19/517 H04N19/70

    Abstract: A device for processing three-dimensional (3D) video data may determine, based on direct dependent layers signaled in a video parameter set, that the current texture layer of the video data is dependent on a depth layer of the video data; and process the current texture layer using the depth layer.
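
    The dependency check can be sketched as follows in Python; the video parameter set is modelled as a simple mapping from a layer id to its directly dependent (reference) layer ids, which is an illustrative simplification of the real signalling.

        def find_depth_dependency(vps_direct_dependent_layers, current_texture_layer, depth_layers):
            # Return the depth layer the current texture layer depends on, or None.
            for ref_layer in vps_direct_dependent_layers.get(current_texture_layer, []):
                if ref_layer in depth_layers:
                    return ref_layer
            return None

        # Example: texture layer 2 directly depends on layers 0 (texture) and 1 (depth).
        vps = {2: [0, 1]}
        print(find_depth_dependency(vps, current_texture_layer=2, depth_layers={1, 3}))   # -> 1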
