-
Publication No.: WO2019067899A1
Publication Date: 2019-04-04
Application No.: PCT/US2018/053422
Filing Date: 2018-09-28
Applicant: APPLE INC.
Inventor: STOYLES, Justin D. , KUHN, Michael
IPC: G06F3/01 , G06F3/0484 , G06T19/00 , G02B27/01
Abstract: In some exemplary processes for controlling an external device using a computer-generated reality interface, information specifying a function of the external device is received from the external device. First image data of a physical environment that includes the external device is obtained with one or more image sensors. A representation of the physical environment according to the first image data is displayed on a display. While displaying the representation of the physical environment, second image data identifying a gesture occurring between the display and the external device in the physical environment is obtained with the one or more image sensors. A determination is made as to whether the identified gesture satisfies one or more predetermined criteria associated with the function. In accordance with determining that the identified gesture satisfies one or more predetermined criteria associated with the function, the external device is caused to perform the function.
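The gesture-gated control flow described in the abstract can be sketched as follows. This is a minimal illustration, not Apple's implementation; the `Gesture` type, the criteria fields, and the thresholds are all invented for the example.

```python
# Hypothetical sketch: a gesture identified between the display and the
# external device is checked against predetermined criteria, and only
# then is the external device caused to perform its function.
from dataclasses import dataclass

@dataclass
class Gesture:
    kind: str          # e.g. "tap", "swipe" (illustrative labels)
    confidence: float  # detector confidence in [0, 1]

def satisfies_criteria(gesture: Gesture, criteria: dict) -> bool:
    """Return True if the identified gesture meets every predetermined criterion."""
    return (gesture.kind == criteria["kind"]
            and gesture.confidence >= criteria["min_confidence"])

def maybe_perform(gesture: Gesture, criteria: dict, perform_fn) -> bool:
    """Cause the external device to perform the function only when criteria hold."""
    if satisfies_criteria(gesture, criteria):
        perform_fn()
        return True
    return False
```

The point of the split is that criteria arrive with the function information from the external device, so the same gesture pipeline can gate different functions on different devices.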
-
Publication No.: WO2019067895A1
Publication Date: 2019-04-04
Application No.: PCT/US2018/053415
Filing Date: 2018-09-28
Applicant: APPLE INC.
Inventor: STOYLES, Justin D. , KUHN, Michael
IPC: G06F3/01 , G06F3/0484 , G06T19/00 , G02B27/01
Abstract: In an exemplary process for accessing a function of an external device through a computer-generated reality interface, one or more external devices are detected. Image data of a physical environment captured by an image sensor is obtained. The process determines whether the image data includes a representation of a first external device of the one or more detected external devices. In accordance with determining that the image data includes a representation of the first external device, the process causes a display to concurrently display a representation of the physical environment according to the image data and an affordance corresponding to a function of the first external device, wherein detecting user activation of the displayed affordance causes the first external device to perform an action corresponding to the function.
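The conditional display logic can be sketched as below. All names here are assumptions for illustration; the real recognition step would come from an image pipeline, not a set of labels.

```python
# Illustrative sketch: decide whether the captured image data contains a
# detected external device and, if so, build the concurrent display of
# the environment plus an affordance bound to one of the device's functions.
def image_contains_device(detected_labels: set, device_id: str) -> bool:
    """Stand-in for the recognition step: the pipeline reports device labels."""
    return device_id in detected_labels

def build_display(detected_labels: set, device_id: str, function_name: str) -> list:
    """Return the display elements: the environment representation, plus an
    affordance for the device's function only when the device appears in the image."""
    elements = ["environment"]
    if image_contains_device(detected_labels, device_id):
        elements.append({"affordance": function_name, "target": device_id})
    return elements
```

Activating the returned affordance would then dispatch the named action to the target device, mirroring the abstract's final clause.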
-
Publication No.: EP3404659A1
Publication Date: 2018-11-21
Application No.: EP18151014.0
Filing Date: 2018-01-10
Applicant: Apple Inc.
Inventor: STOYLES, Justin D. , MOHA, Alexandre R. , SCAPEL, Nicholas V. , BARLIER, Guillaume P. , GUZMAN, Aurelio , SOMMER, Bruno M. , DAMASKY, Nina , WEISE, Thibaut , GOOSSENS, Thomas , PHAM, Hoan , AMBERG, Brian
IPC: G11B27/036 , G11B27/10 , G06T13/20 , G06T13/40 , G06T17/20
CPC classification number: G06T13/40 , G06K9/00228 , G06K9/00302 , G06T13/205 , G06T17/205 , G11B27/036 , G11B27/10 , H04N7/147 , H04N2007/145
Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. For example, an emoji of a dog can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's computing device.
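The "physical properties" idea, ears that lag behind head motion, can be sketched as a damped spring driven by the head position. This is a toy model; the spring constants, time step, and one-dimensional coordinate are all assumptions made for illustration.

```python
# Toy sketch: the dog emoji's ear follows the head like a damped spring,
# so quick head movement produces a lagging, bouncing ear animation.
def step_ear(ear_pos: float, ear_vel: float, head_pos: float,
             stiffness: float = 10.0, damping: float = 2.0, dt: float = 0.016):
    """Advance the ear one frame toward the head position (semi-implicit Euler)."""
    accel = stiffness * (head_pos - ear_pos) - damping * ear_vel
    ear_vel += accel * dt
    ear_pos += ear_vel * dt
    return ear_pos, ear_vel

def animate(head_positions) -> list:
    """Drive the ear with a sequence of per-frame head positions; return the
    ear's trajectory over the same frames."""
    ear, vel, trajectory = 0.0, 0.0, []
    for head in head_positions:
        ear, vel = step_ear(ear, vel, head)
        trajectory.append(ear)
    return trajectory
```

Because the ear state carries velocity between frames, a sudden head shake overshoots and settles, which is exactly the secondary motion the abstract describes.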
-
Publication No.: EP4191587A1
Publication Date: 2023-06-07
Application No.: EP22214668.0
Filing Date: 2018-01-10
Applicant: Apple Inc.
Inventor: STOYLES, Justin D. , MOHA, Alexandre R. , SCAPEL, Nicholas V. , BARLIER, Guillaume P. , GUZMAN, Aurelio , SOMMER, Bruno M. , DAMASKY, Nina , WEISE, Thibaut , GOOSSENS, Thomas , PHAM, Hoan , AMBERG, Brian
IPC: G11B27/036 , G11B27/10 , G06T13/20 , G06T13/40 , G06T17/20
Abstract: A method, comprising: receiving, by a device, a plurality of image frames comprising a facial expression of a person; receiving, by the device, depth information comprising the facial expression of the person; determining, by the device, the facial expression of the person based on the plurality of image frames and the depth information; animating, by the device, an avatar based at least in part on the facial expression of the person; causing a first version of the animated avatar to be rendered, by the device, based on the facial expression; transmitting recipient information from the device to a message system for transmission to a recipient computing device associated with the recipient information; receiving, at the device, a response from the message system prior to delivery of the rendered first version of the animated avatar to the recipient computing device; causing, following the response from the message system, a second version of the animated avatar to be rendered at the device based on the facial expression, the second version corresponding to one or more capabilities of the recipient computing device associated with the recipient information; and transmitting the rendered second version of the animated avatar from the device to the recipient computing device without transmitting the first version of the animated avatar from the device to the recipient computing device.
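The capability-matched re-rendering step, where the message system's response tells the sender what the recipient device can handle, can be sketched as a simple negotiation. The capability names, codec identifiers, and quality tiers below are illustrative assumptions, not anything specified in the patent.

```python
# Hedged sketch: given the recipient capabilities returned by the message
# system, pick the second render version so it never exceeds what the
# recipient device supports; only this version is transmitted.
SENDER_DEFAULT = {"codec": "hevc", "resolution": 1080}  # first-version settings

def choose_render(recipient_caps: dict) -> dict:
    """Clamp the render to the recipient: fall back to a supported codec and
    cap the resolution at the recipient's maximum."""
    codec = "hevc" if "hevc" in recipient_caps["codecs"] else "h264"
    resolution = min(SENDER_DEFAULT["resolution"], recipient_caps["max_resolution"])
    return {"codec": codec, "resolution": resolution}
```

Rendering on the sender after this negotiation means the recipient never receives a version it cannot decode, which is the point of withholding the first version entirely.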