-
Publication No.: US20250093990A1
Publication Date: 2025-03-20
Application No.: US18969089
Filing Date: 2024-12-04
Applicant: Apple Inc.
Inventor: Lejing Wang , Benjamin R. Blachnitzky , Lilli I. Jonsson , Nicolai Georg
IPC: G06F3/041 , G06F3/01 , G06F3/0488
Abstract: Detecting a touch includes receiving image data of a touching object of a user selecting selectable objects of a target surface, determining a rate of movement of the touching object, in response to determining that the rate of movement satisfies a predetermined threshold, modifying a touch detection parameter for detecting a touch event between the touching object and the target surface, and detecting one or more additional touch events using the modified touch detection parameter.
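A minimal sketch of the idea in this abstract, adapting a touch detection parameter to the speed of the touching object and then reusing the adjusted parameter for later touches, is shown below. The names (TouchDetector, hoverDistanceThreshold) and all numeric values are illustrative assumptions, not details from the filing.

struct TouchDetector {
    // Maximum fingertip-to-surface distance (meters) treated as a touch (assumed value).
    var hoverDistanceThreshold: Double = 0.005
    // Speed (meters/second) above which detection is made more permissive (assumed value).
    let fastMovementThreshold: Double = 0.5

    // Relax or tighten the detection parameter based on how fast the finger is moving.
    mutating func updateParameter(forSpeed speed: Double) {
        if speed >= fastMovementThreshold {
            // Fast swipes tend to skim the surface, so widen the touch band.
            hoverDistanceThreshold = 0.010
        } else {
            hoverDistanceThreshold = 0.005
        }
    }

    // A touch event is reported when the estimated distance falls inside the band.
    func isTouch(estimatedDistanceToSurface distance: Double) -> Bool {
        return distance <= hoverDistanceThreshold
    }
}

var detector = TouchDetector()
detector.updateParameter(forSpeed: 0.8)                     // finger moving quickly
print(detector.isTouch(estimatedDistanceToSurface: 0.008))  // true with the relaxed band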
-
Publication No.: US20250004545A1
Publication Date: 2025-01-02
Application No.: US18614219
Filing Date: 2024-03-22
Applicant: Apple Inc.
Inventor: Paul X. Wang , Ashwin Kumar Asoka Kumar Shenoi , Shuxin Yu , Benjamin R. Blachnitzky , Jie Gu , Fletcher R. Rothkopf
IPC: G06F3/01 , G06F3/04815 , G06F3/0487
Abstract: A head-mounted device may have a head-mounted support structure, a gaze tracker in the head-mounted support structure, and one or more displays in the head-mounted support structure. For example, two displays may display images to two eye boxes. The display may display a virtual keyboard and may display a text input in response to a gaze location that is determined by the gaze tracker. The gaze tracker may additionally determine a gaze swipe input, or a camera in the support structure may determine a hand swipe input, and the swipe input may be used with the gaze location to determine the text input. In particular, the swipe input may create a swipe input curve that is fit to the text input to determine the text input. A user's hand may be used as a secondary input to indicate the start or end of a text input.
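One way to picture the curve-fitting step, matching a swipe input curve against candidate words laid out on a virtual keyboard, is the rough sketch below. The key layout, the scoring rule, and the word list are assumed for illustration and are not taken from the filing.

struct Point { var x: Double; var y: Double }

// Approximate key centers on a virtual keyboard, in arbitrary layout units (assumed).
let keyCenters: [Character: Point] = [
    "c": Point(x: 2.5, y: 2), "a": Point(x: 0.5, y: 1),
    "t": Point(x: 4.0, y: 0), "r": Point(x: 3.0, y: 0),
]

func distance(_ a: Point, _ b: Point) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

// Score a candidate word by how far its key sequence lies from proportionally
// matched samples of the swipe curve (lower is better).
func score(word: String, swipe: [Point]) -> Double {
    let keys = word.compactMap { keyCenters[$0] }
    guard !keys.isEmpty, !swipe.isEmpty else { return .infinity }
    var total = 0.0
    for (i, key) in keys.enumerated() {
        let t = keys.count == 1 ? 0.0 : Double(i) / Double(keys.count - 1)
        let sample = swipe[Int(t * Double(swipe.count - 1))]
        total += distance(sample, key)
    }
    return total / Double(keys.count)
}

let swipe = [Point(x: 2.4, y: 2.1), Point(x: 0.6, y: 1.0), Point(x: 3.9, y: 0.1)]
let best = ["cat", "car"].min { score(word: $0, swipe: swipe) < score(word: $1, swipe: swipe) }
print(best ?? "no match")   // "cat" for this swipe path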
-
Publication No.: US12008208B2
Publication Date: 2024-06-11
Application No.: US18121673
Filing Date: 2023-03-15
Applicant: Apple Inc.
Inventor: Nicolai Georg , Aaron M. Burns , Adam G. Poulos , Arun Rakesh Yoganandan , Benjamin Hylak , Benjamin R. Blachnitzky
IPC: G06F17/00 , G06F3/01 , G06F3/04815 , G06F3/0486
CPC classification number: G06F3/04815 , G06F3/014 , G06F3/0486 , G06F2203/0331 , G06F2203/04802
Abstract: A method includes displaying a plurality of computer-generated objects, including a first computer-generated object at a first position within an environment and a second computer-generated object at a second position within the environment. The first computer-generated object corresponds to a first user interface element that includes a first set of controls for modifying a content item. The method includes, while displaying the plurality of computer-generated objects, obtaining extremity tracking data. The method includes moving the first computer-generated object from the first position to a third position within the environment based on the extremity tracking data. The method includes, in accordance with a determination that the third position satisfies a proximity threshold with respect to the second position, merging the first computer-generated object with the second computer-generated object in order to generate a third computer-generated object for modifying the content item. The method includes displaying the third computer-generated object.
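The merge behavior described here, combining two control objects when one is dragged within a proximity threshold of the other, might look roughly like the following sketch. ControlPanel, mergeDistance, and the merge rule are illustrative assumptions.

struct Vec3 { var x: Double; var y: Double; var z: Double }

func distance(_ a: Vec3, _ b: Vec3) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}

struct ControlPanel {
    var position: Vec3
    var controls: [String]   // names of controls for modifying a content item
}

let mergeDistance = 0.15     // proximity threshold in meters (assumed value)

// If the dragged panel ends up close enough to the target panel, return a single
// merged panel holding both control sets; otherwise return nil (no merge).
func mergeIfClose(dragged: ControlPanel, target: ControlPanel) -> ControlPanel? {
    guard distance(dragged.position, target.position) <= mergeDistance else { return nil }
    return ControlPanel(position: target.position,
                        controls: target.controls + dragged.controls)
}

let brushPanel = ControlPanel(position: Vec3(x: 0.10, y: 1.0, z: -0.5),
                              controls: ["brush size", "opacity"])
let colorPanel = ControlPanel(position: Vec3(x: 0.02, y: 1.0, z: -0.5),
                              controls: ["hue", "saturation"])
print(mergeIfClose(dragged: brushPanel, target: colorPanel)?.controls ?? [])
// ["hue", "saturation", "brush size", "opacity"]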
-
Publication No.: US11960657B2
Publication Date: 2024-04-16
Application No.: US18124120
Filing Date: 2023-03-21
Applicant: Apple Inc.
Inventor: Aaron M. Burns , Adam G. Poulos , Arun Rakesh Yoganandan , Benjamin Hylak , Benjamin R. Blachnitzky , Jordan A. Cazamias , Nicolai Georg
IPC: G06F3/0346 , G06F3/01 , G06F3/038 , G06F3/0486 , G06T7/20
CPC classification number: G06F3/017 , G06F3/014 , G06F3/0346 , G06F3/038 , G06F3/0486 , G06T7/20 , G06F2203/0331 , G06T2207/30196
Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
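A compact sketch of the two-stage check in this abstract, first a proximity threshold on the dragged object and then a focus criterion on the spatial region associated with the input, is given below. The types (DropRegion, Vec2) and all thresholds are assumed for illustration.

struct Vec2 { var x: Double; var y: Double }

func distance(_ a: Vec2, _ b: Vec2) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

struct DropRegion {
    var center: Vec2
    var proximityThreshold: Double   // how close the dragged object must be
    var focusRadius: Double          // how close the gazed spatial region must be
}

// Stage 1: has the dragged object been moved close enough to the drop region?
func satisfiesProximity(objectPosition: Vec2, region: DropRegion) -> Bool {
    distance(objectPosition, region.center) <= region.proximityThreshold
}

// Stage 2: does the spatial region associated with the input (e.g. a gaze point)
// satisfy the focus criterion? If so, move the object into the drop region.
func finalPosition(objectPosition: Vec2, gazePoint: Vec2, region: DropRegion) -> Vec2 {
    guard satisfiesProximity(objectPosition: objectPosition, region: region),
          distance(gazePoint, region.center) <= region.focusRadius else {
        return objectPosition   // leave the object where it is
    }
    return region.center        // a position within the drop region
}

let region = DropRegion(center: Vec2(x: 0, y: 0), proximityThreshold: 0.3, focusRadius: 0.2)
let snapped = finalPosition(objectPosition: Vec2(x: 0.25, y: 0.0),
                            gazePoint: Vec2(x: 0.05, y: 0.1),
                            region: region)
print(snapped)   // Vec2(x: 0.0, y: 0.0): moved into the drop region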
-
Publication No.: US20240054746A1
Publication Date: 2024-02-15
Application No.: US18345661
Filing Date: 2023-06-30
Applicant: Apple Inc.
Inventor: Aaron M. Burns , Adam G. Poulos , Alexis H. Palangie , Benjamin R. Blachnitzky , Charilaos Papadopoulos , David M. Schattel , Ezgi Demirayak , Jia Wang , Reza Abbasian , Ryan S. Carlin
IPC: G06T19/20 , G06T13/40 , H04N21/431
CPC classification number: G06T19/20 , G06T13/40 , H04N21/4316 , G06T2219/2016
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation. A participant may have their role changed between audience member and presenter during the presentation.
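A minimal sketch of two of the behaviors described here, switching between viewing modes with different immersion levels and allowing pose manipulation of an interactive virtual object only while a replay is paused, follows. The mode names, the replay model, and the pause rule are assumptions for illustration.

enum ViewingMode { case windowed, partialImmersion, fullImmersion }

struct Pose { var position: (Double, Double, Double); var yaw: Double }

struct SessionReplay {
    var mode: ViewingMode = .windowed
    var isPaused = false
    var objectPoses: [String: Pose] = [:]

    // Switch between viewing modes with different immersion levels.
    mutating func setMode(_ newMode: ViewingMode) { mode = newMode }

    // Pose manipulation of an interactive virtual object is only accepted
    // while the replay is paused.
    mutating func setPose(_ pose: Pose, for objectID: String) -> Bool {
        guard isPaused else { return false }
        objectPoses[objectID] = pose
        return true
    }
}

var replay = SessionReplay(objectPoses: ["whiteboard": Pose(position: (0, 1, -2), yaw: 0)])
replay.setMode(.fullImmersion)
print(replay.setPose(Pose(position: (0.5, 1, -2), yaw: 90), for: "whiteboard"))  // false: not paused
replay.isPaused = true
print(replay.setPose(Pose(position: (0.5, 1, -2), yaw: 90), for: "whiteboard"))  // true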
-
Publication No.: US20230297172A1
Publication Date: 2023-09-21
Application No.: US18124120
Filing Date: 2023-03-21
Applicant: Apple Inc.
Inventor: Aaron M. Burns , Adam G. Poulos , Arun Rakesh Yoganandan , Benjamin Hylak , Benjamin R. Blachnitzky , Jordan A. Cazamias , Nicolai Georg
IPC: G06F3/01 , G06F3/0486 , G06F3/0346 , G06F3/038 , G06T7/20
CPC classification number: G06F3/017 , G06F3/014 , G06F3/0486 , G06F3/0346 , G06F3/038 , G06T7/20 , G06T2207/30196 , G06F2203/0331
Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
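This publication shares its application number and abstract with US11960657B2 above. As a complement to the earlier drop-region sketch, the block below illustrates one plausible (entirely assumed) form the focus criterion could take: a dwell-time requirement on the gaze-associated spatial region. The dwell rule, the FocusCriterion name, and all values are assumptions, not details from the filing.

import Foundation

struct FocusCriterion {
    var requiredDwell: TimeInterval = 0.5   // seconds the region must stay in focus (assumed)
    var focusStart: Date? = nil

    // Feed one sample per frame: whether the spatial region currently overlaps
    // the drop region. Returns true once the dwell requirement is met.
    mutating func update(regionOverlapsDropRegion: Bool, at time: Date = Date()) -> Bool {
        guard regionOverlapsDropRegion else {
            focusStart = nil
            return false
        }
        if focusStart == nil { focusStart = time }
        return time.timeIntervalSince(focusStart!) >= requiredDwell
    }
}

var criterion = FocusCriterion()
let t0 = Date()
print(criterion.update(regionOverlapsDropRegion: true, at: t0))                          // false
print(criterion.update(regionOverlapsDropRegion: true, at: t0.addingTimeInterval(0.6)))  // true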
-
Publication No.: US20230095282A1
Publication Date: 2023-03-30
Application No.: US17950770
Filing Date: 2022-09-22
Applicant: Apple Inc.
Inventor: Benjamin R. Blachnitzky , Aaron M. Burns , Anette L. Freiin von Kapri , Arun Rakesh Yoganandan , Benjamin H. Boesel , Evgenii Krivoruchko , Jonathan Ravasz , Shih-Sang Chiu
IPC: G06F3/04815 , G06F3/01 , G06T19/00
Abstract: In one implementation, a method is provided for displaying a first pairing affordance that is world-locked to a first peripheral device. The method may be performed by an electronic device including a non-transitory memory, one or more processors, a display, and one or more input devices. The method includes detecting the first peripheral device within a three-dimensional (3D) environment via a computer vision technique. The method includes receiving, via the one or more input devices, a first user input that is directed to the first peripheral device within the 3D environment. The method includes, in response to receiving the first user input, displaying, on the display, the first pairing affordance that is world-locked to the first peripheral device within the 3D environment.
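A rough sketch of the world-locked pairing affordance, an affordance anchored at the position of a peripheral detected in the 3D environment and shown in response to user input directed at that peripheral, follows. DetectedPeripheral, PairingAffordance, and the 5 cm offset are assumed names and values.

struct Vec3 { var x: Double; var y: Double; var z: Double }

struct DetectedPeripheral {
    var identifier: String
    var worldPosition: Vec3   // estimated via a computer vision technique
}

struct PairingAffordance {
    var targetIdentifier: String
    var worldPosition: Vec3   // world-locked: tied to the peripheral, not the display
    var isVisible: Bool
}

// Show the affordance slightly above the detected peripheral; a user input
// directed at the peripheral is what makes it appear.
func pairingAffordance(for peripheral: DetectedPeripheral,
                       userInputDirectedAtPeripheral: Bool) -> PairingAffordance? {
    guard userInputDirectedAtPeripheral else { return nil }
    let above = Vec3(x: peripheral.worldPosition.x,
                     y: peripheral.worldPosition.y + 0.05,
                     z: peripheral.worldPosition.z)
    return PairingAffordance(targetIdentifier: peripheral.identifier,
                             worldPosition: above,
                             isVisible: true)
}

let keyboard = DetectedPeripheral(identifier: "keyboard-01",
                                  worldPosition: Vec3(x: 0.0, y: 0.75, z: -0.4))
if let affordance = pairingAffordance(for: keyboard, userInputDirectedAtPeripheral: true) {
    print(affordance.worldPosition)   // anchored just above the detected keyboard
}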
-
Publication No.: US20200026352A1
Publication Date: 2020-01-23
Application No.: US16395806
Filing Date: 2019-04-26
Applicant: Apple Inc.
Inventor: Paul X. Wang , Nicolai Georg , Benjamin R. Blachnitzky , Alhad A. Palkar , Minhazul Islam , Alex J. Lehmann , Madeleine S. Cordier , Joon-Sup Han , Hongcheng Sun , Sang E. Lee , Kevin Z. Lo , Lilli Ing-Marie Jonsson , Luis Deliz Centeno , Yuhao Pan , Stephen E. Dey , Paul N. DuMontelle , Jonathan C. Atler , Tianjia Sun , Jian Li , Chang Zhang
IPC: G06F3/01 , G06F3/0488 , G06F3/0482 , G06F3/044 , G06T19/00
Abstract: A system may include finger devices. A touch sensor may be mounted in a finger device housing to gather input from an external object as the object moves along an exterior surface of the housing. The touch sensor may include capacitive sensor electrodes. Sensors such as force sensors, ultrasonic sensors, inertial measurement units, optical sensors, and other components may be used in gathering finger input from a user. Finger input from a user may be used to manipulate virtual objects in a mixed reality or virtual reality environment while a haptic output device in a finger device provides associated haptic output. A user may interact with real-world objects while computer-generated content is overlaid over some or all of the objects. Object rotations and other movements may be converted into input for a mixed reality or virtual reality system using force measurements or other sensor measurements made with the finger devices.
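One way to picture how finger-device sensor measurements could be converted into virtual-object input and haptic triggers is the sketch below. FingerDeviceSample, VirtualKnob, the rotation mapping, and the force threshold are assumptions for illustration, not the patent's actual design.

struct FingerDeviceSample {
    var touchPosition: Double?     // normalized position along the touch sensor, 0...1
    var contactForce: Double       // newtons reported by the force sensor
    var angularVelocityZ: Double   // radians/second from the inertial measurement unit
}

struct VirtualKnob {
    var angle: Double = 0          // radians
    var hapticPulseRequested = false

    // Convert one sensor sample into virtual-object manipulation and haptic output.
    // The touch-sensor position could drive additional gestures; it is unused here.
    mutating func apply(_ sample: FingerDeviceSample, deltaTime: Double) {
        // Rotating the finger rotates the virtual knob.
        angle += sample.angularVelocityZ * deltaTime
        // A firm press on a real-world surface triggers a haptic pulse (assumed threshold).
        hapticPulseRequested = sample.contactForce > 0.8
    }
}

var knob = VirtualKnob()
let sample = FingerDeviceSample(touchPosition: 0.4, contactForce: 1.1, angularVelocityZ: 0.5)
knob.apply(sample, deltaTime: 1.0 / 90.0)      // one 90 Hz frame
print(knob.angle, knob.hapticPulseRequested)   // ~0.0056 true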