    21. USER INTERACTION INTERPRETER
    Invention Publication

    Publication Number: US20230350538A1

    Publication Date: 2023-11-02

    Application Number: US18217711

    Application Date: 2023-07-03

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
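
    A note on the architecture this abstract (shared by every record in the family below) describes: apps declare per-object UI capability data, and a separate interpreter system maps low-level, modality-specific inputs (gaze, hand gestures, voice) to higher-level, modality-independent events. The Swift sketch below illustrates that split under stated assumptions; every type and name (UICapabilities, RawInput, InteractionInterpreter, and so on) is hypothetical and not drawn from the patent claims.

```swift
// Hypothetical sketch of the capability-gated event interpretation the
// abstract describes; none of these names come from the patent itself.

// Capabilities an app can declare for one of its virtual objects.
struct UICapabilities {
    var isMoveable = false
    var isActionable = false
    var isHoverable = false
}

// Low-level, modality-specific inputs the system might detect.
enum RawInput {
    case gazeDwell(objectID: String)               // eye tracking
    case pinch(objectID: String)                   // hand tracking
    case voiceCommand(String, objectID: String)    // speech
}

// Higher-level, modality-independent events delivered back to the app.
enum InteractionEvent {
    case hover(objectID: String)
    case activate(objectID: String)
    case beginMove(objectID: String)
}

struct InteractionInterpreter {
    // UI capability data keyed by object ID, as provided by the apps.
    var capabilities: [String: UICapabilities]

    // Map a raw input to an abstract event, honoring declared capabilities.
    func interpret(_ input: RawInput) -> InteractionEvent? {
        switch input {
        case .gazeDwell(let id) where capabilities[id]?.isHoverable == true:
            return .hover(objectID: id)
        case .pinch(let id) where capabilities[id]?.isActionable == true:
            return .activate(objectID: id)
        case .voiceCommand(let command, let id)
                where command == "move" && capabilities[id]?.isMoveable == true:
            return .beginMove(objectID: id)
        default:
            return nil // input does not match any capability the app declared
        }
    }
}

// Usage: a gaze dwell on a hover-able object becomes a hover event; the app
// never sees which input modality produced it.
let interpreter = InteractionInterpreter(capabilities: [
    "photo-1": UICapabilities(isMoveable: true, isActionable: true, isHoverable: true)
])
print(interpreter.interpret(.gazeDwell(objectID: "photo-1")) as Any)
```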

    23. User interaction interpreter
    Invention Grant

    Publication Number: US11733824B2

    Publication Date: 2023-08-22

    Application Number: US16440048

    Application Date: 2019-06-13

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.

    24. User Interaction Interpreter
    Invention Application

    Publication Number: US20190391726A1

    Publication Date: 2019-12-26

    Application Number: US16440048

    Application Date: 2019-06-13

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.

    26. USER INTERACTION INTERPRETER
    Invention Application

    Publication Number: US20250036252A1

    Publication Date: 2025-01-30

    Application Number: US18910335

    Application Date: 2024-10-09

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.

    Generating and displaying content based on respective positions of individuals

    Publication Number: US12166957B2

    Publication Date: 2024-12-10

    Application Number: US17990141

    Application Date: 2022-11-18

    Applicant: Apple Inc.

    Abstract: In some implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, a rendering system, and a display. The method includes determining a first rendering characteristic based on a first viewing angle of a first individual with respect to the display. The method includes determining a second rendering characteristic based on a second viewing angle of a second individual with respect to the display. The first rendering characteristic is different from the second rendering characteristic. The method includes generating, via the rendering system, first display content data according to the first rendering characteristic, and generating, via the rendering system, second display content data according to the second rendering characteristic. The first display content data is associated with the first viewing angle. The second display content data is associated with the second viewing angle.
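
    This abstract describes choosing a different rendering characteristic for each individual based on that individual's viewing angle with respect to the display, then generating separate display content data per viewer. The Swift sketch below reduces "rendering characteristic" to a simple detail scale under an assumed 30° threshold; all names and the threshold itself are hypothetical illustrations, not the claimed method.

```swift
// Hypothetical sketch of per-viewer rendering: each individual's viewing
// angle with respect to the display selects a rendering characteristic.

struct Viewer {
    let id: Int
    let viewingAngleDegrees: Double // angle measured from the display normal
}

// Assumed rendering characteristic: viewers near head-on get full detail,
// viewers at steep angles get a reduced level of detail.
func renderingScale(for viewer: Viewer) -> Double {
    abs(viewer.viewingAngleDegrees) < 30 ? 1.0 : 0.5
}

// Generate (placeholder) display content data separately for each viewer.
func generateContent(for viewers: [Viewer]) -> [Int: String] {
    var content: [Int: String] = [:]
    for viewer in viewers {
        let scale = renderingScale(for: viewer)
        content[viewer.id] =
            "frame at \(scale)x detail for angle \(viewer.viewingAngleDegrees)°"
    }
    return content
}

// Two individuals at different viewing angles receive different content data.
let viewers = [Viewer(id: 1, viewingAngleDegrees: 10),
               Viewer(id: 2, viewingAngleDegrees: 55)]
print(generateContent(for: viewers))
```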

    28. User interaction interpreter
    Invention Grant

    Publication Number: US12141414B2

    Publication Date: 2024-11-12

    Application Number: US18217711

    Application Date: 2023-07-03

    Applicant: Apple Inc.

    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
