Abstract:
Examples are disclosed that relate to selectively dimming or occluding light from a real-world background to enhance the display of virtual objects on a near-eye display. One example provides a near-eye display system including a see-through display, an image source, a background light sensor, a selective background occluder comprising a first liquid crystal panel and a second liquid crystal panel positioned between a pair of polarizers, and a computing device including instructions executable by a logic subsystem to determine a shape and a position of an occlusion area based upon a virtual object to be displayed, obtain a first and a second birefringence pattern for the first and the second liquid crystal panels, produce the occlusion area by applying the birefringence patterns to the liquid crystal panels, and display the virtual object in a location visually overlapping with the occlusion area.
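The control sequence the abstract recites (determine the occlusion area from the virtual object, obtain per-panel birefringence patterns, apply them, then display the object over the occluded region) could be sketched as below. All names, the circular occlusion shape, and the use of identical patterns on both panels are hypothetical simplifications, not details from the disclosure:

```python
import numpy as np

def occlusion_mask(panel_shape, center, radius):
    """Hypothetical birefringence pattern: a circular occlusion area.

    Cells inside the area are driven to block background light;
    cells outside remain transparent.
    """
    ys, xs = np.indices(panel_shape)
    dist = np.hypot(ys - center[0], xs - center[1])
    return (dist <= radius).astype(np.uint8)

def render_frame(virtual_object_pos, panel_shape=(120, 160), radius=12):
    # 1. Determine shape and position of the occlusion area from the virtual object.
    center = virtual_object_pos
    # 2. Obtain birefringence patterns for the two LC panels. In this sketch both
    #    panels get the same mask; a real system would derive distinct patterns
    #    whose combined retardance blocks light only in the occlusion area.
    pattern_1 = occlusion_mask(panel_shape, center, radius)
    pattern_2 = occlusion_mask(panel_shape, center, radius)
    # 3. The occlusion area is where both panels block background light.
    occluded = pattern_1 & pattern_2
    # 4. The virtual object is then displayed overlapping the occlusion area.
    return occluded, center

mask, pos = render_frame((60, 80))
print(mask[pos])    # background blocked behind the virtual object
print(mask[0, 0])   # periphery stays see-through
```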
Abstract:
A device for combining tomographic images with human vision uses a half-silvered mirror to merge the visual outer surface of an object (or a robotic mock effector) with a simultaneous reflection of a tomographic image from the interior of the object. The device may be used with various imaging modalities, including ultrasound, CT, and MRI. The image capture device and the display may or may not be fixed to the semi-transparent mirror. If not fixed, the system may provide a compensation device that adjusts the reflection of the displayed image on the half-silvered mirror to account for any change in the image capture device's orientation or location.
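One way to picture the compensation step is as an inverse rigid transform: when the image capture device moves, the displayed slice is re-mapped by the inverse of that motion so its mirror reflection stays registered with the object. The sketch below is a hypothetical 2-D illustration; the function names and the planar simplification are assumptions, not the patented mechanism:

```python
import numpy as np

def pose_matrix(theta, tx, ty):
    """2-D rigid transform (rotation theta, translation tx, ty), homogeneous form."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

def compensate(image_points, probe_delta):
    """Re-map displayed image points by the inverse of the probe's motion
    (`probe_delta`, a pose_matrix) so the reflection stays registered."""
    inv = np.linalg.inv(probe_delta)
    pts = np.hstack([image_points, np.ones((len(image_points), 1))])
    return (pts @ inv.T)[:, :2]

# Probe translated 5 mm along x with no rotation: displayed points shift back 5 mm.
moved = pose_matrix(0.0, 5.0, 0.0)
print(compensate(np.array([[10.0, 0.0]]), moved))  # [[5. 0.]]
```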
Abstract:
A projection-type display device is connectively coupled to a mobile device (such as a smartphone) where the light generated by a small projection device is directed at a relatively transparent holographic optical element (HOE) to provide a display to an operator of the mobile device or a viewer. The projector and HOE may be configured to produce and magnify a virtual image that is perceived as being displayed at a large distance from the operator who views the image through the HOE. The HOE may comprise a volume grating effective at only the narrow wavelengths of the projection device to maximize transparency while also maximizing the light reflected from the display projector to the eyes of the operator.
Abstract:
Optical user input technology comprises three-dimensional (3D) input sensors and 3D location emitters to enable high-precision input in a 3D space, and the 3D location emitter may be a stylus or other writing or pointing device. Certain implementations may comprise an orientation assembly for transmitting orientation of the 3D location emitter in addition to location within a 3D space, and some implementations may also use selectively identifiable signaling from the 3D location emitters to the 3D input sensors to distinguish one 3D location emitter from another, to transmit other data from a 3D location emitter to a 3D location sensor, or as a means of providing orientation information for the 3D location emitter with respect to the 3D location sensor. Also disclosed are position fixing, indoor navigation, and other complementary applications using 3D input sensors and/or 3D location emitters.
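The "selectively identifiable signaling" used to distinguish one emitter from another could work like an optical ID code: each emitter blinks a unique pattern that the sensor thresholds and matches. The sketch below is purely illustrative; the emitter names, codes, and threshold are hypothetical:

```python
# Hypothetical sketch: each 3D location emitter blinks a unique binary ID code;
# the 3D input sensor thresholds its intensity samples into bits and matches
# the observed pattern to a known code, distinguishing one emitter from another.
EMITTER_IDS = {
    "stylus_a": (1, 0, 1, 1),
    "stylus_b": (1, 1, 0, 0),
}

def identify(samples, threshold=0.5):
    """Threshold raw intensity samples into bits and look up the emitter."""
    bits = tuple(int(s > threshold) for s in samples)
    for name, code in EMITTER_IDS.items():
        if code == bits:
            return name
    return None  # unrecognized or corrupted code

print(identify([0.9, 0.1, 0.8, 0.7]))  # stylus_a
print(identify([0.9, 0.8, 0.2, 0.1]))  # stylus_b
```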
Abstract:
A head-mounted light-field display system (HMD) includes two light-field projectors (LFPs), one per eye, each comprising a solid-state LED emitter array (SLEA) operatively coupled to a microlens array (MLA). The SLEA and the MLA are positioned so that light emitted from an LED of the SLEA reaches the eye through at most one microlens of the MLA. The moveable SLEA is coupled to the MLA for close placement in front of an eye, without the need for any additional relay or coupling optics, and physically moves with respect to the MLA to mechanically multiplex the LED emitters and thereby achieve higher effective resolution.
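The resolution gain from mechanical multiplexing can be illustrated with simple arithmetic: if each LED is stepped through a grid of mechanical offsets, the effective pixel count is the physical emitter count times the number of offsets. The numbers below are hypothetical, not from the disclosure:

```python
def effective_pixels(emitter_rows, emitter_cols, positions_x, positions_y):
    """Each physical LED is time-multiplexed over a grid of mechanical offsets,
    so the effective pixel count is the emitter count times the number of
    distinct offsets (assuming the offsets tile the gaps between emitters)."""
    return (emitter_rows * positions_y) * (emitter_cols * positions_x)

# Hypothetical numbers: a 200x200 emitter array stepped through a 4x4 offset
# grid behaves like an 800x800 display.
print(effective_pixels(200, 200, 4, 4))  # 640000
```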
Abstract:
A microscopy method and apparatus includes placing a specimen to be observed adjacent to a reflective holographic optical element (RDOE). A beam of light that is at least partially coherent is focused on a region of the specimen. The beam forward-propagates through the specimen and is at least partially reflected backward through the specimen. The backward-reflected light interferes with the forward-propagating light to provide a three-dimensional interference pattern that is at least partially within the specimen. A specimen region illuminated by the interference pattern is imaged at an image detector. Computational reconstruction is used to generate a microscopic image in all three spatial dimensions (X, Y, Z) simultaneously, with resolution greater than that of conventional microscopy.
Abstract:
In embodiments of imaging structure color conversion, an imaging structure includes a silicon backplane with a driver pad array. An embedded light source is formed on the driver pad array in an emitter material layer, and the embedded light source emits light in a first color. A conductive material layer over the embedded light source forms a p-n junction between the emitter material layer and the conductive material layer. A color conversion layer can then convert a portion of the first color to at least a second color. Further, micro lens optics can be implemented to direct the light that is emitted through the color conversion layer.
Abstract:
In embodiments of active reflective surfaces, an imaging structure includes a circuit control layer that controls pixel activation to emit light. A reflective layer of the imaging structure reflects input light from an illumination source. An active color conversion material that is formed on the reflective layer converts the input light to the emitted light. The active color conversion material can be implemented as a phosphor material or quantum dot material that converts the input light to the emitted light, and in embodiments, the active color conversion material is laminated directly on the reflective layer.
Abstract:
A low-power, high-resolution, see-through (i.e., "transparent") augmented reality (AR) display that does not require projectors with relay optics separate from the display surface, instead featuring a small size, low power consumption, and/or high-quality images (high contrast ratio). The AR display comprises sparse integrated light-emitting diode (iLED) array configurations, transparent drive solutions, and polarizing optics or time-multiplexed lenses to combine virtual iLED projection images with a user's real-world view. The AR display may also feature full eye-tracking support in order to selectively drive only the portions of the display(s) whose projection light will enter the user's eye(s), based on the position of the user's eyes at any given moment, thereby conserving power.
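The eye-tracked power-saving idea can be sketched as gating display tiles by distance from the tracked gaze point, so only the small region whose light can enter the eye is driven. The shapes, radius, and circular eyebox model below are hypothetical:

```python
import numpy as np

def active_region(display_shape, gaze, eyebox_radius):
    """Hypothetical power-saving sketch: enable only the display tiles whose
    projected light can reach the eye, given the tracked gaze position."""
    ys, xs = np.indices(display_shape)
    return np.hypot(ys - gaze[0], xs - gaze[1]) <= eyebox_radius

region = active_region((100, 100), gaze=(50, 50), eyebox_radius=10)
# Only about pi * r^2 / area (roughly 3% here) of the display is driven.
print(f"{region.mean():.2%} of the display is driven")
```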