Abstract:
Acoustic imaging systems can include an acoustic sensor array, an electromagnetic imaging tool, a display, and an audio device. A processor can receive data from the acoustic sensor array and the electromagnetic imaging tool to generate a display image combining acoustic image data and electromagnetic image data. Systems can include an audio device that receives an audio output from the processor and outputs audio feedback signals to a user. The audio feedback signals can represent acoustic signals from an acoustic scene. Systems can provide a user with a display image that includes acoustic image data, and the user can select an acoustic signal for which a corresponding audio output is provided to the audio device. Audio outputs and display images can change dynamically in response to a change in pointing of the acoustic sensor array, such as by changing a stereo audio output.
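As a rough illustration of the dynamic stereo behavior described above, the sketch below (Python, with hypothetical function and parameter names) re-pans a selected acoustic signal between the left and right channels based on its bearing relative to where the sensor array is pointed. This is a minimal sketch under those assumptions, not the system's actual audio pipeline.

```python
import math


def stereo_gains(signal_azimuth_deg: float, array_heading_deg: float) -> tuple[float, float]:
    """Constant-power pan of a selected acoustic signal into (left, right) gains.

    The pan position follows the bearing of the signal relative to where the
    acoustic sensor array is currently pointed, so the audio feedback shifts
    as the user re-aims the array. Illustrative only; angles in degrees.
    """
    # Bearing of the signal relative to the array's current pointing, wrapped to [-180, 180).
    relative = (signal_azimuth_deg - array_heading_deg + 180.0) % 360.0 - 180.0
    # Map the relative bearing onto a pan value in [-1, 1] (clamped at +/-90 degrees).
    pan = max(-1.0, min(1.0, relative / 90.0))
    # Constant-power panning law.
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)


# Example: a signal 30 degrees to the right of the array's pointing direction.
left, right = stereo_gains(signal_azimuth_deg=30.0, array_heading_deg=0.0)
print(f"left={left:.2f}, right={right:.2f}")  # more energy in the right channel
```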
Abstract:
Systems and methods can be used for analyzing image data to determine an amount of vibration and/or misalignment in an object under analysis. In some instances, as operating equipment heats up during operation, temperature changes in various portions of the equipment lead to changes in the dimensions of those portions, which in turn causes misalignment. Multiple sets of data representative of the operating equipment in multiple operating conditions can be used to determine an amount of misalignment due to thermal offsets. Hot and cold temperatures of the equipment can be used to calculate thermal growth of various portions of the equipment, which can in turn be used to determine an amount of misalignment due to thermal offsets. Additionally or alternatively, image data representing the equipment can be used to observe changes in alignment between states.
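A minimal sketch of the thermal-growth calculation mentioned above, assuming linear thermal expansion and illustrative support dimensions and temperatures (none of these values come from the abstract):

```python
# Thermal growth of a machine support from "cold" to "hot" operating temperatures,
# using the linear expansion relation dL = alpha * L * dT. The difference in growth
# between the two ends of the machine gives an estimate of the vertical offset
# (misalignment) introduced by heating. All values below are illustrative assumptions.

ALPHA_STEEL = 12e-6  # coefficient of linear thermal expansion, 1/degC (approximate)


def thermal_growth(length_m: float, t_cold_c: float, t_hot_c: float,
                   alpha_per_c: float = ALPHA_STEEL) -> float:
    """Change in length (m) of a support as it warms from t_cold_c to t_hot_c."""
    return alpha_per_c * length_m * (t_hot_c - t_cold_c)


# Hypothetical motor feet: 0.40 m tall supports measured at 25 C cold and
# 65 C (drive end) / 45 C (non-drive end) when the equipment is running.
growth_drive_end = thermal_growth(0.40, 25.0, 65.0)
growth_free_end = thermal_growth(0.40, 25.0, 45.0)
vertical_offset = growth_drive_end - growth_free_end

print(f"drive-end growth:     {growth_drive_end * 1e6:.0f} um")
print(f"non-drive-end growth: {growth_free_end * 1e6:.0f} um")
print(f"thermal offset:       {vertical_offset * 1e6:.0f} um")
```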
Abstract:
An acoustic analysis system includes an acoustic sensor array that receives acoustic signals from a target scene and outputs acoustic data based on the received acoustic signals. A processor receives a plurality of acoustic data sets from the acoustic sensor array, representative of the target scene at different points in time. The processor determines one or more locations within the target scene represented by the plurality of acoustic data sets, each being a location of an acoustic signal emitted from the target scene. For each acoustic signal, the processor classifies the acoustic signal as an intermittent acoustic signal or a continuous acoustic signal, generates accumulated-time acoustic image data based on the plurality of acoustic data sets, and generates an accumulated-time display image for presentation on a display. Within the accumulated-time display image, acoustic signals classified as intermittent acoustic signals are distinguished from acoustic signals classified as continuous acoustic signals.
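One way the intermittent/continuous classification could work is to consider how often a detected signal appears across the accumulated acoustic data sets. The sketch below assumes a simple presence-fraction threshold and hypothetical data structures; it is illustrative only, not the classification method prescribed by the abstract.

```python
from dataclasses import dataclass


@dataclass
class AcousticSignal:
    location: tuple[int, int]   # (row, col) of the signal in the acoustic image grid
    present: list[bool]         # detection result per acoustic data set (per frame)


def classify(signal: AcousticSignal, continuous_threshold: float = 0.9) -> str:
    """Label a detected signal as 'continuous' or 'intermittent'.

    A signal present in at least `continuous_threshold` of the accumulated
    frames is treated as continuous; otherwise it is intermittent. The
    threshold is an assumption for illustration, not a prescribed value.
    """
    fraction_present = sum(signal.present) / len(signal.present)
    return "continuous" if fraction_present >= continuous_threshold else "intermittent"


# Example over ten acoustic data sets captured at different points in time.
leak = AcousticSignal(location=(12, 48), present=[True] * 10)
arc = AcousticSignal(location=(30, 15), present=[True, False, False, True, False,
                                                 False, True, False, False, False])
for name, sig in (("leak", leak), ("arc", arc)):
    print(name, classify(sig))   # leak -> continuous, arc -> intermittent
```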
Abstract:
Systems and methods directed toward acoustic analysis can include an acoustic sensor array comprising a plurality of acoustic sensor elements, an electromagnetic imaging tool, and a processor in communication with the acoustic sensor array and the electromagnetic imaging tool. The processor can be configured to analyze acoustic data to extract one or more acoustic parameters representative of acoustic signals at one or more locations in an acoustic scene and generate a display image that includes electromagnetic image data and acoustic image data. The display image can further include information indicative of the one or more acoustic parameters at one or more locations in the acoustic scene, such as by including acoustic image data in the display image at locations in the scene at which the one or more acoustic parameters satisfy a predetermined condition.
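A minimal sketch of the conditional blending described above, assuming the acoustic parameter is a per-pixel sound level in dB registered to a grayscale electromagnetic image; the threshold value, colors, and function name are assumptions for illustration.

```python
import numpy as np


def overlay_where_loud(em_image: np.ndarray, acoustic_db: np.ndarray,
                       threshold_db: float = 40.0, alpha: float = 0.6) -> np.ndarray:
    """Blend a red acoustic layer into a grayscale electromagnetic image only at
    pixels where the acoustic level satisfies a predetermined condition (>= threshold).

    em_image:    HxW grayscale electromagnetic image, values 0..255
    acoustic_db: HxW acoustic intensity map in dB, registered to em_image
    Returns an HxWx3 RGB display image.
    """
    display = np.stack([em_image] * 3, axis=-1).astype(np.float32)
    mask = acoustic_db >= threshold_db
    red = np.zeros_like(display)
    red[..., 0] = 255.0
    display[mask] = (1.0 - alpha) * display[mask] + alpha * red[mask]
    return display.astype(np.uint8)


# Example with synthetic data: a quiet scene containing one loud region.
em = np.full((240, 320), 128, dtype=np.uint8)
db = np.full((240, 320), 20.0)
db[100:140, 150:200] = 55.0                       # simulated loud source
image = overlay_where_loud(em, db)
print(image.shape, image[120, 175], image[0, 0])  # blended pixel vs. untouched pixel
```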
Abstract:
A smartphone user captures video and still images of a motor and makes a sound recording of his or her observations. The images and recordings are tagged with a date/time stamp, a serial number, bar code, matrix code, RFID code, or geolocation, and are sent wirelessly to an infrared camera. The infrared camera has a collocation application that reads the tag and creates a folder that holds the infrared image and the auxiliary information from the smartphone related to an asset. The collocation application also adds icons to the infrared image. By operating the user interface of the infrared camera, the user may select an icon to view the images and listen to the sound recording.
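A minimal sketch of how a collocation step like the one described might group captures by a shared asset tag; the data fields, file paths, and function names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Capture:
    asset_tag: str   # serial number, bar code / matrix code value, RFID code, etc.
    timestamp: str   # date/time stamp applied when the capture was made
    kind: str        # "ir_image", "photo", "video", or "audio_note"
    path: str        # file received from the infrared camera or the smartphone


def collocate(captures: list[Capture]) -> dict[str, list[Capture]]:
    """Group all captures sharing an asset tag into one per-asset 'folder', so an
    infrared image can be presented alongside the smartphone photos, video, and
    voice notes for the same asset. Names here are illustrative."""
    folders: dict[str, list[Capture]] = defaultdict(list)
    for capture in captures:
        folders[capture.asset_tag].append(capture)
    for items in folders.values():
        items.sort(key=lambda c: c.timestamp)
    return dict(folders)


# Example: one motor ("SN-1042") with an IR image plus smartphone auxiliary files.
captures = [
    Capture("SN-1042", "2024-03-01T10:02", "ir_image", "ir/motor_1042.is2"),
    Capture("SN-1042", "2024-03-01T10:05", "audio_note", "phone/observations.m4a"),
    Capture("SN-1042", "2024-03-01T10:06", "photo", "phone/motor_1042.jpg"),
]
for tag, items in collocate(captures).items():
    print(tag, [c.kind for c in items])
```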
Abstract:
Systems and methods detect abnormal conditions in electrical circuits by combining thermal imaging with non-contact measurements of current and voltage. Such systems may be implemented in a single test device, in wired or wireless combinations of multiple test devices and/or accessories, or in combination with one or more additional devices, such as a mobile phone, tablet, personal computer (PC), or cloud-based server. A thermal imaging tool that includes an infrared sensor may first discover and image one or more thermal anomalies in an object, such as an electrical circuit. One or more non-contact current or voltage sensors may be used to measure current and/or voltage, which allows the power loss at the measured location to be determined. The power loss may be used to estimate the abnormal resistive power losses in the circuit, as well as the costs associated therewith.
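A minimal sketch of the power-loss and cost estimation described above, assuming the non-contact measurements yield the voltage drop across the anomalous connection and the current through it; the operating hours and tariff are illustrative assumptions, not values from the abstract.

```python
def resistive_power_loss_w(voltage_drop_v: float, current_a: float) -> float:
    """Power dissipated at a connection as the product of the voltage drop across
    it and the current through it (P = V * I)."""
    return voltage_drop_v * current_a


def annual_energy_cost(power_loss_w: float, hours_per_year: float = 8760.0,
                       usd_per_kwh: float = 0.12) -> float:
    """Rough cost of the wasted energy, assuming continuous operation and a flat
    tariff. Both assumptions are illustrative, not part of the described system."""
    kwh = power_loss_w / 1000.0 * hours_per_year
    return kwh * usd_per_kwh


# Example: a hot lug with a 0.8 V drop carrying 40 A of non-contact-measured current.
loss = resistive_power_loss_w(0.8, 40.0)   # 32 W of abnormal heating
print(f"power loss: {loss:.1f} W")
print(f"estimated cost: ${annual_energy_cost(loss):.2f} per year")
```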
Abstract:
Systems include a test and measurement tool configured to acquire measurement data representative of at least one parameter of a device under test, an imaging tool configured to acquire image data representative of a target scene, and a display device. The display device can include a display and can be in communication with the test and measurement tool and the imaging tool. The display device can receive measurement data from the test and measurement tool and image data from the imaging tool, and can present, on the display, a presentation representative of at least one of the measurement data and the image data.
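A minimal sketch, with hypothetical types and field names, of a display device combining measurement data from a test and measurement tool with image data from an imaging tool into a single presentation:

```python
from dataclasses import dataclass, field


@dataclass
class Measurement:
    parameter: str    # e.g. "voltage", "current", "vibration"
    value: float
    unit: str


@dataclass
class DisplayFrame:
    image: list[list[int]]                 # image data received from the imaging tool
    readings: list[Measurement] = field(default_factory=list)

    def caption(self) -> str:
        """Text overlay summarizing the measurement data shown with the image."""
        return " | ".join(f"{m.parameter}: {m.value} {m.unit}" for m in self.readings)


# Example: the display device pairs a thermal image with two live readings.
frame = DisplayFrame(
    image=[[0] * 4 for _ in range(3)],     # placeholder 3x4 image
    readings=[Measurement("voltage", 229.8, "V"), Measurement("current", 12.4, "A")],
)
print(frame.caption())    # "voltage: 229.8 V | current: 12.4 A"
```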