Abstract:
An acoustic analysis system includes an acoustic sensor array that receives acoustic signals from a target scene and outputs acoustic data based on the acoustic signals. A processor receives a first set of acoustic data representing a first portion of the target scene and having a first field of view (FOV), generates a first acoustic image based on the first set of acoustic data, receives a second set of acoustic data representing a second portion of the target scene and having a second FOV, wherein the second FOV is different from the first FOV, generates a second acoustic image based on the second set of acoustic data, registers the first acoustic image and the second acoustic image to form an aligned first acoustic image and second acoustic image, and generates a panorama comprising the aligned first acoustic image and second acoustic image for presentation on a display.
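As a rough illustration of the registration and stitching steps described above, the following sketch (in Python, assuming translation-only motion between the two FOVs, equally sized acoustic images, and illustrative function names not taken from the source) estimates the offset between the two acoustic images by phase correlation and composites them onto a shared panorama canvas:

    import numpy as np

    def register_offset(img_a, img_b):
        # Phase correlation: the peak of the inverse-transformed cross-power
        # spectrum gives the translation that best aligns img_b with img_a.
        f_a = np.fft.fft2(img_a)
        f_b = np.fft.fft2(img_b)
        cross = f_a * np.conj(f_b)
        cross /= np.abs(cross) + 1e-12
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint correspond to negative (wrapped) shifts.
        if dy > img_a.shape[0] // 2:
            dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2:
            dx -= img_a.shape[1]
        return int(dy), int(dx)

    def stitch(img_a, img_b, dy, dx):
        # Place both images on one canvas; where they overlap, keep the
        # stronger acoustic level so sound sources are not averaged away.
        h, w = img_a.shape[0] + abs(dy), img_a.shape[1] + abs(dx)
        canvas = np.full((h, w), np.nan)
        ay, ax = max(0, -dy), max(0, -dx)
        by, bx = max(0, dy), max(0, dx)
        canvas[ay:ay + img_a.shape[0], ax:ax + img_a.shape[1]] = img_a
        region = canvas[by:by + img_b.shape[0], bx:bx + img_b.shape[1]]
        canvas[by:by + img_b.shape[0], bx:bx + img_b.shape[1]] = np.fmax(region, img_b)
        return canvas

A real system would likely use more robust registration (for example, feature-based matching or position sensing) and blend the overlap rather than taking a maximum.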
Abstract:
Some systems include an acoustic sensor array configured to receive acoustic signals, an electromagnetic imaging tool configured to receive electromagnetic radiation, a user interface, a display, and a processor. The processor can receive electromagnetic data from the electromagnetic imaging tool and acoustic data from the acoustic sensor array. The processor can generate acoustic image data of a scene based on the received acoustic data, generate a display image comprising combined acoustic image data and electromagnetic image data, and present the display image on the display. The processor can receive an annotation input from the user interface and update the display image based on the received annotation input. The processor can be configured to determine one or more acoustic parameters associated with the received acoustic signals and determine a criticality associated with the acoustic signals. A user can annotate the display image with determined criticality information or other determined information.
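The criticality determination could be as simple as comparing an acoustic parameter against thresholds. A minimal Python sketch, assuming a single level-based parameter and hypothetical threshold values (none of these names or numbers come from the source):

    from dataclasses import dataclass

    # Hypothetical thresholds; real criteria would be application-specific.
    CRITICALITY_THRESHOLDS_DB = [(90.0, "critical"), (75.0, "warning"), (0.0, "normal")]

    @dataclass
    class AcousticPoint:
        x: int
        y: int
        level_db: float           # sound level attributed to this location
        peak_frequency_hz: float  # another parameter a system might use

    @dataclass
    class Annotation:
        x: int
        y: int
        label: str

    def determine_criticality(point):
        for threshold, label in CRITICALITY_THRESHOLDS_DB:
            if point.level_db >= threshold:
                return label
        return "normal"

    def annotate(annotations, point, note=""):
        # Attach the determined criticality (plus any user note) at the
        # acoustic source location; the display layer draws these labels.
        label = determine_criticality(point)
        annotations.append(Annotation(point.x, point.y, f"{label}: {note}" if note else label))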
Abstract:
A handheld acoustic imaging tool and method of imaging acoustic signals include receiving acoustic signals from a scene, identifying a subset of the acoustic signals based on a predetermined condition of an acoustic parameter of the acoustic signals, and generating a display image using electromagnetic image data representative of electromagnetic radiation from the scene. The display image visually indicates a location in the scene corresponding to the subset of the acoustic signals. Generating the display image may further include using acoustic image data representative of the subset of the acoustic signals, wherein the electromagnetic image data and the acoustic image data are shown together in the display image. In some cases, the acoustic image data in the display image may be palettized according to the acoustic parameter, for example according to an amount, an amount of change, or a rate of change of the acoustic parameter.
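One way to read the subset-and-palettize pipeline, sketched in Python with NumPy (the threshold condition and blue-to-red palette are assumptions for illustration; the abstract does not prescribe them):

    import numpy as np

    def identify_subset(levels_db, min_db=60.0):
        # Predetermined condition: keep only locations meeting a level threshold.
        return levels_db >= min_db

    def palettize(values, mask):
        # Map the masked parameter values onto a simple blue-to-red ramp.
        rgb = np.zeros(values.shape + (3,))
        if not mask.any():
            return rgb
        lo, hi = values[mask].min(), values[mask].max()
        t = np.zeros(values.shape)
        t[mask] = (values[mask] - lo) / max(hi - lo, 1e-9)
        rgb[..., 0] = t * mask           # red rises with the parameter
        rgb[..., 2] = (1.0 - t) * mask   # blue marks the low end of the range
        return rgb

    def overlay(em_rgb, acoustic_rgb, mask, alpha=0.5):
        # Blend palettized acoustic data over the electromagnetic image,
        # only where the predetermined condition was satisfied.
        out = em_rgb.astype(float).copy()
        out[mask] = (1 - alpha) * out[mask] + alpha * 255.0 * acoustic_rgb[mask]
        return out.astype(np.uint8)

Palettizing by an amount of change or rate of change would use the same machinery with a difference or derivative of the parameter passed as the values array.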
Abstract:
Systems and methods directed toward acoustic analysis can include a plurality of acoustic sensor arrays, each including a plurality of acoustic sensor elements, and a processor in communication with the plurality of acoustic sensor arrays. The processor can be configured to select one or more of the plurality of acoustic sensor arrays based on one or more input parameters, and generate acoustic image data representative of an acoustic scene based on received acoustic data from the selected one or more acoustic sensor arrays. Such input parameters can include distance information and/or frequency information. Different acoustic sensor arrays can share acoustic sensor elements in common or can be entirely separate from one another. Acoustic image data can be combined with electromagnetic image data from an electromagnetic imaging tool to generate a display image.
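A minimal selection sketch (the array names, bands, and working distances below are invented for illustration; the abstract only states that distance and/or frequency information can drive the choice):

    # Each entry describes one sensor array; arrays may share elements in
    # hardware, but selection here only needs their usable envelopes.
    ARRAYS = {
        "low_freq":  {"band_hz": (20, 2_000),     "min_distance_m": 1.0},
        "high_freq": {"band_hz": (2_000, 50_000), "min_distance_m": 0.1},
    }

    def select_arrays(frequency_hz, distance_m):
        # Choose every array whose band covers the frequency of interest and
        # whose minimum working distance is satisfied; several may qualify.
        selected = []
        for name, spec in ARRAYS.items():
            lo, hi = spec["band_hz"]
            if lo <= frequency_hz <= hi and distance_m >= spec["min_distance_m"]:
                selected.append(name)
        return selected

    # e.g. select_arrays(10_000, 0.5) -> ["high_freq"]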
Abstract:
Systems and methods can be used for analyzing image data to determine an amount of vibration and/or misalignment in an object under analysis. Image distortion present in the image data due to vibration and/or misalignment of the object during operation can be detected automatically or manually and used to determine the amount of vibration and/or misalignment present. The determined amount of vibration and/or misalignment can be used to determine alignment calibration parameters for input into an alignment tool to facilitate alignment of the object. The various steps in determining the image distortion and/or the alignment calibration parameters can be performed by a single component or spread across multiple components in a system.
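As a crude proxy for the distortion measurement, one could track how far a bright feature wanders across a sequence of frames; the sketch below (Python/NumPy, assuming a calibrated millimetres-per-pixel scale and that intensity-centroid motion reflects the vibration) reports a peak-to-peak excursion per axis:

    import numpy as np

    def centroid(frame):
        # Intensity-weighted centroid, used as a crude feature position.
        ys, xs = np.indices(frame.shape)
        total = frame.sum()
        return (ys * frame).sum() / total, (xs * frame).sum() / total

    def vibration_amplitude(frames, mm_per_pixel):
        # Peak-to-peak centroid excursion across frames, per axis, in mm.
        points = np.array([centroid(f) for f in frames])
        p2p = points.max(axis=0) - points.min(axis=0)
        return p2p[0] * mm_per_pixel, p2p[1] * mm_per_pixel

The resulting amplitudes could then be translated into the alignment calibration parameters expected by the alignment tool.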
Abstract:
Imaging tools can be fixedly or removably attached to various surfaces of test and measurement tools. An imaging tool comprising a sensor array capable of receiving electromagnetic radiation from a target scene can be configured to engage a surface of the test and measurement tool such that the imaging tool is supported by the test and measurement tool. When the imaging tool is engaged with the test and measurement tool, the sensor array can be movable relative to the imaging tool so that a target scene viewed by the sensor array of the imaging tool is adjustable without requiring movement of the test and measurement tool.
Abstract:
A smartphone user captures video and still images of a motor and makes a sound recording of their observations. The images and recordings are tagged with a date/time stamp, a serial number, bar code, matrix code, RFID code, or geolocation and sent wirelessly to an infrared camera. The infrared camera has a collocation application that reads the tag and creates a folder that holds the infrared image and the auxiliary information from the smartphone related to an asset. The collocation program also adds icons to the infrared image. By operating the user interface of the infrared camera, the user may select an icon to view the images and listen to the sound recording.
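The collocation step amounts to grouping media records by a shared asset tag. A minimal sketch (Python; the record fields are illustrative, not from the source):

    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MediaRecord:
        asset_tag: str       # serial number, bar code, matrix code, RFID, or geolocation key
        timestamp: datetime  # date/time stamp applied at capture
        kind: str            # e.g. "ir_image", "photo", "video", "audio_note"
        path: str

    def collocate(records):
        # One folder per asset tag, holding all related media in time order.
        folders = defaultdict(list)
        for rec in records:
            folders[rec.asset_tag].append(rec)
        for files in folders.values():
            files.sort(key=lambda r: r.timestamp)
        return folders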