Abstract:
Systems and methods for streaming video and/or audio from wireless cameras are provided. Methods may include, at a network camera, receiving from a first network node an instruction to begin video capture, the instruction including information identifying a second network node. A first video stream may be sent from the network camera to the first network node, and a second video stream may be sent simultaneously to the second network node. The first and second video streams may be based on common video capture data, and may be sent at different bitrates, different resolutions, different frame rates, and/or different formats. A parameter of the second video stream may be adjusted in response to performance data received from the second network node or another network node.
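As a rough illustration of the dual-stream behavior described above, the following Python sketch models a camera that derives two differently configured streams from one captured frame and throttles the second stream on performance feedback; the class, the stream parameters, and the node names are assumptions made for the example, not details from the abstract.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StreamConfig:
    bitrate_kbps: int
    resolution: tuple
    framerate: int

@dataclass
class NetworkCamera:
    """Illustrative model of a camera feeding two nodes from one capture."""
    primary_stream: StreamConfig = field(default_factory=lambda: StreamConfig(4000, (1920, 1080), 30))
    secondary_stream: StreamConfig = field(default_factory=lambda: StreamConfig(1500, (1280, 720), 30))
    second_node: Optional[str] = None

    def on_start_instruction(self, first_node: str, second_node: str) -> None:
        # The start instruction from the first node carries the identity of the second node.
        self.second_node = second_node
        print(f"capture started for {first_node}; relay target is {second_node}")

    def on_frame(self, raw_frame: bytes) -> None:
        # Both outgoing streams are derived from the same captured frame,
        # but encoded under independent configurations.
        self._encode_and_send(raw_frame, self.primary_stream, "first node")
        if self.second_node:
            self._encode_and_send(raw_frame, self.secondary_stream, self.second_node)

    def on_performance_report(self, observed_bitrate_kbps: int) -> None:
        # Feedback from the second node (or another node) throttles the second stream.
        if observed_bitrate_kbps < self.secondary_stream.bitrate_kbps:
            self.secondary_stream.bitrate_kbps = observed_bitrate_kbps

    def _encode_and_send(self, frame: bytes, cfg: StreamConfig, target: str) -> None:
        # Placeholder for a real encoder and transport; here we only log the intent.
        print(f"sending {cfg.resolution} @ {cfg.bitrate_kbps} kbps to {target}")

camera = NetworkCamera()
camera.on_start_instruction("conference-host", "recording-server")
camera.on_frame(b"\x00" * 1024)
camera.on_performance_report(800)
camera.on_frame(b"\x00" * 1024)
```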
Abstract:
Systems and methods for connecting wireless cameras are provided. A computing device may include a network interface, and a processor configured to establish a virtual USB bus available to an operating system of the computing device, establish a virtual USB camera device, and report to the operating system that the virtual USB camera device is connected to the virtual USB bus. The virtual USB camera may be configured to establish a network connection to a network camera using the network interface, receive video data from the network camera via the network interface, and send the video data via the virtual USB bus. Alternatively, the virtual USB camera may send the video data to the operating system as USB packets, without establishing a virtual USB bus.
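The data flow in this abstract can be pictured with a toy model: a "bus" object the operating system would see, and a bridge object that repackages network video into USB-sized packets. Everything here (class names, the packet size, the camera address) is hypothetical and stands in for real driver-level machinery.

```python
from collections import deque

class VirtualUsbBus:
    """Toy stand-in for a virtual bus exposed to the operating system."""
    def __init__(self):
        self.devices = []
        self.packets = deque()

    def report_device(self, device):
        # The OS would be told that `device` is attached to this bus.
        self.devices.append(device)

    def submit(self, usb_packet: bytes):
        # Packets placed here would be consumed by the OS's USB video stack.
        self.packets.append(usb_packet)

class VirtualUsbCamera:
    """Bridges a network camera's video data onto the virtual bus."""
    def __init__(self, bus: VirtualUsbBus, camera_address: str):
        self.bus = bus
        self.camera_address = camera_address
        bus.report_device(self)

    def on_network_video(self, payload: bytes, chunk: int = 512):
        # Repackage the network payload into USB-sized packets and hand them to the bus.
        for offset in range(0, len(payload), chunk):
            self.bus.submit(payload[offset:offset + chunk])

bus = VirtualUsbBus()
bridge = VirtualUsbCamera(bus, "192.0.2.10")   # hypothetical camera address
bridge.on_network_video(b"\x01" * 2048)
print(f"{len(bus.packets)} USB packets queued for the OS")
```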
Abstract:
Embodiments of the present disclosure generally relate to livestreaming methods and systems, and more particularly to whiteboard presentation systems that can be used in a livestreaming or video conferencing environment. In some embodiments, the whiteboard presentation system is configured to perform one or more processing operations, such as capturing images of a whiteboard, performing image processing routines on the captured images, and transmitting the processed images as a video feed to one or more remote users, such as people or locations attending a video conference. The image processing routines can include one or more operations such as image denoising, contrast enhancement, color reconstruction, segmentation of a presenter, and image reconstruction.
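A minimal sketch of such a processing chain, using NumPy only; each stage is a deliberately trivial stand-in (a box blur for denoising, value stretching for contrast enhancement, a brightness mask for presenter segmentation) rather than the routines an actual whiteboard system would use.

```python
import numpy as np

def denoise(frame: np.ndarray) -> np.ndarray:
    # Trivial stand-in for image denoising: a 3x3 box blur.
    h, w, _ = frame.shape
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(frame, dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return (out / 9.0).astype(frame.dtype)

def enhance_contrast(frame: np.ndarray) -> np.ndarray:
    # Stretch pixel values to the full 0-255 range.
    lo, hi = frame.min(), frame.max()
    scale = 255.0 / max(int(hi) - int(lo), 1)
    return ((frame.astype(np.float32) - lo) * scale).clip(0, 255).astype(np.uint8)

def segment_presenter(frame: np.ndarray) -> np.ndarray:
    # Placeholder mask: dark regions are treated as the presenter and suppressed
    # so that the whiteboard content stays legible.
    mask = frame.mean(axis=2, keepdims=True) < 60
    return np.where(mask, 255, frame).astype(np.uint8)

def process_whiteboard_frame(frame: np.ndarray) -> np.ndarray:
    # Ordering loosely follows the abstract: denoise, enhance contrast, segment, reconstruct.
    return segment_presenter(enhance_contrast(denoise(frame)))

frame = np.random.randint(0, 256, size=(48, 64, 3), dtype=np.uint8)
processed = process_whiteboard_frame(frame)
print(processed.shape, processed.dtype)
```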
Abstract:
The present disclosure generally provides for advanced single camera video conferencing systems and methods related thereto. The advanced single camera video conferencing system features a hybrid optical/digital camera, herein a camera device, having a controller that is configured to execute one or more of the methods set forth herein. In one embodiment, a method includes optically framing a first portion of a video conferencing environment to provide an actual field-of-view, digitally framing a second portion of the video conferencing environment to provide an apparent field-of-view that is encompassed within the actual field-of-view, generating a video stream of the apparent field-of-view, surveying the actual field-of-view to generate survey data, and detecting changes in the survey data over time. The method may be performed by a single camera device having a single image sensor.
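The relationship between the actual and apparent fields-of-view can be sketched as a digital crop plus a whole-frame change check; the crop geometry, threshold, and simulated motion below are illustrative assumptions.

```python
import numpy as np

def apparent_view(sensor_frame: np.ndarray, crop: tuple) -> np.ndarray:
    # The apparent field-of-view is a digital crop inside the optically framed sensor image.
    top, left, height, width = crop
    return sensor_frame[top:top + height, left:left + width]

def survey_change(previous: np.ndarray, current: np.ndarray, threshold: float = 5.0) -> bool:
    # Survey the full actual field-of-view (not just the crop) and flag significant change,
    # e.g. a participant entering the room outside the apparent view.
    diff = np.abs(current.astype(np.int16) - previous.astype(np.int16)).mean()
    return diff > threshold

full_fov_prev = np.zeros((480, 640), dtype=np.uint8)
full_fov_now = full_fov_prev.copy()
full_fov_now[300:400, 500:600] = 200          # simulated motion near the edge of the room
stream_frame = apparent_view(full_fov_now, (100, 160, 240, 320))
print(stream_frame.shape, survey_change(full_fov_prev, full_fov_now))
```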
Abstract:
Systems and methods for streaming video and/or audio from wireless cameras are provided. A camera may include an optical sensor, a wireless communication device, and a processor configured to establish a connection with a remote website and stream a first video stream to the remote website. The camera may be further configured to establish a connection with a local device and stream a second video stream to the local device. The first and second video streams may have different resolutions and/or different formats. The camera may be configured to establish a control channel with the local device.
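One way to picture the control channel is as a simple command handler running on the camera; the JSON message shape, command names, stream targets, and settings below are assumptions made for illustration only.

```python
import json

class StreamingCamera:
    """Sketch of a camera with one stream to a remote site and one to a local device."""

    def __init__(self):
        # Hypothetical stream targets and settings; the two streams differ in resolution and format.
        self.remote_stream = {"target": "rtmp://example.invalid/live", "resolution": "1080p", "format": "H.264"}
        self.local_stream = {"target": "local-device", "resolution": "720p", "format": "MJPEG"}
        self.streaming = False

    def handle_control_message(self, raw: str) -> None:
        # The local device drives the camera over a control channel; the message
        # shape here (JSON with a "command" field) is an assumption for illustration.
        message = json.loads(raw)
        command = message.get("command")
        if command == "start":
            self.streaming = True
        elif command == "stop":
            self.streaming = False
        elif command == "set_local_resolution":
            self.local_stream["resolution"] = message["value"]

camera = StreamingCamera()
camera.handle_control_message('{"command": "start"}')
camera.handle_control_message('{"command": "set_local_resolution", "value": "480p"}')
print(camera.streaming, camera.local_stream["resolution"])
```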
Abstract:
Embodiments herein generally relate to video conferencing systems and, more particularly, to multi-camera systems used to detect participants in a conference environment and auto-frame a video stream of a priority group from the detected participants. In one embodiment, a computer-implemented method includes determining a plurality of subjects within a first view of a conference environment and altering a second view of the conference environment after determining that at least a portion of one or more of the plurality of subjects cannot fit in the second view when the second view is adjusted to include the other ones of the plurality of subjects. Here, each of the plurality of subjects includes a region-of-interest corresponding to a portion of an individual conference participant. Altering the second view includes determining a priority subject group and adjusting the second view to include the priority subject group. In some embodiments, the priority subject group includes two or more subjects of the plurality of subjects.
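A priority-group selection of this kind might be sketched as follows: if every detected subject fits within the adjustable view's assumed width limit, frame them all; otherwise greedily add subjects in priority order while the group still fits. The Subject fields, the width limit, and the priority values are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    """Region-of-interest for one conference participant (coordinates in degrees of pan)."""
    name: str
    left: float
    right: float
    priority: float    # e.g. derived from recent speech activity; values here are made up

MAX_VIEW_WIDTH = 70.0  # assumed horizontal limit of the adjustable (second) view

def frame_view(subjects: list) -> tuple:
    # First try to fit every detected subject in the second view.
    left = min(s.left for s in subjects)
    right = max(s.right for s in subjects)
    if right - left <= MAX_VIEW_WIDTH:
        return left, right, [s.name for s in subjects]

    # Otherwise, build a priority group and frame only that group.
    group = []
    for s in sorted(subjects, key=lambda s: s.priority, reverse=True):
        candidate = group + [s]
        span_left = min(c.left for c in candidate)
        span_right = max(c.right for c in candidate)
        if span_right - span_left <= MAX_VIEW_WIDTH:
            group = candidate
    left = min(s.left for s in group)
    right = max(s.right for s in group)
    return left, right, [s.name for s in group]

subjects = [
    Subject("host", 0.0, 15.0, priority=0.9),
    Subject("panelist", 20.0, 35.0, priority=0.7),
    Subject("latecomer", 95.0, 110.0, priority=0.2),
]
print(frame_view(subjects))  # the latecomer falls outside the framed priority group
```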