Abstract:
1. Coat rack. 1.1: Perspective; 1.2: Perspective; 1.3: Front; 1.4: Back; 1.5: Left; 1.6: Right; 1.7: Enlarged top view; 1.8: Enlarged bottom view. The broken lines in the reproductions depict portions of the coat rack that form no part of the claimed design.
Abstract:
FIG. 1 is a front, right and top perspective view of a drone, showing my new design; FIG. 2 is a rear, left and bottom perspective view thereof; FIG. 3 is a front view thereof; FIG. 4 is a rear view thereof; FIG. 5 is a left side view thereof; FIG. 6 is a right side view thereof; FIG. 7 is a top plan view thereof; and, FIG. 8 is a bottom plan view thereof. The broken lines shown in the drawings illustrate portions of the drone that form no part of the claimed design.
Abstract:
FIG. 1 is a front, right and bottom perspective view of an aircraft, showing my new design; FIG. 2 is a rear, left and top perspective view thereof; FIG. 3 is a front elevation view thereof; FIG. 4 is a rear elevation view thereof; FIG. 5 is a left side elevation view thereof; FIG. 6 is a right side elevation view thereof; FIG. 7 is a top plan view thereof; and, FIG. 8 is a bottom plan view thereof. The broken lines shown in the drawings illustrate portions of the aircraft that form no part of the claimed design.
Abstract:
A system and method for identifying nerves innervating the walls of arteries, such as the renal artery, are disclosed. The present invention identifies areas on vessel walls that are innervated with nerves; provides an indication of whether energy is delivered accurately to a targeted nerve; and provides immediate post-procedural assessment of the effect of the energy delivered to the nerve. The method includes at least the steps of evaluating a change in physiological parameters after energy is delivered to an arterial wall, and determining, based on the evaluated results, the type of nerve to which the energy was directed (none, sympathetic, or parasympathetic). The system includes at least a device for delivering energy to the wall of a blood vessel; sensors for detecting physiological signals from a subject; and indicators to display results obtained using the method. Also provided are catheters for performing the mapping and ablating functions.
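As a rough illustration of the determination step, here is a minimal Python sketch. The parameter names (delta_bp, delta_hr), the threshold values, and the sign conventions mapping parameter changes to nerve types are assumptions for illustration, not details taken from the disclosure.

    # Hypothetical sketch of the nerve-type determination step. Thresholds,
    # parameter names, and sign conventions are illustrative assumptions.
    def classify_nerve(delta_bp: float, delta_hr: float,
                       bp_thresh: float = 5.0, hr_thresh: float = 5.0) -> str:
        """Determine nerve type from changes in physiological parameters
        (here, blood pressure and heart rate) after energy delivery."""
        if delta_bp > bp_thresh and delta_hr > hr_thresh:
            return "sympathetic"      # marked rises suggest a sympathetic nerve
        if delta_bp < -bp_thresh and delta_hr < -hr_thresh:
            return "parasympathetic"  # marked drops suggest a parasympathetic nerve
        return "none"                 # no significant change at this site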
Abstract:
Systems, methods, and computer-readable media for maintaining packet data protocol (PDP) context while performing data offload are disclosed. According to one aspect, a method for maintaining PDP context while performing data offload includes detecting a data offload condition wherein a user equipment (UE) for which a first network node is maintaining a PDP context is sending or receiving data using a data path that does not include the first network node. While the data offload condition exists, packets are sent from a source other than the UE to the first network node so as to cause the first network node to maintain the PDP context for the UE. In one embodiment, a node interposed between the UE and the first network node periodically sends dummy packets or heartbeat packets, which may appear to come from the UE, to the first network node on behalf of the UE.
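A minimal sketch of the keep-alive behavior in that embodiment follows; the UDP transport, the payload, and the 30-second period are assumptions for illustration, not details from the disclosure.

    # Sketch of an interposed node sending heartbeat packets on behalf of the
    # UE while offload is active. Transport, payload, and period are assumed.
    import socket
    import time

    def maintain_pdp_context(node_addr: tuple[str, int], ue_payload: bytes,
                             offload_active, period_s: float = 30.0) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            while offload_active():                 # offload condition persists
                sock.sendto(ue_payload, node_addr)  # dummy packet toward the node
                time.sleep(period_s)                # wait before the next heartbeat
        finally:
            sock.close()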
Abstract:
Face verification is performed using video data. The two main modules are face image capturing and face verification. In face image capturing, good frontal face images are captured from input video data, with a frontal face quality score used to discriminate between frontal and profile faces. In face verification, the local binary pattern (LBP) histogram is selected as the facial feature descriptor for its high discriminative power and computational efficiency. The Chi-Square (χ²) distance between the LBP histograms of two face images is then calculated as a face dissimilarity measure. The decision whether or not two images belong to the same person is then made by comparing the corresponding distance with a pre-defined threshold. Because more than one face image can be captured per person from video data, several feature-based and decision-based aggregators are applied to combine the pair-wise distances and further improve verification performance.
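The distance computation and threshold decision can be sketched as follows, assuming the LBP histograms have already been extracted from the two face images; the threshold value is an arbitrary placeholder.

    # Minimal sketch of the verification decision. Assumes LBP histograms are
    # already available; the threshold value is an illustrative placeholder.
    import numpy as np

    def chi_square_distance(h1: np.ndarray, h2: np.ndarray) -> float:
        """Chi-Square distance between two LBP histograms."""
        denom = h1 + h2
        mask = denom > 0             # skip empty bins to avoid division by zero
        return float(np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask]))

    def same_person(h1: np.ndarray, h2: np.ndarray, threshold: float = 0.5) -> bool:
        """Decide whether two face images belong to the same person."""
        return chi_square_distance(h1, h2) < threshold

With several captured frames per person, a simple decision-based aggregator could, for example, take the minimum or the mean of the pair-wise distances before applying the threshold.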
Abstract:
Automatic face recognition. In a first example embodiment, a method for automatic face recognition includes several acts. First, a face pattern and two eye patterns are detected in a probe digital image. Then, the face pattern is normalized. Next, the normalized face pattern is transformed into a normalized face feature vector of Gabor feature representations. Then, a difference image vector between the probe feature vector and a normalized gallery image feature vector is calculated. Next, the difference image vector is projected onto a lower-dimensional intra-subject subspace extracted from a pre-collected training face database. Then, a square function is applied to each component of the projection. Next, a weighted summation of the squared projection is calculated. The previous four acts are then repeated for each normalized gallery image feature vector. Finally, the face pattern in the probe digital image is classified as belonging to the gallery image with the highest calculated weighted summation, provided that summation is above a predefined threshold.
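The per-gallery scoring might be sketched as follows, where the subspace basis P and the weight vector w stand in for quantities learned from the pre-collected training database; both names are assumptions of this sketch.

    # Illustrative sketch of the matching acts: difference vector, projection,
    # squaring, weighted summation, and selection of the best gallery match.
    # P (subspace basis) and w (weights) are stand-ins for learned quantities.
    import numpy as np

    def match_score(probe: np.ndarray, gallery_vec: np.ndarray,
                    P: np.ndarray, w: np.ndarray) -> float:
        diff = probe - gallery_vec    # difference image vector
        proj = P @ diff               # projection to the intra-subject subspace
        return float(w @ proj ** 2)   # weighted summation of squared components

    def classify(probe, gallery: dict, P, w, threshold: float):
        scores = {name: match_score(probe, vec, P, w) for name, vec in gallery.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > threshold else None  # None: below threshold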
Abstract:
An input image (e.g., a digital RGB color image) is subjected to an eye classifier that is targeted at discriminating a complete eye pattern from any non-eye pattern. The red-eye candidate list, with associated bounding boxes generated by the red-eye classifier, is received. The bounding rectangles are subjected to object segmentation, and a connected component labeling procedure is then applied to obtain one or more red regions. The largest red region is then chosen for feature extraction, and a number of features are extracted from this region. These features are then used to determine whether the particular candidate red-eye object is in fact a mouth.
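The segmentation and region-selection steps could be sketched as follows, assuming a binary redness mask has already been computed within a candidate's bounding rectangle; the extracted features shown are illustrative, not the patent's feature set.

    # Sketch of the connected-component and region-selection steps using
    # SciPy labeling. Input is an assumed binary redness mask for one
    # bounding rectangle; the extracted features are illustrative only.
    import numpy as np
    from scipy import ndimage

    def largest_red_region(red_mask: np.ndarray) -> np.ndarray:
        labels, n = ndimage.label(red_mask)            # connected component labeling
        if n == 0:
            return np.zeros_like(red_mask, dtype=bool) # no red region found
        sizes = ndimage.sum(red_mask, labels, range(1, n + 1))
        return labels == (int(np.argmax(sizes)) + 1)   # keep the largest region

    def region_features(region: np.ndarray) -> dict:
        ys, xs = np.nonzero(region)                    # pixel coordinates
        height, width = np.ptp(ys) + 1, np.ptp(xs) + 1
        return {"area": int(region.sum()),             # region size in pixels
                "aspect_ratio": width / height}        # wide regions hint at a mouth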
Abstract:
Automatic red-eye object classification in digital images using a boosting-based framework. In a first example embodiment, a method for classifying a candidate red-eye object in a digital photographic image includes several acts. First, a candidate red-eye object in a digital photographic image is selected. Next, a search scale set and a search region where an eye object may reside are determined for the candidate red-eye object. Then, the number of subwindows that satisfy an AdaBoost classifier is determined; this number is denoted the vote. Next, the maximum size of the subwindows that satisfy the AdaBoost classifier is determined. Then, a normalized threshold is calculated by multiplying a predetermined constant threshold by the calculated maximum size. Next, the vote is compared with the normalized threshold. Finally, the candidate red-eye object is transformed into a true red-eye object if the vote is greater than the normalized threshold.
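The voting decision can be sketched as below. The AdaBoost classifier is abstracted as a passes(x, y, size) predicate, and k stands for the predetermined constant threshold; both are assumptions for illustration.

    # Sketch of the voting decision. The classifier predicate and the constant
    # k are stand-ins; subwindows are (x, y, size) tuples for simplicity.
    from typing import Callable, Iterable

    def is_true_red_eye(subwindows: Iterable[tuple[int, int, int]],
                        passes: Callable[[int, int, int], bool],
                        k: float) -> bool:
        accepted = [(x, y, s) for x, y, s in subwindows if passes(x, y, s)]
        if not accepted:
            return False                           # no subwindow satisfied the classifier
        vote = len(accepted)                       # the "vote"
        max_size = max(s for _, _, s in accepted)  # maximum accepted subwindow size
        return vote > k * max_size                 # vote vs. normalized threshold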