-
Publication No.: US10891845B2
Publication Date: 2021-01-12
Application No.: US16202108
Filing Date: 2018-11-28
Inventor: Chuan-Yu Chang , Fu-Jen Tsai
Abstract: A mouth and nose occluded detecting method includes a detecting step and a warning step. The detecting step includes a facial detecting step, an image extracting step and an occluded determining step. In the facial detecting step, an image is captured by an image capturing device, wherein a facial portion image is obtained from the image. In the image extracting step, a mouth portion is extracted from the facial portion image so as to obtain a mouth portion image. In the occluded determining step, the mouth portion image is entered into an occluding convolutional neural network so as to produce a determining result, wherein the determining result is an occluding state or a normal state. In the warning step, a warning is provided according to the determining result.
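The detecting step above can be sketched as a small pipeline. This is an illustrative stand-in, not the patented implementation: face detection is assumed already done, the mouth crop uses fixed illustrative proportions, and a mean-intensity heuristic substitutes for the occluding convolutional neural network.

```python
import numpy as np

def extract_mouth_region(face_image: np.ndarray) -> np.ndarray:
    """Image extracting step: crop the lower-middle portion of the
    facial portion image as the mouth portion image (proportions are
    an assumption for illustration)."""
    h, w = face_image.shape[:2]
    return face_image[int(h * 0.6):, int(w * 0.25):int(w * 0.75)]

def classify_occlusion(mouth_image: np.ndarray, threshold: float = 0.5) -> str:
    """Occluded determining step: stand-in for the occluding CNN.
    A mean-intensity score substitutes for the network's output and
    yields either the occluding state or the normal state."""
    score = float(mouth_image.mean()) / 255.0
    return "occluded" if score < threshold else "normal"

def detect_and_warn(face_image: np.ndarray) -> str:
    """Run the detecting step, then the warning step."""
    mouth = extract_mouth_region(face_image)
    state = classify_occlusion(mouth)
    if state == "occluded":
        print("warning: mouth/nose appears occluded")
    return state
```

In a real system the heuristic classifier would be replaced by a trained CNN operating on the mouth portion image; only the control flow matches the abstract.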
-
Publication No.: US10722126B2
Publication Date: 2020-07-28
Application No.: US16202110
Filing Date: 2018-11-28
Inventor: Chuan-Yu Chang , Hsiang-Chi Liu , Matthew Huei-Ming Ma
Abstract: A heart rate detection method includes a facial image data acquiring step, a feature points recognizing step, an effective displacement signal generating step and a heart rate determining step. The feature points recognizing step is for recognizing a plurality of feature points, wherein a number range of the feature points is from three to twenty, and the feature points include a center point between two medial canthi, a point of a pronasale and a point of a subnasale of the face. The effective displacement signal generating step is for calculating an original displacement signal, wherein the original displacement signal is converted to an effective displacement signal. The heart rate determining step is for transforming the effective displacement signals of each of the feature points to an effective spectrum, wherein a heart rate is determined from one of the effective spectrums corresponding to the feature points, respectively.
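The heart rate determining step can be illustrated with a spectral-peak estimate. This is a minimal sketch under stated assumptions: feature-point tracking has already produced a displacement signal sampled at `fs` Hz, and the search band of 0.75–4 Hz (45–240 bpm) is an illustrative choice, not taken from the patent.

```python
import numpy as np

def heart_rate_from_displacement(signal: np.ndarray, fs: float) -> float:
    """Estimate heart rate in beats per minute from one feature point's
    effective displacement signal via its dominant spectral peak."""
    signal = signal - signal.mean()               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))        # effective spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 4.0)       # assumed heart-rate band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0
```

Per the abstract, each feature point (medial-canthi center, pronasale, subnasale) yields its own spectrum; the heart rate is determined from one of them, so a full implementation would apply this function per point and select among the results.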
-
Publication No.: US11608170B2
Publication Date: 2023-03-21
Application No.: US16845048
Filing Date: 2020-04-09
Inventor: Ching-Ju Chen , Chuan-Yu Chang , Chia-Yan Cheng , Meng-Syue Li , Yueh-Min Huang
Abstract: A buoy position monitoring method includes a buoy positioning step, an unmanned aerial vehicle receiving step and an unmanned aerial vehicle flying step. In the buoy positioning step, a plurality of buoys are put on a water surface. Each of the buoys is capable of sending a detecting signal. Each of the detecting signals is sent periodically and includes a position dataset of each of the buoys. In the unmanned aerial vehicle receiving step, an unmanned aerial vehicle is disposed at an initial position, and the unmanned aerial vehicle receives the detecting signals. In the unmanned aerial vehicle flying step, when at least one of the buoys is lost, the unmanned aerial vehicle flies to a predetermined position to establish contact with the at least one buoy that is lost.
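The monitoring loop implied by these steps can be sketched as follows. Everything concrete here is an assumption for illustration: the loss timeout, buoy identifiers, and 2-D positions are invented, and the radio link and flight controller are stubbed out.

```python
import math

LOST_TIMEOUT = 30.0  # assumed: seconds without a detecting signal → lost

def find_lost_buoys(last_seen: dict, now: float) -> list:
    """Receiving step bookkeeping: each periodic detecting signal updates
    last_seen[buoy_id]; a buoy is lost when its signal is overdue."""
    return [bid for bid, t in sorted(last_seen.items())
            if now - t > LOST_TIMEOUT]

def fly_to(uav_pos: tuple, target_pos: tuple):
    """Flying step stub: move the UAV to the lost buoy's last reported
    position and return the new position plus distance traveled."""
    distance = math.dist(uav_pos, target_pos)
    return target_pos, distance
```

In practice the "predetermined position" would come from the lost buoy's last position dataset, which is exactly what `last_seen` would store alongside the timestamp.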
-
Publication No.: US11380348B2
Publication Date: 2022-07-05
Application No.: US17004015
Filing Date: 2020-08-27
Inventor: Chuan-Yu Chang , Jun-Ying Li
Abstract: A method for correcting infant crying identification includes the following steps: a detecting step provides an audio unit to detect a sound around an infant to generate a plurality of audio samples. A converting step provides a processing unit to convert the audio samples to generate a plurality of audio spectrograms. An extracting step provides a common model to extract the audio spectrograms to generate a plurality of infant crying features. An incremental training step provides an incremental model to train the infant crying features to generate an identification result. A judging step provides the processing unit to judge whether the identification result is correct according to a real result of the infant. When the identification result is different from the real result, an incorrect result is generated. A correcting step provides the processing unit to correct the incremental model according to the incorrect result.
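The judge-then-correct loop can be sketched with a toy incremental model. This is not the patented model: the common model (feature extraction from audio spectrograms) is assumed done upstream, the class labels are invented examples, and a nearest-centroid classifier stands in for the incremental model.

```python
import numpy as np

class IncrementalModel:
    """Stand-in incremental model: one running centroid per class."""

    def __init__(self, classes):
        self.centroids = {c: None for c in classes}
        self.counts = {c: 0 for c in classes}

    def predict(self, feature: np.ndarray) -> str:
        known = {c: v for c, v in self.centroids.items() if v is not None}
        if not known:                      # untrained: fall back to first class
            return next(iter(self.centroids))
        return min(known, key=lambda c: np.linalg.norm(feature - known[c]))

    def correct(self, feature: np.ndarray, real_label: str) -> None:
        """Correcting step: fold the misidentified sample into the
        running centroid of its real class."""
        n = self.counts[real_label]
        old = self.centroids[real_label]
        self.centroids[real_label] = (
            feature if old is None else (old * n + feature) / (n + 1))
        self.counts[real_label] = n + 1

def identify_and_correct(model, feature, real_label):
    """Judging step: compare the identification result with the real
    result; on a mismatch, run the correcting step."""
    pred = model.predict(feature)
    if pred != real_label:
        model.correct(feature, real_label)
    return pred
```

The point of the sketch is the control flow from the abstract: identification, judgment against the real result, and correction of the incremental model only when the result is incorrect.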