Abstract:
Apparatuses, methods, and systems are presented for reacting to scene-based occurrences. Such an apparatus may comprise dedicated computer vision (CV) computation hardware configured to receive sensor data from a sensor array comprising a plurality of sensor pixels and capable of computing one or more CV features using readings from neighboring sensor pixels of the sensor array. The apparatus may further comprise a first processing unit configured to control operation of the dedicated CV computation hardware. The first processing unit may be further configured to execute one or more application programs and, in conjunction with execution of the one or more application programs, communicate with at least one input/output (I/O) device controller, to effectuate an I/O operation in reaction to an event generated based on operations performed on the one or more computed CV features.
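The abstract above describes an architecture rather than an implementation; the following Python sketch is purely illustrative and assumes toy stand-ins for each block: an LBP-style feature computed from neighboring pixels plays the role of the dedicated CV computation hardware, a simple threshold plays the role of event generation, and a print statement stands in for the I/O device controller.

```python
# Illustrative sketch only (names, the LBP-style feature, and the event test
# are assumptions, not the claimed implementation).
import numpy as np

def local_binary_pattern(frame: np.ndarray) -> np.ndarray:
    """Toy CV feature: compare each pixel with its 8 neighbors (LBP-style)."""
    padded = np.pad(frame, 1, mode="edge")
    codes = np.zeros_like(frame, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + frame.shape[0], 1 + dx:1 + dx + frame.shape[1]]
        codes |= (neighbor >= frame).astype(np.uint8) << bit
    return codes

def detect_event(features: np.ndarray, threshold: int = 200) -> bool:
    """Hypothetical event test on the computed CV features."""
    return int(np.count_nonzero(features > 128)) > threshold

def io_operation(event_name: str) -> None:
    """Stand-in for an I/O operation effected through an I/O device controller."""
    print(f"I/O operation triggered by event: {event_name}")

# Application loop executed by the single (first) processing unit.
frame = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in sensor readout
features = local_binary_pattern(frame)                    # "dedicated CV hardware"
if detect_event(features):
    io_operation("scene_change")
```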
Abstract:
Aspects of the present disclosure relate to methods and apparatus for training an artificial nervous system. According to certain aspects, the timing of spikes of an artificial neuron during a training iteration is recorded, the spikes of the artificial neuron are replayed according to the recorded timing during a subsequent training iteration, and parameters associated with the artificial neuron are updated based, at least in part, on the subsequent training iteration.
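A minimal sketch of the record-and-replay idea, assuming a toy spiking rule and a toy STDP-like weight update (neither is specified by the abstract): spike times are recorded in one training pass and replayed, rather than recomputed, in the next pass before the parameter update.

```python
# Minimal sketch; the spiking rule and the weight update are assumptions.
recorded_spike_times = []

def training_iteration(input_times, weight, record=True, replay=None):
    """Run one pass; either record the neuron's spikes or replay recorded ones."""
    spikes = replay if replay is not None else [t for t in input_times if t % 2 == 0]
    if record:
        recorded_spike_times.extend(spikes)
    # Toy STDP-like update based on (pre, post) spike-time pairings.
    for pre in input_times:
        for post in spikes:
            weight += 0.01 if post >= pre else -0.01
    return weight

w = 0.5
w = training_iteration([1, 2, 3, 4], w, record=True)  # record spike timing
w = training_iteration([1, 2, 3, 4], w, record=False,
                       replay=recorded_spike_times)   # replay during next iteration
print(f"updated weight: {w:.2f}")
```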
Abstract:
Techniques disclosed herein utilize a vision sensor that integrates a special-purpose camera with dedicated computer vision (CV) computation hardware and a dedicated low-power microprocessor for the purposes of detecting, tracking, recognizing, and/or analyzing subjects, objects, and scenes in the view of the camera. The vision sensor processes the information retrieved from the camera using the included low-power microprocessor and sends "events" (indications that one or more reference occurrences have occurred and, possibly, associated data) to the main processor only when needed, or as defined and configured by the application. This allows the general-purpose microprocessor (which is typically relatively high-speed and high-power in order to support a variety of applications) to stay in a low-power state (e.g., a sleep mode) most of the time, as it conventionally would, becoming active only when events are received from the vision sensor.
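The division of labor can be illustrated with a short, hedged sketch (the detection rule, threshold, and event payload below are assumptions): the low-power path inspects every frame, and the main processor is invoked only when an event fires.

```python
# Hedged illustration of the event-driven hand-off; details are assumptions.
import random

def low_power_detect(frame_brightness: float) -> bool:
    """Stand-in for CV-based detection running on the vision sensor."""
    return frame_brightness > 0.9  # hypothetical "reference occurrence"

def main_processor_handle(event: dict) -> None:
    """Work that runs only when the application processor is awakened."""
    print(f"main processor woke for event: {event}")

for frame_id in range(100):
    brightness = random.random()       # stand-in for a captured frame
    if low_power_detect(brightness):   # runs on the low-power microprocessor
        main_processor_handle({"frame": frame_id, "brightness": round(brightness, 2)})
    # otherwise the main processor remains in its low-power (sleep) state
```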
Abstract:
Methods and apparatus are provided for using a breakpoint determination unit to examine an artificial nervous system. One example method generally includes operating at least a portion of the artificial nervous system; using the breakpoint determination unit to detect that a condition exists, based at least in part on monitoring one or more components in the artificial nervous system; and at least one of suspending, examining, modifying, or flagging the operation of at least the portion of the artificial nervous system, based at least in part on the detection.
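As a hedged sketch of the mechanism (the monitored quantity, condition, and response are assumptions), a breakpoint determination unit can be modeled as a predicate evaluated against monitored state each step, with the simulation suspended and flagged when the predicate holds.

```python
# Sketch under assumed semantics; not the claimed implementation.
def breakpoint_condition(potentials):
    """Hypothetical condition: any monitored membrane potential exceeds a bound."""
    return any(v > 1.0 for v in potentials)

potentials = [0.0, 0.2, 0.4]
for step in range(50):
    potentials = [v + 0.05 * (i + 1) for i, v in enumerate(potentials)]  # toy dynamics
    if breakpoint_condition(potentials):
        print(f"breakpoint at step {step}: {potentials}")  # flag the operation
        break                                              # suspend for examination
```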
Abstract:
Methods and apparatus are provided for determining synapses in an artificial nervous system based on connectivity patterns. One example method generally includes determining, for an artificial neuron, that an event has occurred; based on the event, determining one or more synapses with other artificial neurons based on a connectivity pattern associated with the artificial neuron; and applying a spike from the artificial neuron to the other artificial neurons based on the determined synapses. In this manner, the connectivity patterns (or parameters for determining such patterns) for particular neuron types, rather than the connectivity itself, may be stored. Using the stored information, synapses may be computed on the fly, thereby reducing memory consumption and increasing effective memory bandwidth. This also saves time during artificial nervous system updates.
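A minimal sketch of computing synapses on the fly (the seeding scheme and fan-out are assumptions): only a per-neuron connectivity pattern is stored, and the target list is regenerated deterministically whenever the neuron spikes.

```python
# Minimal sketch; the pattern (a deterministic seed plus fan-out) is an assumption.
import random

NUM_NEURONS = 1000
FAN_OUT = 8

def synapses_for(neuron_id: int) -> list:
    """Recompute this neuron's outgoing connections from its connectivity pattern."""
    rng = random.Random(neuron_id)  # pattern parameter: seed derived from the neuron
    return rng.sample(range(NUM_NEURONS), FAN_OUT)

def on_spike(neuron_id: int, potentials: list, weight: float = 0.1) -> None:
    """Apply the spike to targets determined from the connectivity pattern."""
    for target in synapses_for(neuron_id):
        potentials[target] += weight

potentials = [0.0] * NUM_NEURONS
on_spike(42, potentials)
print(sum(1 for v in potentials if v > 0))  # FAN_OUT targets received the spike
```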
Abstract:
Certain aspects of the present disclosure support a technique for optimized representation of variables in neural systems. Bit allocation for neural signals and parameters in a neural network, as described in the present disclosure, may comprise allocating quantization levels to the neural signals based on at least one measure of sensitivity of a pre-determined performance metric to quantization errors in the neural signals, and allocating bits to the parameters based on the at least one measure of sensitivity of the pre-determined performance metric to quantization errors in the parameters.
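One way to make the idea concrete is the sketch below; the proportional-to-log-sensitivity rule is an assumption, not the claimed allocation method. Variables to which the performance metric is more sensitive receive more bits out of a fixed budget.

```python
# Illustrative allocation rule only; the sensitivities and the rule are assumptions.
import math

def allocate_bits(sensitivities, total_bits):
    """Give more bits to variables whose quantization error hurts the metric more."""
    logs = {k: math.log2(max(s, 1e-12)) for k, s in sensitivities.items()}
    base = total_bits / len(logs)
    mean_log = sum(logs.values()) / len(logs)
    # Water-filling-style rule: offset each variable from the average bit budget
    # by half its log-sensitivity gap, clamped to at least 1 bit.
    return {k: max(1, round(base + 0.5 * (logs[k] - mean_log))) for k in logs}

sensitivities = {"weights": 8.0, "membrane_potential": 2.0, "spike_threshold": 0.5}
print(allocate_bits(sensitivities, total_bits=24))
# {'weights': 9, 'membrane_potential': 8, 'spike_threshold': 7}
```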
Abstract:
Certain aspects of the present disclosure support simultaneously operating multiple super neuron processing units in an artificial nervous system, wherein a plurality of artificial neurons is assigned to each super neuron processing unit. The super neuron processing units can be interfaced with a memory for storing and loading synaptic weights and plasticity parameters of the artificial nervous system, wherein the organization of the memory allows contiguous memory access.
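A hedged sketch of the memory layout idea (the block sizes and the update rule are assumptions): each super neuron unit owns a contiguous slice of the weight memory, so its loads and stores are contiguous.

```python
# Sketch only; the layout and the toy plasticity update are assumptions.
import numpy as np

NUM_UNITS = 4
NEURONS_PER_UNIT = 256
SYNAPSES_PER_NEURON = 16

# One contiguous weight region per super neuron processing unit.
weights = np.zeros((NUM_UNITS, NEURONS_PER_UNIT, SYNAPSES_PER_NEURON), dtype=np.float32)

def unit_slice(unit_id: int) -> np.ndarray:
    """Contiguous view of all synaptic weights owned by one super neuron unit."""
    return weights[unit_id]  # C-contiguous slice, loadable in one burst

def update_unit(unit_id: int, delta: float) -> None:
    """Toy plasticity update applied across a unit's whole contiguous slice."""
    unit_slice(unit_id)[...] += delta

for u in range(NUM_UNITS):  # in hardware these units would run concurrently
    update_unit(u, 0.01)
print(weights[0, 0, :4])
```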