Abstract:
A system enabling the performance of sensory stimulating content, including music and video, through gaming in a cyber reality environment, such as one using a virtual reality headset. This disclosure includes a system and method through which a performer can virtually trigger and control a presentation of pre-packaged sensory stimulating content, including musical programs, through gaming. The pre-packaged sensory stimulating content is preferably chosen such that, even where the performer is a novice, the content is presented in a pleasing and sympathetic manner, and scoring is provided as a function of the performer's ability to provide a gesture in association with a displayed virtual trigger.
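As a purely illustrative sketch of the scoring idea (the function name, timing window, and linear falloff below are assumptions; the abstract does not specify a scoring formula), a gesture could be scored by how closely it lands on a displayed trigger's scheduled time:

    # Hypothetical sketch: score a performer's gesture against a displayed
    # virtual trigger by timing accuracy. All names and thresholds are
    # assumptions, not taken from the disclosure.

    def score_gesture(trigger_time_s: float, gesture_time_s: float,
                      full_score: int = 100, window_s: float = 0.25) -> int:
        """Return full_score for a perfectly timed gesture, falling off
        linearly to 0 at the edge of the timing window."""
        error = abs(gesture_time_s - trigger_time_s)
        if error >= window_s:
            return 0
        return round(full_score * (1.0 - error / window_s))

    # Example: a gesture 50 ms after the trigger scores 80 of 100 points.
    print(score_gesture(trigger_time_s=12.00, gesture_time_s=12.05))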
Abstract:
An electronic device and a method for reproducing a sound in the electronic device are provided. The electronic device includes a touchscreen displaying a keyboard having a plurality of keys and a plurality of sound source buttons corresponding respectively to a plurality of different sound sources, a processor connected electrically to the touchscreen, and a memory connected electrically to the processor, wherein the memory stores instructions that, when executed, cause the processor to perform control such that, when an input to at least one key among the plurality of keys is received, the sound source corresponding to at least one sound source button selected among the plurality of sound source buttons is reproduced as a sound corresponding to the received input.
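A minimal sketch of the key/sound-source pairing described above, assuming a simple lookup table of sound sources; all names and the playback call are illustrative, not from the abstract:

    # Hypothetical sketch: each key press is rendered with whichever sound
    # source button is currently selected.

    SOUND_SOURCES = {"piano": "piano_samples", "organ": "organ_samples",
                     "strings": "string_samples"}

    class Keyboard:
        def __init__(self) -> None:
            self.selected_source = "piano"  # default sound source button

        def select_source(self, source: str) -> None:
            if source in SOUND_SOURCES:
                self.selected_source = source

        def on_key_input(self, key_index: int) -> str:
            # Reproduce the received key input using the selected source.
            bank = SOUND_SOURCES[self.selected_source]
            return f"play {bank}[{key_index}]"

    kb = Keyboard()
    kb.select_source("organ")
    print(kb.on_key_input(60))  # play organ_samples[60]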
Abstract:
An electronic device is provided. The electronic device includes a touch screen display, at least one of a speaker and a sound interface, a processor configured to electrically connect to the touch screen display, the speaker, and the sound interface, and a memory configured to electrically connect to the processor. The memory stores instructions for, when executed, causing the processor to display at least one item comprising a musical instrument shape on the touch screen display, receive a touch input through the touch screen display, load sound data corresponding to the at least one item based on the touch input, process the sound data based at least in part on information associated with the touch input, and output the processed sound data through the speaker or the sound interface.
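A minimal sketch of the load/process/output flow, assuming touch pressure is the touch-input attribute used for processing; the types, field names, and sample data are illustrative assumptions:

    # Hypothetical sketch: sound data is looked up per touched item, then
    # scaled by a touch attribute (here, pressure) before output.

    from dataclasses import dataclass

    @dataclass
    class TouchInput:
        item: str        # which instrument-shaped item was touched
        pressure: float  # normalized 0.0-1.0

    SOUND_DATA = {"guitar": [0.2, 0.5, 0.3], "drum": [0.9, 0.1]}

    def handle_touch(touch: TouchInput) -> list[float]:
        samples = SOUND_DATA[touch.item]              # load sound data for item
        return [s * touch.pressure for s in samples]  # process using touch info

    print(handle_touch(TouchInput(item="guitar", pressure=0.5)))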
Abstract:
An electronic device is provided. The electronic device includes a display; a memory for storing at least one audio signal; a communication circuit configured to establish wireless communication with an external device; and a processor electrically connected with the display, the memory, and the communication circuit, wherein the memory stores instructions for, when executed, causing the processor to: produce the at least one audio signal, receive data associated with a gesture through the communication circuit from the external device, apply a sound effect, selected based at least in part on the data associated with the gesture, to the produced at least one audio signal, and output or store a resulting audio signal, wherein the resulting audio signal represents application of the sound effect to the produced at least one audio signal.
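One way the gesture-selected effect might be modeled, shown as a hedged sketch; the gesture labels, effect table, and the effects themselves are assumptions for illustration:

    # Hypothetical sketch: a gesture label received from an external device
    # selects a sound effect that is applied to the produced audio signal.

    EFFECTS = {
        "shake": lambda x: [s * 0.5 for s in x],  # e.g. attenuate
        "swipe": lambda x: list(reversed(x)),     # e.g. reverse
    }

    def apply_gesture_effect(audio: list[float], gesture: str) -> list[float]:
        effect = EFFECTS.get(gesture, lambda x: x)  # no-op for unknown gestures
        return effect(audio)

    print(apply_gesture_effect([0.1, 0.4, 0.8], "shake"))  # [0.05, 0.2, 0.4]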
Abstract:
A computerized musical percussion instrument is disclosed. Markers carried by the musician are observed by an imager to produce a series of two-dimensional images over the time of the performance. A processor receives the images and distinguishes between markers (e.g., left hand, right hand) by comparing the position and size of unidentified markers in the current image to the position and size of identified markers in preceding images. The processor analyzes each marker's movements and detects a drum hit when a marker undergoes a sharp reversal of its motion direction after reaching sufficient speed. The processor determines which drum the musician intends to hit by comparing the position and size of the marker at the instant of the hit to the position and size attributes of each drum. The processor outputs an audio signal for each hit, corresponding to the drum hit, with a volume determined by marker speed.
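A minimal sketch of the hit-detection rule described above, reduced to one dimension for clarity; the speed threshold, volume scaling, and frame-based velocities are assumptions, not taken from the disclosure:

    # Hypothetical sketch: a hit is registered when a tracked marker's
    # vertical motion reverses sharply after exceeding a speed threshold,
    # with volume scaled by that speed.

    SPEED_THRESHOLD = 2.0  # arbitrary units per frame (assumed)

    def detect_hits(y_positions: list[float]) -> list[tuple[int, float]]:
        """Return (frame_index, volume) for each detected hit in a series
        of per-frame marker heights."""
        hits = []
        for i in range(2, len(y_positions)):
            v_prev = y_positions[i - 1] - y_positions[i - 2]
            v_curr = y_positions[i] - y_positions[i - 1]
            # Sharp reversal: fast downward motion followed by upward motion.
            if v_prev < -SPEED_THRESHOLD and v_curr > 0:
                volume = min(1.0, abs(v_prev) / 10.0)  # louder for faster hits
                hits.append((i, volume))
        return hits

    # A marker drops quickly then rebounds: one hit at the turning point.
    print(detect_hits([10.0, 9.0, 5.0, 0.5, 2.0, 4.0]))  # [(4, 0.45)]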
Abstract:
A user interface implemented on a touch-sensitive display for a virtual musical instrument comprising a plurality of chord touch regions configured in a predetermined sequence, each chord touch region corresponding to a chord in a musical key and being divided into a plurality of separate touch zones, the plurality of chord touch regions defining a predetermined set of chords, where each of the plurality of separate touch zones in each region is associated with one or more preselected MIDI files stored in a computer-readable medium. In some embodiments, the touch zones are configured to provide different harmonic configurations of a base chord associated with the chord touch region. Some harmonic configurations provide progressively wider harmonic ranges across each adjacent touch zone. Other harmonic configurations can provide chords with a progressively higher relative pitch across each adjacent touch zone.
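A minimal sketch of the region/zone-to-MIDI mapping, assuming three regions of three zones each in C major; the chords, zone counts, and file names are illustrative assumptions:

    # Hypothetical sketch: each chord touch region holds an ordered list of
    # touch zones, each zone naming a preselected MIDI file.

    CHORD_REGIONS = [  # predetermined sequence of chords in a musical key
        {"chord": "C", "zones": ["C_low.mid", "C_mid.mid", "C_high.mid"]},
        {"chord": "F", "zones": ["F_low.mid", "F_mid.mid", "F_high.mid"]},
        {"chord": "G", "zones": ["G_low.mid", "G_mid.mid", "G_high.mid"]},
    ]

    def midi_for_touch(region_index: int, zone_index: int) -> str:
        """Resolve a touch in (region, zone) to its preselected MIDI file."""
        return CHORD_REGIONS[region_index]["zones"][zone_index]

    print(midi_for_touch(1, 2))  # F_high.mid: F chord, highest-pitched zone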
Abstract:
A wireless sensor network for musical instruments is provided that allows a musician to communicate natural performance gestures (orientation, pressure, tilt, etc.) to a computer. User interfaces and computing modules are also provided that enable a user to utilize the data communicated by the wireless sensor network to supplement and/or augment the musician's artistic expression.
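A hedged sketch of how a sensor node might package those performance gestures for the computer; the packet format and field names are assumptions, not from the disclosure:

    # Hypothetical sketch: a sensor node serializes performance gestures
    # (orientation, pressure, tilt) into a message for transmission.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class GesturePacket:
        node_id: int        # which sensor on the instrument
        orientation: float  # degrees
        pressure: float     # normalized 0.0-1.0
        tilt: float         # degrees

    def encode(packet: GesturePacket) -> bytes:
        # Serialize for transmission over the wireless link.
        return json.dumps(asdict(packet)).encode("utf-8")

    msg = encode(GesturePacket(node_id=3, orientation=45.0, pressure=0.7, tilt=-10.0))
    print(msg)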
Abstract:
Embodiments of the present application relate generally to personal electronics, portable electronics, wearable electronics, and more specifically to wirelessly enabled devices that include a haptic interface and are configured to wirelessly communicate with one another to synchronize body motion or other user actions based on haptic prompts generated by a sensor system in one or more of the wirelessly enabled devices. Each wirelessly enabled device may include at least one radio configured to transmit, receive, or both, RF signals encoded with motion data operative to generate sensory outputs from the haptic interface of one or more of the wirelessly enabled devices. At least one of the wirelessly enabled devices may be configured as a leader device, and one or more other wirelessly enabled devices may be configured as follower devices. One or more wirelessly enabled devices may be wirelessly linked to a wireless media device that generates motion data.
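A minimal sketch of the leader/follower pattern, with a plain function call standing in for the RF link; the class names and pulse mapping are illustrative assumptions:

    # Hypothetical sketch: motion data sensed on the leader device is
    # broadcast and rendered as haptic prompts on each follower.

    class FollowerDevice:
        def __init__(self, name: str) -> None:
            self.name = name

        def on_motion_data(self, beat: int, intensity: float) -> None:
            # Render the received motion data as a haptic prompt.
            print(f"{self.name}: haptic pulse {intensity:.1f} on beat {beat}")

    class LeaderDevice:
        def __init__(self) -> None:
            self.followers: list[FollowerDevice] = []

        def link(self, device: FollowerDevice) -> None:
            self.followers.append(device)

        def broadcast_motion(self, beat: int, intensity: float) -> None:
            # Stand-in for an RF transmission of encoded motion data.
            for f in self.followers:
                f.on_motion_data(beat, intensity)

    leader = LeaderDevice()
    leader.link(FollowerDevice("wristband-A"))
    leader.link(FollowerDevice("wristband-B"))
    leader.broadcast_motion(beat=1, intensity=0.8)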
Abstract:
A computer-implemented method including generating a user interface, implemented on a touch-sensitive display, configured to generate a virtual dual flywheel system for modulating a lifecycle of a musical note or chord. The dual flywheel system (DFS) includes a first virtual flywheel system (VFS) and a second VFS, where the first VFS is connected in series to the second VFS such that an output of the first VFS is coupled to an input of the second VFS. Upon receiving a user input on the user interface, the DFS determines a virtual momentum for the first VFS based on the user input and a predetermined mass coefficient of the first VFS, and determines a virtual momentum for the second VFS based on the virtual momentum of the first VFS and a predetermined mass coefficient of the second VFS.
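A minimal sketch of the series-connected momentum computation; the linear relations and coefficient values are assumptions, since the abstract does not specify the formulas:

    # Hypothetical sketch: the first flywheel's momentum is derived from
    # the user input, and the second's from the first's output.

    def dual_flywheel_momenta(user_input: float,
                              mass_coeff_1: float = 2.0,
                              mass_coeff_2: float = 0.5) -> tuple[float, float]:
        momentum_1 = user_input * mass_coeff_1  # first VFS driven by input
        momentum_2 = momentum_1 * mass_coeff_2  # second VFS driven by first
        return momentum_1, momentum_2

    m1, m2 = dual_flywheel_momenta(user_input=3.0)
    print(m1, m2)  # 6.0 3.0, e.g. used to modulate a note's lifecycle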
Abstract:
A user interface implemented on a touch-sensitive display for a virtual musical instrument comprising a plurality of chord touch regions configured in a predetermined sequence, each chord touch region corresponding to a chord in a musical key and being divided into a plurality of separate touch zones, the plurality of chord touch regions defining a predetermined set of chords, where each of the plurality of separate touch zones in each region is associated with one or more preselected MIDI files stored in a computer-readable medium. Each of the plurality of touch zones is configured to detect one or more of a plurality of touch gesture articulations, including at least one of a legato articulation, a pizzicato articulation, or a staccato articulation. The detected one or more touch gesture articulations determine the preselected MIDI file associated with each of the plurality of separate touch zones.
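A hedged sketch of articulation detection selecting a zone's MIDI file, assuming touch duration and movement distinguish the three articulations; the thresholds and file names are illustrative:

    # Hypothetical sketch: a touch gesture is classified as legato,
    # staccato, or pizzicato, and the classification picks the zone's
    # MIDI file. Thresholds are assumptions, not from the disclosure.

    def classify_articulation(duration_s: float, moved: bool) -> str:
        if moved:
            return "legato"  # e.g. a dragged touch
        return "staccato" if duration_s < 0.15 else "pizzicato"

    ZONE_MIDI = {
        "legato":    "C_zone1_legato.mid",
        "staccato":  "C_zone1_staccato.mid",
        "pizzicato": "C_zone1_pizzicato.mid",
    }

    articulation = classify_articulation(duration_s=0.1, moved=False)
    print(ZONE_MIDI[articulation])  # C_zone1_staccato.mid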