Abstract:
A pitch shifting apparatus detects peak spectra P1 and P2 from the amplitude spectra of an input sound. The apparatus compresses or expands the amplitude spectrum distribution AM1 in a first frequency region A1, which includes the first frequency f1 of the peak spectrum P1, by a pitch shift ratio while preserving the shape of the distribution, thereby obtaining an amplitude spectrum distribution AM10 for a pitch-shifted first frequency region A10. It similarly compresses or expands an amplitude spectrum distribution AM2 adjacent to the peak spectrum P2 to obtain an amplitude spectrum distribution AM20. The apparatus then performs pitch shifting on the amplitude spectra in an intermediate frequency region A3 between the peak spectra P1 and P2 by compressing or expanding them at a given pitch shift ratio in accordance with each amplitude spectrum.
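The shape-preserving compression/expansion of a peak region can be sketched as follows (a minimal numpy sketch; the function name, bin ranges, and linear-interpolation resampling are illustrative assumptions, not details from the abstract):

```python
import numpy as np

def shift_peak_region(spectrum, lo, hi, ratio):
    """Map the amplitude distribution in bins [lo, hi) to a region whose
    position and width scale with the pitch shift ratio, preserving the
    shape of the distribution.  `spectrum` is a 1-D magnitude spectrum;
    `ratio` is the pitch shift ratio (e.g. 2 ** (semitones / 12)).
    Hypothetical helper, not taken from the patent text."""
    region = spectrum[lo:hi]
    new_lo, new_hi = int(round(lo * ratio)), int(round(hi * ratio))
    n_new = max(new_hi - new_lo, 1)
    # Resample the region: compression for ratio < 1, expansion for > 1.
    src = np.linspace(0, len(region) - 1, n_new)
    shifted = np.interp(src, np.arange(len(region)), region)
    out = np.zeros_like(spectrum)
    end = min(new_lo + n_new, len(out))
    out[new_lo:end] = shifted[:end - new_lo]
    return out

# Example: a peak around bin 100 shifted up a fifth (ratio 1.5)
spec = np.zeros(512)
spec[98:103] = [0.2, 0.6, 1.0, 0.6, 0.2]
shifted = shift_peak_region(spec, 95, 106, 1.5)
```

The same helper would be applied once per peak region (A1 and the region around P2), with the intermediate region A3 handled separately.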
Abstract:
A sound analysis apparatus employs tone models that are associated with various fundamental frequencies, each of which simulates the harmonic structure of a performance sound generated by a musical instrument. It defines a weighted mixture of the tone models to simulate the frequency components of the performance sound, then sequentially updates and optimizes the weight values of the respective tone models so that the frequency distribution of the weighted mixture corresponds to the distribution of the frequency components of the performance sound. Finally, it estimates the fundamental frequency of the performance sound based on the optimized weight values.
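The weighted-mixture idea can be sketched with an EM-like weight update (a sketch under assumptions: Gaussian harmonic bumps as the tone model, a fixed bin grid, and a simple responsibility-based update; all names and the exact model shape are illustrative, not from the abstract):

```python
import numpy as np

def tone_model(f0, n_bins, n_harm=8, width=2.0):
    """Harmonic-structure model: Gaussian bumps at integer multiples of
    f0 (in bins), with amplitudes decaying as 1/h; normalised to sum 1."""
    bins = np.arange(n_bins)
    m = np.zeros(n_bins)
    for h in range(1, n_harm + 1):
        m += np.exp(-0.5 * ((bins - h * f0) / width) ** 2) / h
    return m / m.sum()

def estimate_f0(observed, f0_candidates, n_iter=50):
    """Sequentially update mixture weights so the weighted mixture of
    tone models matches the observed spectrum; the candidate with the
    largest optimized weight is the estimated fundamental frequency."""
    obs = observed / observed.sum()
    models = np.array([tone_model(f, len(obs)) for f in f0_candidates])
    w = np.full(len(f0_candidates), 1.0 / len(f0_candidates))
    for _ in range(n_iter):
        mix = w @ models + 1e-12            # current weighted mixture
        resp = models * (w[:, None] / mix)  # responsibility of each model
        w = (resp * obs).sum(axis=1)        # updated weights, sum to 1
    return f0_candidates[int(np.argmax(w))], w

obs = tone_model(40.0, 512)                 # synthetic "performance sound"
f0, w = estimate_f0(obs, [30.0, 40.0, 50.0])
```

The update is the standard EM step for mixture weights; the patent text only specifies that the weights are sequentially updated and optimized, so the concrete rule here is one plausible instantiation.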
Abstract:
PROBLEM TO BE SOLVED: To realize music analysis (especially music search) that is robust against errors in a designated note sequence. SOLUTION: A feature extraction unit 22 generates a time series (feature vector series) X of feature vectors xm from a note sequence designated according to an instruction from a user. A probabilistic model (a weight λk for each feature function fk()) generated by machine learning on the time series of feature vectors xm of a plurality of reference music pieces is applied to the feature vector series X of the designated note sequence, and thereby an evaluation index value SC[n], which depends on the probability P[ym=Ln] that the designated note sequence is a note sequence of a reference music piece, is computed for each reference music piece.
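A log-linear model of this kind (weights λk on feature functions fk, normalised into a probability per reference piece) can be sketched as follows; the concrete feature functions and weights below are illustrative placeholders, not the learned model from the abstract:

```python
import math

def evaluation_indices(query, references, feature_fns, weights):
    """Per-reference evaluation index SC[n]: the normalised log-linear
    score exp(sum_k lambda_k * f_k(query, reference_n)), so the values
    sum to 1 over the reference pieces."""
    raw = [math.exp(sum(l * f(query, ref)
                        for l, f in zip(weights, feature_fns)))
           for ref in references]
    z = sum(raw)                      # partition function
    return [r / z for r in raw]

# Hypothetical feature functions on note sequences (MIDI note numbers):
feats = [lambda q, r: -abs(len(q) - len(r)),            # length mismatch
         lambda q, r: sum(1 for a, b in zip(q, r) if a == b)]  # matches
sc = evaluation_indices([60, 62, 64],
                        [[60, 62, 64], [50, 51, 52, 53]],
                        feats, [1.0, 0.5])
```

Because the score is a normalised exponential of weighted features rather than an exact-match test, small errors in the designated note sequence lower the index gracefully instead of eliminating the piece, which is the robustness the abstract aims at.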
Abstract:
PROBLEM TO BE SOLVED: To search for musical data related to an automatic accompaniment containing a phrase whose musical tone pattern, on a part specified by a user, is similar to an intended musical tone pattern to a degree that satisfies predetermined conditions. SOLUTION: On a rhythm input device 10, a user inputs a rhythm pattern through an operator corresponding to one of a plurality of parts constituting automatic accompaniment data. An input rhythm pattern storage section 212 stores the input rhythm pattern in RAM on the basis of a clock signal output by a bar line clock output section 211 and input trigger data. A part specification section 213 specifies the target part from MIDI information associated with the operator. A rhythm pattern search section 214 searches an automatic accompaniment DB 221 for automatic accompaniment data whose rhythm pattern on the specified part has the highest degree of similarity to the input rhythm pattern.
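The search step can be sketched as a nearest-pattern lookup; the abstract only states that the most similar pattern on the specified part is retrieved, so the symmetric onset-distance measure and the record layout below are assumptions:

```python
def rhythm_distance(a, b):
    """Symmetric distance between two rhythm patterns given as onset
    times in beats within one bar: each onset is matched to its nearest
    onset in the other pattern and the offsets are summed."""
    if not a or not b:
        return float("inf")
    d_ab = sum(min(abs(x - y) for y in b) for x in a)
    d_ba = sum(min(abs(x - y) for y in a) for x in b)
    return d_ab + d_ba

def search(db, part, input_pattern):
    """Return the accompaniment record whose pattern on `part` is most
    similar to the input rhythm pattern."""
    return min(db, key=lambda rec: rhythm_distance(rec[part], input_pattern))

# Hypothetical DB: each record maps a part name to onset times in beats.
db = [{"bass": [0.0, 1.0, 2.0, 3.0]},
      {"bass": [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]}]
best = search(db, "bass", [0.0, 1.0, 2.0, 3.0])
```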
Abstract:
PROBLEM TO BE SOLVED: To prevent unwanted sound from being generated between fragments and to make the seams inconspicuous when a plurality of musical pieces are connected in units of fragments to generate a new musical piece. SOLUTION: The music processing device includes storage means for storing musical sound data representing the waveform of each fragment obtained by dividing a musical piece. The device is configured to: sequentially read the relevant musical sound data from the storage means according to a reproduction instruction indicating a plurality of fragments to be reproduced and the reproduction timing of each fragment; apply a fade-out envelope within the fragment represented by the musical sound data; apply, when the musical sound data is output following other musical sound data, a fade-in envelope before the start of the fragment represented by the musical sound data; and output the enveloped musical sound data with cross-fading. COPYRIGHT: (C)2010,JPO&INPIT
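The fade-out/fade-in overlap can be sketched as a simple cross-fade join (numpy sketch; the linear envelopes and the `fade_len` parameter are assumptions, since the abstract does not specify the envelope shape):

```python
import numpy as np

def crossfade_join(a, b, fade_len):
    """Fade out the tail of fragment `a`, fade in the head of fragment
    `b`, and overlap-add them so the seam is inconspicuous.  `a` and `b`
    are 1-D float waveforms; `fade_len` is the overlap in samples."""
    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = np.linspace(0.0, 1.0, fade_len)
    a, b = a.copy(), b.copy()
    a[-fade_len:] *= fade_out
    b[:fade_len] *= fade_in
    out = np.zeros(len(a) + len(b) - fade_len)
    out[:len(a)] += a                 # fragment a, tail faded out
    out[len(a) - fade_len:] += b      # fragment b, head faded in
    return out

out = crossfade_join(np.ones(8), np.ones(8), 4)
```

With complementary linear envelopes the two fades sum to unity across the overlap, so a constant signal passes through the seam unchanged, which is exactly why no extraneous sound appears between fragments.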
Abstract:
PROBLEM TO BE SOLVED: To precisely evaluate the degree of consonance or dissonance between a plurality of sounds. SOLUTION: A mask generation section 30 generates an evaluating mask M indicating, for each frequency, the degree of dissonance between a target sound VA and a sound at that frequency, by setting a dissonance function Fd, which expresses the relationship between the frequency difference from a peak p and the degree of dissonance with the component of that peak, for each of a plurality of peaks p in a spectral sequence RA of the target sound VA. An index calculation section 60 collates a spectral sequence RB of an evaluated sound VB with the evaluating mask M to calculate a consonance index value D indicating the degree of consonance or dissonance between the target sound VA and the evaluated sound VB. COPYRIGHT: (C)2010,JPO&INPIT
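The mask-and-collate scheme can be sketched as follows; the concrete curve for Fd (a Plomp-Levelt-like bump that is zero at unison, peaks at a small frequency difference, and decays further away) and the dot-product collation are assumptions, since the abstract does not give the exact function:

```python
import numpy as np

def dissonance_fn(df, width=30.0):
    """Assumed shape of Fd: dissonance as a function of the frequency
    difference `df` (Hz) from a peak; 0 at df = 0, maximal (1.0) at
    |df| = width, decaying toward zero for larger differences."""
    x = np.abs(df) / width
    return x * np.exp(1.0 - x)

def evaluating_mask(peak_freqs, freqs):
    """Mask M: for each frequency, the largest dissonance contributed
    by any peak p of the target sound VA."""
    return np.max([dissonance_fn(freqs - p) for p in peak_freqs], axis=0)

def consonance_index(mask, spectrum_b):
    """Collate the evaluated sound's spectrum RB with the mask M:
    a large value means strong dissonance against the target sound."""
    return float(np.dot(mask, spectrum_b) / (spectrum_b.sum() + 1e-12))

freqs = np.arange(0.0, 1000.0)            # 1 Hz frequency grid
mask = evaluating_mask([200.0], freqs)    # target VA has one peak at 200 Hz
```

A component of VB exactly at a peak of VA scores zero (unison is consonant), while a component offset by the critical-band-like `width` scores maximally dissonant.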
Abstract:
PROBLEM TO BE SOLVED: To enable a user to visually grasp a range of sounds matching his or her image and to play using sounds within that range. SOLUTION: The player displays on a display device an image in which each item of sound data stored in a database is represented as a point in a two-dimensional space; the database stores, in association with each item of sound data, feature quantity data indicating the degrees of two kinds of sound features. When an operation specifying one point in the two-dimensional space is detected, the player reads out the sound data corresponding to each point whose distance from the specified point is less than a predetermined threshold, and performs output control of a sound signal according to that sound data. COPYRIGHT: (C)2008,JPO&INPIT
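The selection step can be sketched as a threshold test on Euclidean distance in the two-dimensional feature space (the threshold value, the record layout, and the sound ids below are hypothetical):

```python
import math

def sounds_near(database, px, py, threshold=0.1):
    """Return the ids of the sounds whose 2-D feature point lies closer
    to the specified point (px, py) than `threshold`; these are the
    sounds the player reads out and outputs."""
    return [sid for sid, (x, y) in database.items()
            if math.hypot(x - px, y - py) < threshold]

# Hypothetical database: sound id -> (feature 1, feature 2), each in [0, 1].
db = {"kick": (0.2, 0.1), "snare": (0.8, 0.7), "hat": (0.25, 0.12)}
hits = sounds_near(db, 0.22, 0.11)
```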