Abstract:
The position of a trimming region is decided in accordance with a scene type represented by an image that is a layout target. The trimming region is then cut out at the decided position.
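As a rough illustration of the idea, scene-type-dependent trimming could be sketched as follows. The particular scene types, position rules, and list-of-rows image representation here are illustrative assumptions, not the patented method:

```python
def trim_for_scene(image, scene_type, crop_w, crop_h):
    """Crop `image` (a list of pixel rows) at a position decided by scene type."""
    h, w = len(image), len(image[0])
    # Illustrative rule table: landscapes keep the top band, portraits the centre.
    if scene_type == "landscape":
        top, left = 0, (w - crop_w) // 2
    elif scene_type == "portrait":
        top, left = (h - crop_h) // 2, (w - crop_w) // 2
    else:  # default rule: top-left corner
        top, left = 0, 0
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]
```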
Abstract:
A control method for an image processing apparatus that generates a layout image by arranging an image in a template includes: specifying, as a designated object, at least one of a plurality of objects including a first object and a second object, according to a combination of a plurality of setting values, including a setting value regarding the first object and a setting value regarding the second object, based on received input; selecting image data from a plurality of pieces of image data; and outputting a layout image in which an image represented by the selected image data is arranged in a template. From the plurality of pieces of image data, image data representing an image that contains the designated object is selected in preference to image data representing an image that does not.
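The preferential selection step might be sketched like this; the `(image_id, objects)` tuple representation and the stable two-group ordering are illustrative assumptions:

```python
def select_images(candidates, designated, count):
    """candidates: list of (image_id, objects_in_image) pairs.
    Images containing the designated object are selected in preference to
    images that do not, preserving input order within each group."""
    preferred = [c for c in candidates if designated in c[1]]
    others = [c for c in candidates if designated not in c[1]]
    return (preferred + others)[:count]
```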
Abstract:
An image processing method is provided for acquiring additional information from image information obtained by shooting a printed product on which the additional information is multiplexed by at least one of a plurality of different multiplexing methods, the method comprising: attempting to decode the additional information from the image information by a plurality of different decoding methods corresponding to the plurality of different multiplexing methods; and outputting the additional information that is successfully decoded.
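One way to realize "attempt each decoding method and output whichever succeeds" is a simple fall-through loop. The decoder interface assumed here (each decoder returns `None` or raises `ValueError` when its multiplexing method does not match) is an assumption for illustration:

```python
def extract_additional_info(image_info, decoders):
    """Try each decoding method (one per multiplexing method) in turn and
    return the first successfully decoded additional information."""
    for decode in decoders:
        try:
            result = decode(image_info)
        except ValueError:  # this decoder's multiplexing method did not match
            continue
        if result is not None:
            return result
    return None  # no decoding method succeeded
```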
Abstract:
An image processing apparatus embeds additional information in an image having a plurality of pixels and image information. A generating unit generates (a) the additional information to be embedded and (b) a bookbinding type of a medium containing the image. A holding unit holds a plurality of quantization conditions, including a quantization threshold, for embedding the additional information. A selection unit segments the image into a plurality of embedding regions and selects a condition to use in quantization based on the image information. A quantization condition control unit controls at least one of the plurality of quantization conditions for a predetermined region of the image based on the generated bookbinding type. An error diffusion processing unit distributes an error to peripheral pixels of a target pixel. A separating unit separates the embedded additional information from the image.
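The core of such embedding — error diffusion with a quantization threshold switched per region according to the embedded bit — could be sketched as below. The threshold pair, Floyd–Steinberg weights, and 0–255 grayscale range are illustrative assumptions; the apparatus described above additionally selects conditions per region and by bookbinding type:

```python
def embed_bit_region(region, bit, thresholds=(96, 160)):
    """Error-diffuse a grayscale region (list of rows, values 0-255) using a
    quantization threshold chosen by the embedded bit (illustrative pair)."""
    h, w = len(region), len(region[0])
    t = thresholds[bit]
    err = [[0.0] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = region[y][x] + err[y][x]
            out[y][x] = 255 if v >= t else 0
            e = v - out[y][x]
            # Floyd-Steinberg distribution of the error to peripheral pixels
            if x + 1 < w:
                err[y][x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    err[y + 1][x - 1] += e * 3 / 16
                err[y + 1][x] += e * 5 / 16
                if x + 1 < w:
                    err[y + 1][x + 1] += e * 1 / 16
    return out
```

Because error diffusion largely preserves average density, the two thresholds alter the dot texture of the region rather than its overall darkness, which is what a later separating step can detect.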
Abstract:
When a user captures a digital watermark-embedded print product with a hand-held image capturing device to extract the embedded information, the relative speed between the device and the print product changes, which can cause extraction of the embedded additional information to fail. In an embodiment, the image capturing device reads the print product by continuously shooting every small area. Each captured image is analyzed to calculate a feature amount of the image. Information about an image capturing quality characteristic arising at the time of capturing the print product is estimated, and a change of the feature amount accompanying that characteristic is predicted based on the estimation result. A code indicated by the watermark is determined based on the calculation result and the prediction result.
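The final decision step — comparing the measured feature amount against each candidate code's expected feature after predicting the attenuation caused by the estimated capture quality (e.g., motion blur) — might look roughly like this. The multiplicative attenuation model and the dictionary interface are assumptions:

```python
def decide_code(measured_feature, code_features, blur_attenuation):
    """code_features: {code: nominal feature amount}. Predict how the capture
    quality characteristic attenuates each nominal feature, then pick the code
    whose predicted feature amount is closest to the measured one."""
    predicted = {code: f * blur_attenuation for code, f in code_features.items()}
    return min(predicted, key=lambda code: abs(predicted[code] - measured_feature))
```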
Abstract:
A plurality of images, obtained by capturing a printed material in which additional information is embedded dividedly over a plurality of shots, are input. Feature information concerning each of the input images is extracted. Poor-quality areas in the images are evaluated. An overlapping area of the plurality of images is specified based on the feature information. The additional information embedded in the printed material is extracted based on a result of the evaluating and a result of the specifying.
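Combining the evaluation and overlap results might be sketched as a quality-weighted merge over the captures; the per-position `(bit, quality)` representation and the quality threshold are assumptions:

```python
def merge_extracted_bits(captures):
    """captures: list of dicts {position: (bit, quality)}. For positions read
    by several captures (the overlapping area), keep the highest-quality
    reading; poor-quality readings below a threshold are ignored."""
    QUALITY_MIN = 0.5  # illustrative poor-quality cutoff
    merged = {}
    for cap in captures:
        for pos, (bit, quality) in cap.items():
            if quality < QUALITY_MIN:
                continue  # evaluated as a poor-quality area
            if pos not in merged or quality > merged[pos][1]:
                merged[pos] = (bit, quality)
    return {pos: bit for pos, (bit, quality) in sorted(merged.items())}
```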
Abstract:
An image processing apparatus includes a generation unit. A first dither pattern and a second dither pattern have the same threshold values set for the same pixels in a first gradation range, and different threshold values set for the same pixels in a second gradation range that exceeds the first gradation range. The generation unit sets a first threshold value for forming dots of the first dither pattern and generates first binary data according to whether or not the threshold value corresponding to a target pixel of the first dither pattern is included in the first threshold value; it likewise sets a second threshold value for forming dots of the second dither pattern and generates second binary data according to whether or not the threshold value corresponding to a target pixel of the second dither pattern is included in the second threshold value.
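The relationship can be illustrated with two tiny threshold matrices that share their low-range thresholds; the 2×2 size, the 0–255 threshold scale, and the "dot forms where threshold ≤ level" rule are assumptions:

```python
# Two illustrative 2x2 dither patterns: the low (first gradation range)
# thresholds 10 and 60 are set for the same pixels in both patterns,
# while the higher thresholds differ between the patterns.
dither_a = [[10, 200], [150, 60]]
dither_b = [[10, 240], [90, 60]]

def binarize(dither, level):
    """Generate binary data: a dot forms where the pixel's threshold value
    does not exceed the set level."""
    return [[1 if t <= level else 0 for t in row] for row in dither]
```

Within the first gradation range both patterns place identical dots, so the two resulting binary planes only diverge at higher gradations.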
Abstract:
One dither mask having the highest spatial frequency is selected from a plurality of dither masks. Next, a granularity is obtained with reference to a table based on the selected dither mask and an ejection amount level per area. A difference in granularity between adjacent areas is then calculated for all of the areas. The maximum of the obtained differences in granularity is compared with a determination threshold. When the maximum difference in granularity is equal to or greater than the threshold, it is determined whether or not a dither mask having a spatial frequency lower than that of the selected dither mask is stored in a memory. When such dither masks exist, a dither mask having a spatial frequency one level lower than that of the selected dither mask is selected.
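The selection loop above could be sketched as follows; the table and dictionary interfaces are assumptions, and `masks` is assumed to be ordered from highest to lowest spatial frequency:

```python
def select_dither_mask(masks, granularity, area_levels, det_threshold):
    """granularity[mask][level] gives the granularity of `mask` at an
    ejection-amount level; area_levels lists each area's level in order,
    so adjacent list entries correspond to adjacent areas."""
    for mask in masks:  # begin with the highest spatial frequency
        g = [granularity[mask][level] for level in area_levels]
        max_diff = max(abs(a - b) for a, b in zip(g, g[1:]))
        if max_diff < det_threshold:  # acceptable: keep this mask
            return mask
    return masks[-1]  # no mask with a still lower spatial frequency is stored
```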
Abstract:
An amount of color correction for a facial region is modified based on changes caused in face average values by white balance correction. The amount of color correction for the facial region is also modified according to the luminance of pixels of interest, to perform optimum color correction on both the facial region and a highlight region.
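A minimal sketch of one such per-pixel modification is given below. The subtraction model (removing the shift white balance already applied to the face average), the linear attenuation ramp, and the highlight knee at luminance 220 are all assumptions for illustration:

```python
def corrected_face_delta(base_delta, wb_face_shift, luminance, y_hl=220):
    """Reduce the facial colour-correction amount by the shift that white
    balance correction already caused in the face average value, then
    attenuate the correction toward zero for highlight pixels so that the
    highlight region is not over-corrected (illustrative model)."""
    delta = base_delta - wb_face_shift
    if luminance >= y_hl:
        delta *= max(0.0, (255 - luminance) / (255 - y_hl))
    return delta
```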
Abstract:
A gradation-correcting curve for correcting the gradation of an input image is generated. Gradation correction based on the gradation-correcting curve is applied to a boundary pixel of the color-reproduction space at the same saturation and the same hue as a target lattice point. An equal-saturation line is set by using the boundary pixel of the color-reproduction space after the gradation correction. A saturation-correction amount for the target lattice point is decided based on the gradation-correcting curve and the equal-saturation line.
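As a highly simplified illustration of the last step, the saturation-correction amount for the lattice point could be taken as the ratio by which the gradation correction moved the color-reproduction boundary at the same hue; the linear equal-saturation-line model assumed here is an illustration, not the patented computation:

```python
def saturation_correction(sat_target, sat_boundary_before, sat_boundary_after):
    """Scale the target lattice point's saturation in proportion to the
    movement of the gradation-corrected boundary pixel, i.e. along an
    assumed linear equal-saturation line."""
    return sat_target * sat_boundary_after / sat_boundary_before
```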