Abstract:
A vessel segmentation method includes acquiring an image of a blood vessel, the image including cross sections, using a contrast medium. The method further includes setting a threshold value for each of the cross sections based on data of an intensity of the contrast medium. The method further includes performing vessel segmentation based on the image and the threshold value for each of the cross sections.
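A minimal sketch of the per-cross-section thresholding idea, assuming the contrast-enhanced scan is available as a 3D array of cross sections; the scaling factor `alpha` and the use of each slice's maximum contrast intensity are illustrative assumptions, not the claimed method.

    import numpy as np

    def segment_vessel(volume: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        # volume: (num_cross_sections, H, W) contrast-enhanced intensities (assumed).
        masks = np.zeros_like(volume, dtype=bool)
        for i, cross_section in enumerate(volume):
            # Threshold for this cross section, derived from the
            # contrast-medium intensity data of the slice.
            threshold = alpha * cross_section.max()
            masks[i] = cross_section >= threshold
        return masks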
Abstract:
A method and apparatus for image quality assessment are provided. The method of image quality assessment includes: accessing a text prompt representing an image-quality attribute of a target image included in a data set; training a target encoder to correspond to a visual-language model (VLM), the training based on data obtained by applying the text prompt to the VLM; and fine-tuning the trained target encoder to perform image-quality assessment.
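A minimal sketch of the two-stage idea, assuming a CLIP-style VLM exposing encode_image/encode_text and a smaller trainable target encoder; the similarity-based pseudo-labels and the MSE distillation loss are illustrative assumptions, not the claimed training procedure.

    import torch
    import torch.nn.functional as F

    def distill_step(vlm, target_encoder, images, prompt_tokens, optimizer):
        with torch.no_grad():
            img_emb = F.normalize(vlm.encode_image(images), dim=-1)
            txt_emb = F.normalize(vlm.encode_text(prompt_tokens), dim=-1)
            # Pseudo-label: similarity between each image and the
            # image-quality-attribute text prompt.
            pseudo_score = (img_emb * txt_emb).sum(dim=-1)
        pred = target_encoder(images).squeeze(-1)
        loss = F.mse_loss(pred, pseudo_score)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Stage 2 (fine-tuning) would reuse target_encoder with ground-truth
    # quality scores in place of pseudo_score.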
Abstract:
A processor-implemented method with image generation includes obtaining a first image, determining predicted texture information of a target image corresponding to the first image through a texture prediction model, based on the first image, determining predicted color information of the target image through a color prediction model, based on the first image, and generating the target image based on the first image, using the predicted texture information and the predicted color information, wherein a format of the target image is different from that of the first image.
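A minimal sketch of recombining separately predicted texture and color, assuming trained texture_model and color_model networks; forming the target-format image by concatenating a luminance-like texture channel with chroma-like color channels is an illustrative assumption about the differing format.

    import torch

    def generate_target(first_image: torch.Tensor, texture_model, color_model) -> torch.Tensor:
        predicted_texture = texture_model(first_image)   # e.g. luminance / detail channel
        predicted_color = color_model(first_image)       # e.g. chroma channels
        # Combine the two predictions into the target-format image.
        return torch.cat([predicted_texture, predicted_color], dim=1)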
Abstract:
A processor-implemented method includes: generating first input data comprising phase information of an input image; generating second input data in which lens position information is encoded; and determining position information of a lens corresponding to autofocus by inputting the first input data and the second input data to a neural network model.
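A minimal sketch, assuming the first input is a two-channel map of phase-detection data and the second input is a sinusoidal encoding of the current lens position; the encoding scheme, channel counts, and network head are illustrative assumptions.

    import torch
    import torch.nn as nn

    def encode_lens_position(pos: torch.Tensor, dims: int = 8) -> torch.Tensor:
        # pos: (B,) current lens positions; returns (B, dims) sinusoidal encoding.
        freqs = 2.0 ** torch.arange(dims // 2, dtype=torch.float32)
        angles = pos.unsqueeze(-1) * freqs
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    class AutofocusNet(nn.Module):
        def __init__(self, pos_dims: int = 8):
            super().__init__()
            self.pos_dims = pos_dims
            self.backbone = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(16 + pos_dims, 1)  # predicted in-focus lens position

        def forward(self, phase_input, lens_pos):
            # phase_input: (B, 2, H, W) phase information of the input image (assumed).
            feat = self.backbone(phase_input)
            pos_code = encode_lens_position(lens_pos, self.pos_dims)
            return self.head(torch.cat([feat, pos_code], dim=-1))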
Abstract:
A method and apparatus for image restoration based on burst images are provided. The method includes generating a plurality of feature representations corresponding to individual images of a burst image set by encoding the individual images, determining a reference feature representation from among the plurality of feature representations, determining a first comparison pair including the reference feature representation and a first feature representation of the plurality of feature representations, generating a first motion-embedding feature representation of the first comparison pair based on a similarity score map of the reference feature representation and the first feature representation, generating a fusion result by fusing a plurality of motion-embedding feature representations including the first motion-embedding feature representation, and generating at least one restored image by decoding the fusion result.
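A minimal sketch of the motion-embedding fusion idea, assuming an encoder/decoder pair and a cosine-similarity score map; choosing the first burst frame as the reference and averaging the motion-embedding features are illustrative assumptions, not the claimed fusion.

    import torch
    import torch.nn.functional as F

    def restore_burst(burst, encoder, decoder):
        feats = [encoder(img) for img in burst]           # per-frame feature representations
        ref = feats[0]                                    # reference feature representation
        fused = []
        for feat in feats:
            # Per-pixel similarity score map between the reference and this frame.
            score = F.cosine_similarity(ref, feat, dim=1, eps=1e-8).unsqueeze(1)
            fused.append(score * feat)                    # motion-embedding feature
        fusion_result = torch.stack(fused, dim=0).mean(dim=0)
        return decoder(fusion_result)                     # restored image(s)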
Abstract:
A computing device and an operation method thereof are disclosed. The method includes unshuffling first image data to generate input data, generating output data by implementing a neural network (NN) model provided with the input data, and generating second image data by shuffling the output data.
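A minimal sketch, assuming PyTorch's pixel_unshuffle/pixel_shuffle as the (un)shuffling operations and an arbitrary trained nn_model; the factor of 2 and the requirement that the model's output channels be divisible by factor squared are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def process(first_image: torch.Tensor, nn_model, factor: int = 2) -> torch.Tensor:
        input_data = F.pixel_unshuffle(first_image, factor)   # space-to-depth unshuffling
        output_data = nn_model(input_data)                    # NN model run on unshuffled data
        return F.pixel_shuffle(output_data, factor)           # depth-to-space shuffling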
Abstract:
A method with image processing includes: setting an offset window for an offset pattern of a kernel offset and an offset parameter for an application intensity of the kernel offset; determining an output kernel by applying the kernel offset to an input kernel based on the offset window and the offset parameter; and adjusting contrast of a degraded image using the output kernel.
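A minimal sketch, assuming the kernel offset is applied only inside the offset window, scaled by the offset parameter, and that contrast is adjusted by convolving the degraded image with the resulting output kernel; all of these specifics are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def adjust_contrast(degraded, input_kernel, kernel_offset, offset_window, offset_param):
        # degraded: (B, 1, H, W); kernels and window: (kH, kW) with odd size (assumed).
        # Apply the offset pattern only where the window is active, scaled by
        # the application intensity of the kernel offset.
        output_kernel = input_kernel + offset_param * offset_window * kernel_offset
        output_kernel = output_kernel / output_kernel.sum()
        k = output_kernel.unsqueeze(0).unsqueeze(0)
        return F.conv2d(degraded, k, padding=k.shape[-1] // 2)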
Abstract:
Methods and apparatuses with training or image enhancement are disclosed. The image enhancement method includes obtaining an input image, estimating a noise distribution of the input image by implementing a noise model based on the input image, and generating an enhanced image by implementing an image enhancement model dependent on the input image and the estimated noise distribution.
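A minimal sketch, assuming noise_model outputs a per-pixel noise estimate (e.g. a standard-deviation map) and enhancement_model consumes the input image concatenated with that estimate; both module interfaces are illustrative assumptions.

    import torch

    def enhance(input_image: torch.Tensor, noise_model, enhancement_model) -> torch.Tensor:
        noise_estimate = noise_model(input_image)              # estimated noise distribution
        conditioned = torch.cat([input_image, noise_estimate], dim=1)
        return enhancement_model(conditioned)                  # enhanced image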
Abstract:
A method with image augmentation includes: recognizing, from partial regions of an input image and based on a gaze of a user corresponding to the input image, any one or any combination of any two or more of an object of interest of the user, a situation of the object of interest, and a task of the user; determining relevant information indicating an intention of the user based on any one or any combination of any two or more of the object of interest, the situation of the object of interest, and the task of the user; and generating a visually augmented image by visually augmenting the input image based on the relevant information.
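A minimal sketch, assuming callables that recognize the gazed-at object, its situation, and the user's task, plus a renderer that overlays the relevant information; the fixed crop size and the dictionary-based representation of the user's intention are illustrative assumptions.

    import numpy as np

    def augment_image(input_image: np.ndarray, gaze_xy, recognizers, renderer,
                      crop_size: int = 128) -> np.ndarray:
        x, y = gaze_xy
        half = crop_size // 2
        # Partial region of the input image around the user's gaze.
        region = input_image[max(y - half, 0):y + half, max(x - half, 0):x + half]
        # Recognize object of interest, its situation, and the user's task.
        cues = {name: fn(region) for name, fn in recognizers.items()}
        relevant_info = {"intention": cues}                # information implied by the cues
        return renderer(input_image, relevant_info)        # visually augmented image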
Abstract:
A display method includes displaying, in a virtual environment, an object to which a light source is set. The method further includes illuminating an area around the object based on the light source.
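A minimal sketch, assuming a point light attached to the object and a simple distance-based falloff over the surrounding pixels; the falloff model and normalization are illustrative assumptions, not the claimed rendering.

    import numpy as np

    def illuminate_around(scene: np.ndarray, object_xy, intensity: float = 1.0) -> np.ndarray:
        # scene: (H, W, 3) float image in [0, 1]; object_xy: (x, y) object position.
        h, w = scene.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        dist_sq = (xs - object_xy[0]) ** 2 + (ys - object_xy[1]) ** 2
        falloff = intensity / (1.0 + dist_sq / (0.05 * h * w))
        # Brighten the area around the object based on the attached light source.
        return np.clip(scene + falloff[..., None], 0.0, 1.0)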