Patent classifications
G06T5/94
Autonomous aerial navigation in low-light and no-light conditions
Night-mode obstacle-avoidance intelligence, training, and mechanisms for vision-based unmanned aerial vehicle (UAV) navigation enable autonomous flight operations of a UAV in low-light and no-light environments using infrared data.
Selectively increasing depth-of-field in scenes with multiple regions of interest
The present disclosure provides systems, apparatus, methods, and computer-readable media that support multi-frame depth-of-field (MF-DOF) for deblurring background regions of interest (ROIs), such as background faces, that may be blurred due to a large aperture size or other characteristics of the camera used to capture the image frame. The processing may include the use of two image frames obtained at two different focus points corresponding to the multiple ROIs in the image frame. The corrected image frame may be determined by deblurring one or more ROIs of the first image frame using an AI-based model and/or local gradient information. The MF-DOF may allow selectively increasing a depth-of-field (DOF) of an image to provide focused capture of multiple regions of interest, without reducing the aperture (and, consequently, the amount of light available for photography) or sacrificing background blur that may be desired for photography.
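The two-frame idea can be sketched as a naive composite: replace each blurred background ROI in the foreground-focused frame with the same region from a frame focused at the background. This is only an illustrative stand-in for the abstract's AI-based/local-gradient deblurring; the ROI bounding boxes are assumed given (e.g., by a face detector).

```python
import numpy as np

def composite_sharp_rois(frame_fg, frame_bg, rois):
    """Replace blurred background ROIs in the foreground-focused frame
    with the corresponding regions from the background-focused frame.

    rois: iterable of (y0, y1, x0, x1) bounding boxes (assumed given).
    """
    out = frame_fg.copy()
    for y0, y1, x0, x1 in rois:
        out[y0:y1, x0:x1] = frame_bg[y0:y1, x0:x1]
    return out
```

A real implementation would blend ROI borders and align the two frames before compositing; this sketch only shows the selective use of the second focus point.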
Regularized Derivative Operators for Image Processing System and Method
Devices, methods, and non-transitory program storage devices are disclosed herein to provide improved image processing, the techniques comprising: obtaining an input image and target image data, and then calculating derivatives for the target image data using a regularized derivative kernel operator. In some embodiments, the regularized operator may comprise the following operator: [−1, (1+λ)], wherein λ may be a controllable system parameter and preferably is independent of the particular type of image processing being applied to the image. In some embodiments, the techniques may find look-up table (LUT) mappings or analytical functions to approximate the derivative structure of the target image data. Finally, the techniques disclosed herein may generate an output image from the input image based on attempting to closely approximate the calculated derivatives for the target image data. In preferred embodiments, by controlling the mapping, e.g., using regularization techniques, halos and other image artifacts may be ameliorated.
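In one dimension, the kernel [−1, (1+λ)] amounts to a forward difference whose leading tap is boosted by the controllable parameter; a minimal sketch (the parameter name `lam` and default value are assumptions):

```python
import numpy as np

def regularized_derivative(x, lam=0.01):
    """Apply the kernel [-1, (1 + lam)] along a 1-D signal.

    With lam = 0 this is the ordinary forward difference; a small
    positive lam regularizes the operator as described in the abstract.
    """
    x = np.asarray(x, dtype=float)
    return (1.0 + lam) * x[1:] - x[:-1]
```

For image data the same kernel would be applied separably along rows and columns; the 1-D form keeps the role of λ visible.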
UTILIZING CONTEXT-AWARE SENSORS AND MULTI-DIMENSIONAL GESTURE INPUTS TO EFFICIENTLY GENERATE ENHANCED DIGITAL IMAGES
The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
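The feature-dependent mapping can be sketched as a lookup from detected features to per-direction parameters, followed by applying gesture deltas. The feature names, parameter choices, and scaling are illustrative assumptions; the disclosure leaves the concrete mapping open.

```python
def select_parameter_mapping(features):
    """Choose which editing parameters the gesture directions control,
    based on features detected at the sensor location (hypothetical
    feature keys and parameter names)."""
    if features.get("sky"):
        return {"vertical": "saturation", "horizontal": "hue"}
    if features.get("face"):
        return {"vertical": "exposure", "horizontal": "warmth"}
    return {"vertical": "exposure", "horizontal": "contrast"}

def apply_gesture(params, mapping, dy, dx, scale=0.01):
    """Nudge the mapped parameter values by the gesture movement deltas."""
    params = dict(params)
    params[mapping["vertical"]] = params.get(mapping["vertical"], 0.0) + dy * scale
    params[mapping["horizontal"]] = params.get(mapping["horizontal"], 0.0) + dx * scale
    return params
```

The point of the sketch is that the same physical gesture edits different parameters depending on what sits under the sensor location, with no on-screen sliders.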
CORRECTION FOR PIXEL-TO-PIXEL SIGNAL DIFFUSION
A method to correct a digital image to reverse the effect of signal diffusion among pixels of the digital image. For a target pixel j of the digital image, a set of signal values and a set of signal amplitudes are received, each corresponding to a set of kernel pixels i surrounding and including the target pixel j. For each kernel pixel i, a weighting coefficient is computed based on the signal amplitude of that kernel pixel i and on the signal amplitude of the target pixel j. A linear combination of signal values corresponding to the set of kernel pixels i is computed, wherein the signal value for each pixel i is weighted by the weighting coefficient corresponding to that pixel i. The linear combination is stored in volatile memory of an electronic device as a corrected signal value for the target pixel j.
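The per-pixel correction can be sketched as the weighted linear combination the abstract describes. The concrete weighting function below (unit weight on the target, a small negative pull-back on neighbours proportional to their amplitude relative to the target's) is an illustrative assumption; the abstract only requires that each coefficient depend on both amplitudes.

```python
import numpy as np

def correct_target_pixel(values, amplitudes, j):
    """Corrected signal for target pixel j: a linear combination of the
    kernel pixels' signal values, each weighted by a coefficient
    computed from that pixel's amplitude and the target's amplitude.

    values, amplitudes: 1-D arrays over the kernel pixels i
    j: index of the target pixel within the kernel
    """
    values = np.asarray(values, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    # Assumed weighting: subtract a diffusion share from neighbours.
    w = -0.1 * amplitudes / amplitudes[j]
    w[j] = 1.0
    return float(np.dot(w, values))
```

In practice the corrected value would then be written back (per the abstract, into volatile memory) as the new signal for pixel j.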
IMAGE INTERPRETATION SUPPORT APPARATUS, IMAGE INTERPRETATION SUPPORT METHOD, AND IMAGE INTERPRETATION SUPPORT PROGRAM
An image interpretation support apparatus includes: an acquisition unit that acquires a plurality of projection images obtained by tomosynthesis imaging, in which radiation is emitted toward a breast from different irradiation angles by a radiation source and a projection image is captured at each irradiation angle by a radiation detector; a first generation unit that generates a plurality of tomographic images on each of a plurality of tomographic planes of the breast from the plurality of projection images; a second generation unit that generates a synthetic two-dimensional image from a plurality of images among the plurality of projection images and the plurality of tomographic images; a detection unit that detects an object of interest candidate region estimated to include an object of interest from the synthetic two-dimensional image; and a determination unit that determines whether or not the object of interest is included in the object of interest candidate region on the basis of the plurality of tomographic images.
Method and apparatus for accelerated tonemapping and display
According to some embodiments, a camera captures video images at a high dynamic range. These images are then tonemapped into images of a lower dynamic range with enhanced contrast. The contrast enhancement for a given pixel depends on the image's local contrast at a variety of different scales. The tonemapped images are then shown on a display. Calculation of this contrast is accelerated by the camera creating a plurality of low-pass filtered versions of the original image at progressively stronger low-pass filtering; these images may be stored at increasingly lower resolutions in a mipmap. Calculations are enhanced by use of a massively parallel processor and a texture mapping unit for hardware-accelerated sampling of blended averages of several pixels. Other embodiments are shown and discussed.
Dynamic overlay display properties adjustment methods and systems
Systems and methods for dynamically computing display properties of an overlay on a base image are disclosed. The method receives base image data and its properties, and overlay data and its position on the base image. The method identifies a set of pixels of the base image situated under at least a portion of the overlay. For each pixel in the set, the method computes a value of a display property of the pixel. The method then computes a reference display property value of the base image based on the computed display property values of the pixels in the set, computes a display property value of the overlay using the computed reference display property value of the base image, and assigns a display property to the overlay based on that computed value.
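A minimal sketch of the pipeline, assuming luminance as the per-pixel display property and a simple dark/light threshold as the overlay assignment (both choices are illustrative; the method is stated more generally):

```python
import numpy as np

def overlay_text_color(base, x, y, w, h):
    """Pick an overlay colour from the luminance of the base-image
    pixels under the overlay: white over dark regions, black over
    light ones. base is an (H, W, 3) RGB array in [0, 255]."""
    region = base[y:y + h, x:x + w].astype(float)
    luma = region @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luma
    reference = luma.mean()   # reference display property of the base
    return (255, 255, 255) if reference < 128 else (0, 0, 0)
```

Because the reference value is recomputed from the pixels actually under the overlay, the overlay adapts as it is repositioned over the base image.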
Distance estimation using machine learning
A method receives a captured image depicting image content including an object, the captured image being captured by an image sensor located at a sensor position; generates, using a trained first machine learning logic, a lighting-corrected image from an imitative simulation image depicting at least a portion of the image content of the captured image in a simulation style associated with an environment simulator; generates, using a trained second machine learning logic, a depth estimation image from the lighting-corrected image, the depth estimation image indicating a relative distance between the object depicted in the captured image and the sensor position of the image sensor; and determines an object position of the object depicted in the captured image based on the depth estimation image.
Medical image processing apparatus and X-ray diagnosis apparatus
A medical image processing apparatus according to an embodiment includes processing circuitry configured: to sequentially acquire X-ray images; to set a unit number of frames used as a unit during image processing; and to sequentially generate images in which each pixel expresses either the largest or the smallest pixel value among corresponding pixels in the X-ray images corresponding to the unit number of frames, on the basis of the X-ray images corresponding to the unit number of frames, including each new X-ray image that is sequentially acquired and at least one X-ray image acquired before the new X-ray image.
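The per-unit min/max generation described above amounts to a sliding-window reduction over the most recent frames; a minimal sketch (the function name and in-memory list of frames are assumptions):

```python
import numpy as np

def peak_hold(frames, unit=3, mode="max"):
    """For each newly acquired frame, generate an image whose pixels
    hold the largest (or smallest) value among corresponding pixels in
    the last `unit` frames.

    frames: list of equally shaped 2-D arrays in acquisition order
    """
    reduce = np.max if mode == "max" else np.min
    out = []
    for i in range(len(frames)):
        window = frames[max(0, i - unit + 1): i + 1]   # last `unit` frames
        out.append(reduce(np.stack(window), axis=0))
    return out
```

In "max" mode this behaves like a peak-opacification hold over the unit window, which is the typical use in angiographic X-ray sequences.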