Photoacoustic image evaluation apparatus, method, and program, and photoacoustic image generation apparatus
A photoacoustic image evaluation apparatus includes a processor configured to acquire a first photoacoustic image generated at a first point in time and a second photoacoustic image generated at a second point in time before the first point in time, the first and second photoacoustic images being generated by detecting photoacoustic waves produced inside a subject, who has undergone blood vessel regeneration treatment, by emission of light into the subject; acquire a blood vessel regeneration index, which indicates a state of blood vessel regeneration achieved by the treatment, based on a difference between a blood vessel included in the first photoacoustic image and a blood vessel included in the second photoacoustic image; and display the blood vessel regeneration index on a display.
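One way to read the index computation: segment vessels in both images, then compare them. Below is a minimal Python sketch assuming the two photoacoustic images have already been segmented into binary vessel masks, with relative change in vessel area standing in for the index; the segmentation step and the patent's actual index definition are not reproduced here.

```python
# Illustrative sketch only: all names and the area-based index are assumptions.
import numpy as np

def regeneration_index(mask_t1: np.ndarray, mask_t2: np.ndarray) -> float:
    """Relative change in vessel area between the earlier image (t2)
    and the later image (t1); positive values suggest regeneration."""
    area_t1 = mask_t1.sum()
    area_t2 = mask_t2.sum()
    if area_t2 == 0:
        return float("inf") if area_t1 > 0 else 0.0
    return (area_t1 - area_t2) / area_t2

# Toy example: the later mask contains a newly visible vessel segment.
earlier = np.zeros((64, 64), dtype=bool)
earlier[20:24, 10:50] = True
later = earlier.copy()
later[30:34, 10:50] = True
print(f"regeneration index: {regeneration_index(later, earlier):.2f}")
```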
Artificial intelligence dispatch in healthcare
Patient, user, and/or AI information is used in a multi-objective optimization to select one of a plurality of available AIs for a task. On a patient- or user-specific basis, an optimal AI is selected and applied for medical imaging or other healthcare actions. The selection may occur before application, avoiding the cost of applying multiple AIs to obtain the best result. The optimization may be based on statistical feedback from users for the various available AIs, providing information not otherwise available. The optimization may also be based on AI performance, AI inclusion and/or exclusion criteria, and/or pricing information. By optimizing over information related to the patient, the user, and/or the available AIs, the computer's application of AI for a given user and/or patient is improved, yielding more focused information.
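A hedged sketch of the dispatch step: filter candidate AIs by an inclusion criterion, then rank the survivors with a weighted multi-objective score over performance, user feedback, and price. The field names, weights, and scalarized objective are illustrative assumptions, not the patent's formulation.

```python
# Assumed data model and scoring; not the patent's actual criteria.
from dataclasses import dataclass

@dataclass
class CandidateAI:
    name: str
    performance: float    # e.g., validation accuracy in [0, 1]
    user_feedback: float  # aggregated user rating in [0, 1]
    price: float          # cost per run, arbitrary units
    modalities: set       # imaging modalities the AI supports

def dispatch(candidates, modality, w_perf=0.5, w_fb=0.3, w_price=0.2):
    # Inclusion criterion: the AI must support the requested modality.
    eligible = [c for c in candidates if modality in c.modalities]
    if not eligible:
        return None
    max_price = max(c.price for c in eligible) or 1.0
    def score(c):
        # Higher performance/feedback is better; lower price is better.
        return (w_perf * c.performance + w_fb * c.user_feedback
                + w_price * (1.0 - c.price / max_price))
    return max(eligible, key=score)

pool = [
    CandidateAI("seg-net-a", 0.92, 0.70, 5.0, {"CT", "MR"}),
    CandidateAI("seg-net-b", 0.88, 0.90, 2.0, {"CT"}),
]
print(dispatch(pool, "CT").name)  # the cheaper, well-rated AI wins here
```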
MULTI-DOMAIN CONVOLUTIONAL NEURAL NETWORK
In one embodiment, an apparatus comprises a memory and a processor. The memory is to store visual data associated with a visual representation captured by one or more sensors. The processor is to: obtain the visual data associated with the visual representation captured by the one or more sensors, wherein the visual data comprises uncompressed visual data or compressed visual data; process the visual data using a convolutional neural network (CNN), wherein the CNN comprises a plurality of layers, wherein the plurality of layers comprises a plurality of filters, and wherein the plurality of filters comprises one or more pixel-domain filters to perform processing associated with uncompressed data and one or more compressed-domain filters to perform processing associated with compressed data; and classify the visual data based on an output of the CNN.
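The two filter domains can be pictured as parallel branches sharing a classifier head. The PyTorch sketch below is an assumption-laden illustration: one branch convolves pixel-domain RGB input, the other convolves compressed-domain input (here, 8x8 DCT blocks unrolled into 64 channels), and the layer sizes are arbitrary.

```python
# A minimal multi-domain CNN sketch; architecture details are assumptions.
import torch
import torch.nn as nn

class MultiDomainCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Pixel-domain filters for uncompressed visual data.
        self.pixel_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Compressed-domain filters, e.g., over 64 DCT coefficient bands.
        self.compressed_branch = nn.Sequential(
            nn.Conv2d(64, 16, kernel_size=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor, compressed: bool) -> torch.Tensor:
        branch = self.compressed_branch if compressed else self.pixel_branch
        return self.classifier(branch(x).flatten(1))

model = MultiDomainCNN()
pixels = torch.randn(1, 3, 32, 32)   # uncompressed RGB patch
dct = torch.randn(1, 64, 4, 4)       # 8x8 DCT blocks as 64 channels
print(model(pixels, compressed=False).shape,
      model(dct, compressed=True).shape)
```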
METHOD AND SYSTEM FOR INSPECTING A BUILDING CONSTRUCTION SITE USING A MOBILE ROBOTIC SYSTEM
A method of inspecting a building construction site uses a mobile robotic system that includes a mobile platform and a sensor system mounted on the mobile platform and configured to generate one or more types of sensor data. The method includes: receiving object identification information identifying at least one building object to be inspected by the mobile robotic system in the building construction site; obtaining a robot navigation map covering the at least one building object based on a building information model for the building construction site; and determining at least one goal point in the robot navigation map for the at least one building object, each goal point being a position in the robot navigation map to which the mobile robotic system navigates autonomously in order to inspect the corresponding one or more building objects of the at least one building object. A corresponding inspection system is also provided.
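A goal point can be as simple as the nearest traversable cell that keeps a standoff distance from the object's footprint. The sketch below illustrates that idea on a 2-D occupancy grid; the grid encoding, standoff rule, and function names are my assumptions, not the patent's method.

```python
# Illustrative goal-point selection on an occupancy grid (0 = free, 1 = occupied).
import numpy as np

def goal_point(grid: np.ndarray, obj_center, standoff: float):
    free = np.argwhere(grid == 0)
    dists = np.linalg.norm(free - np.asarray(obj_center), axis=1)
    # Keep only free cells at least `standoff` away from the object center,
    # then take the nearest of those as the inspection goal.
    ok = dists >= standoff
    if not ok.any():
        return None
    return tuple(int(v) for v in free[ok][np.argmin(dists[ok])])

grid = np.zeros((20, 20), dtype=int)
grid[8:12, 8:12] = 1            # the object itself occupies these cells
print(goal_point(grid, (10, 10), standoff=3.0))
```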
Depth-based image stitching for handling parallax
A solution to the problem of image and video stitching is disclosed that compensates for the effects of lens distortion, camera misalignment, and parallax when combining multiple images. The disclosed image stitching technique includes depth or disparity estimation, alignment, and blending processes configured to be computationally efficient and to produce high-quality results by limiting noticeable seams and artifacts in the final stitched image. An inter-frame approach applies image stitching to video frames to maintain temporal continuity between successive frames across a stitched video output having a 360-degree viewing perspective. A temporal adjustment is configured to improve temporal continuity between a subsequent frame and a previous frame in a sequence of video frames.
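One ingredient of temporal continuity is keeping per-frame alignment estimates from jittering. The sketch below smooths per-frame disparity shifts with an exponential moving average; the EMA form is an assumption, since the abstract only states that a temporal adjustment improves continuity.

```python
# Assumed smoothing scheme for per-frame alignment shifts.
import numpy as np

def smooth_shifts(raw_shifts: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Exponential moving average over per-frame disparity shifts,
    so the stitching seam moves smoothly between successive frames."""
    smoothed = np.empty_like(raw_shifts, dtype=float)
    smoothed[0] = raw_shifts[0]
    for t in range(1, len(raw_shifts)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * raw_shifts[t]
    return smoothed

# Noisy per-frame horizontal shifts (pixels) around a true value of 12.
rng = np.random.default_rng(0)
raw = 12 + rng.normal(0, 2, size=30)
print(np.round(smooth_shifts(raw)[:5], 2))
```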
TRAINING METHOD OF GENERATOR NETWORK MODEL AND ELECTRONIC DEVICE FOR EXECUTION THEREOF
A training method of a generator network model and an electronic device for execution thereof are provided. The training method includes: extracting a first tensor matrix and a second tensor matrix, wherein the first tensor matrix and the second tensor matrix respectively represent a first picture and a second picture and respectively include a plurality of first parameters and a plurality of second parameters; generating a plurality of third pictures according to a plurality of difference values between the first parameters of the first tensor matrix and the second parameters of the second tensor matrix; performing a similarity test on a plurality of original pictures and the plurality of third pictures; and adopting, as at least one new sample picture, at least one of the third pictures whose similarity is lower than or equal to a similarity threshold.
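A minimal sketch of the sample-generation step, assuming the difference values drive a linear interpolation between the two picture tensors and that similarity is a simple L2-based score; both choices are illustrative, not the patent's definitions.

```python
# Assumed interpolation and similarity measure; illustrative only.
import numpy as np

def make_third_pictures(t1: np.ndarray, t2: np.ndarray, steps=(0.25, 0.5, 0.75)):
    """Generate intermediate pictures from the parameter differences."""
    diff = t2 - t1
    return [t1 + s * diff for s in steps]

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 for identical pictures, decreasing toward 0 with distance."""
    return 1.0 / (1.0 + np.linalg.norm(a - b))

pic1 = np.zeros((8, 8))
pic2 = np.ones((8, 8))
originals = [pic1, pic2]
threshold = 0.5
# Keep only third pictures sufficiently dissimilar from every original.
new_samples = [
    p for p in make_third_pictures(pic1, pic2)
    if max(similarity(p, o) for o in originals) <= threshold
]
print(f"kept {len(new_samples)} new sample picture(s)")
```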
FUZZY LOGIC-BASED PATTERN MATCHING AND CORNER FILTERING FOR DISPLAY SCALER
Aspects presented herein relate to methods and devices for display processing, including an apparatus, e.g., a display processing unit (DPU). The apparatus may receive at least one input image for a scaling operation, the at least one input image being associated with one or more scanning windows, each of the scanning windows including a plurality of pixels. The apparatus may also detect one or more features in the plurality of pixels in each of the one or more scanning windows. Further, the apparatus may adjust an amount of the plurality of pixels in each of the scanning windows for each of the detected features. The apparatus may also combine the adjusted amount of the plurality of pixels for each of the detected one or more features into a plurality of output pixels. The apparatus may also process each of the plurality of output pixels into at least one output image.
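The fuzzy element can be pictured as a soft weight between a crisp and a smoothed scaling path per scanning window. In the sketch below, edge strength is mapped through a piecewise-linear membership function to blend the two paths; the membership shape, filters, and 2x scale factor are all assumptions.

```python
# Illustrative fuzzy blend for a window scaler; not the patent's pipeline.
import numpy as np

def fuzzy_membership(edge_strength, low=5.0, high=40.0):
    """Piecewise-linear membership: 0 = flat region, 1 = strong feature."""
    return float(np.clip((edge_strength - low) / (high - low), 0.0, 1.0))

def box_blur(img):
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
            + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

def scale_window(win):
    gy, gx = np.gradient(win.astype(float))
    w = fuzzy_membership(np.abs(gx).mean() + np.abs(gy).mean())
    up = win.repeat(2, axis=0).repeat(2, axis=1).astype(float)  # 2x upscale
    # Windows with strong features keep the crisp upscale; flat windows
    # are smoothed to hide blockiness.
    return (w * up + (1 - w) * box_blur(up)).astype(np.uint8)

window = np.zeros((4, 4), dtype=np.uint8)
window[:, 2:] = 200   # vertical edge -> membership near 1, crisp path wins
print(scale_window(window))
```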
Physical object boundary detection techniques and systems
Physical object boundary detection techniques and systems are described. In one example, an augmented reality module generates three-dimensional point cloud data. This data describes depths at respective points within a physical environment that includes the physical object. A physical object boundary detection module is then employed to filter the point cloud data by removing points that correspond to a ground plane. The module then performs a nearest neighbor search to locate a subset of the points within the filtered point cloud data that correspond to the physical object. The module projects this subset of points onto the ground plane to generate a two-dimensional boundary. The two-dimensional boundary is then extruded based on a height determined from the point having the maximum distance from the ground plane in the filtered point cloud data.
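A simplified sketch of the pipeline, assuming the ground plane is z = 0 and substituting an axis-aligned bounding rectangle for the patent's nearest-neighbor search; both simplifications are mine.

```python
# Ground-plane filtering, 2-D projection, and extrusion by max height.
import numpy as np

def object_box(points: np.ndarray, ground_eps: float = 0.02):
    # 1. Filter out points on (or very near) the ground plane z = 0.
    kept = points[points[:, 2] > ground_eps]
    # 2. Project the remaining points onto the ground plane (drop z)
    #    and take their 2-D bounding rectangle as the boundary.
    xy_min, xy_max = kept[:, :2].min(0), kept[:, :2].max(0)
    # 3. Extrude by the maximum distance from the ground plane.
    height = kept[:, 2].max()
    return xy_min, xy_max, height

rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-2, 2, (200, 2)), np.zeros(200)])
box = rng.uniform([0, 0, 0.1], [0.5, 0.3, 0.4], (100, 3))  # the object
print(object_box(np.vstack([ground, box])))
```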
Transducer spectral normalization
Systems and methods are disclosed for an ultrasound system. In various embodiments, a system is configured to receive echo data corresponding to detection of an echo of a pulse signal, generate a set of transformations based on the echo data, and generate a set of point estimates for a frequency-dependent filtering coefficient of a spectral response. The system is further configured to extract a set of attenuation coefficients based on the set of point estimates for the frequency-dependent filtering coefficient and to generate image data for a material of interest based on the set of attenuation coefficients.
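A hedged sketch of one standard estimator consistent with this description, the spectral log-difference method: fit the log-spectral ratio of echoes from two depths against frequency and read the attenuation coefficient off the slope. The patent's exact transformations and point estimator are not reproduced; the bandwidth, units, and toy signals below are assumptions.

```python
# Spectral log-difference attenuation estimate; parameters are assumed.
import numpy as np

def attenuation_coefficient(echo_shallow, echo_deep, fs, dz_cm):
    """Fit 20*log10(|S_shallow(f)| / |S_deep(f)|) against frequency;
    slope / (2 * dz) approximates alpha in dB/(cm*MHz)."""
    freqs = np.fft.rfftfreq(len(echo_shallow), d=1 / fs) / 1e6   # MHz
    band = (freqs > 1.0) & (freqs < 8.0)    # usable bandwidth, assumed
    s1 = np.abs(np.fft.rfft(echo_shallow))[band]
    s2 = np.abs(np.fft.rfft(echo_deep))[band]
    slope = np.polyfit(freqs[band], 20 * np.log10(s1 / s2), 1)[0]
    return slope / (2 * dz_cm)              # round trip covers 2 * dz

fs = 40e6                                   # 40 MHz sampling rate, assumed
t = np.arange(256) / fs
env = np.exp(-((t - 3.2e-6) ** 2) / (2 * (8e-8) ** 2))
shallow = env * np.sin(2 * np.pi * 5e6 * t)  # broadband 5 MHz pulse
# Toy deep echo: 0.5 dB/MHz of extra frequency-dependent attenuation.
H = 10 ** (-0.5 * (np.fft.rfftfreq(256, 1 / fs) / 1e6) / 20)
deep = np.fft.irfft(np.fft.rfft(shallow) * H, 256)
print(f"alpha ~ {attenuation_coefficient(shallow, deep, fs, dz_cm=1.0):.2f} dB/(cm*MHz)")
```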
METHOD AND DEVICE FOR DETECTING A TRAILER
A method for determining a location of a trailer in an image includes obtaining at least one real-time image from a vehicle. The at least one real-time image is processed with a controller on the vehicle to obtain a feature patch describing the at least one real-time image. A convolution is performed between the feature patch and each filter from a set of filters, each filter being based on data representative of known trailers. A location of a trailer in the at least one real-time image is determined based on the convolutions between the feature patch and each filter from the set of filters.
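An illustrative sketch of the matching step: correlate the feature patch with each known-trailer filter and take the strongest response as the trailer location. The feature extractor is out of scope here, and scipy's correlate2d stands in for whatever convolution the controller actually performs.

```python
# Template-response maximum over a bank of known-trailer filters.
import numpy as np
from scipy.signal import correlate2d

def locate_trailer(feature_patch: np.ndarray, filters: list):
    best_score, best_loc = -np.inf, None
    for filt in filters:
        resp = correlate2d(feature_patch, filt, mode="same")
        loc = np.unravel_index(np.argmax(resp), resp.shape)
        if resp[loc] > best_score:
            best_score, best_loc = resp[loc], loc
    return best_loc, best_score

patch = np.zeros((32, 32))
patch[10:14, 20:26] = 1.0     # trailer-like blob in the feature patch
template = np.ones((4, 6))    # filter derived from known-trailer data
print(locate_trailer(patch, [template]))
```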