Patent classifications
G06V10/446
METHOD AND ELECTRONIC DEVICE FOR MANAGING ARTIFACTS OF IMAGE
A method and an electronic device for managing artifacts of an image include receiving an input image and extracting multiple features from the input image. The multiple features include a texture of the input image, a color composition of the input image, and edges in the input image. Further, the method includes determining a region of interest (RoI) in the input image that contains an artifact based on the features, and generating an intermediate output image by removing the artifact using multiple generative adversarial networks (GANs). Further, the method includes generating a binary mask using the intermediate output image, the input image, an image illustrating edges in the input image, and an image illustrating edges in the intermediate output image, and obtaining a final output image by applying the generated binary mask to the input image.
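The masking and compositing steps above could be sketched as follows. This is a minimal illustration, not the patented method: the difference threshold and the edge-disagreement rule for building the mask are assumptions.

```python
import numpy as np

def binary_mask(input_img, intermediate_img, input_edges, intermediate_edges,
                diff_thresh=0.1):
    # Hypothetical rule: flag pixels where the GAN output differs
    # noticeably from the input AND the two edge maps disagree.
    diff = np.abs(intermediate_img.astype(float) - input_img.astype(float))
    if diff.ndim == 3:                      # collapse color channels
        diff = diff.mean(axis=-1)
    edge_change = input_edges.astype(bool) ^ intermediate_edges.astype(bool)
    return (diff > diff_thresh) & edge_change

def apply_mask(input_img, intermediate_img, mask):
    # Replace only the masked (artifact) pixels of the input image.
    out = input_img.copy()
    out[mask] = intermediate_img[mask]
    return out
```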
Systems and methods for generating semantic information for scanning image
A method for generating semantic information may include obtaining a scanning image. The scanning image may include a plurality of pixels representing an anatomical structure. The method may also include obtaining a trained segmentation model. The method may further include determining a location probability distribution of the anatomical structure in the scanning image based on the trained segmentation model. The method may also include generating a segmentation result related to the anatomical structure based on the location probability distribution. The method may further include saving the segmentation result into a tag of a Digital Imaging and Communications in Medicine (DICOM) file.
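The step from location probability distribution to a storable segmentation result could be sketched as below. The threshold value and the bit-packed encoding are assumptions; actually writing the bytes into a DICOM tag would be done with a DICOM library such as pydicom and is omitted here.

```python
import numpy as np

def segmentation_to_tag_bytes(prob_map, threshold=0.5):
    # Threshold the per-pixel location probability map into a binary
    # segmentation mask, then pack it into a compact byte string that
    # could be stored in a (private) DICOM tag. Encoding is illustrative.
    mask = (prob_map >= threshold).astype(np.uint8)
    return mask, np.packbits(mask).tobytes()
```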
Dynamic distance estimation output generation based on monocular video
Aspects of the disclosure relate to a dynamic distance estimation output platform that utilizes improved computer vision and perspective transformation techniques to determine vehicle proximities from video footage. A computing platform may receive, from a visible light camera located in a first vehicle, a video output showing a second vehicle that is in front of the first vehicle. The computing platform may determine a longitudinal distance between the first vehicle and the second vehicle by determining an orthogonal distance between a center of projection corresponding to the visible light camera and an intersection of a backside plane of the second vehicle and the ground below the second vehicle. The computing platform may send, to an autonomous vehicle control system, a distance estimation output corresponding to the longitudinal distance, which may cause the autonomous vehicle control system to perform vehicle control actions.
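A standard way to recover this longitudinal distance from monocular video is the flat-road pinhole model: the image row where the lead vehicle's backside plane meets the ground maps to depth via similar triangles. This is a sketch under that assumption, not necessarily the patent's exact perspective transformation.

```python
def longitudinal_distance(v_bottom, v_horizon, focal_px, camera_height_m):
    # Flat-road pinhole model: Z = f * h / (v_bottom - v_horizon),
    # where v_bottom is the image row of the second vehicle's bottom
    # edge and v_horizon is the horizon row (both in pixels).
    dv = v_bottom - v_horizon
    if dv <= 0:
        raise ValueError("bottom edge must lie below the horizon row")
    return focal_px * camera_height_m / dv
```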
Object detection device, object detection method, and program
An object detection device detects a predetermined object from an image. The object detection device includes a first detection unit configured to detect a plurality of candidate regions where the predetermined object exists from the image, a region integrating unit configured to determine one or a plurality of integrated regions according to the plurality of candidate regions detected by the first detection unit, and a second detection unit configured to detect, in the one or the plurality of integrated regions, the predetermined object by using a detection algorithm different from an algorithm of the first detection unit. As a result, it is possible to detect the predetermined object faster and more accurately than before.
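The region-integrating stage could be sketched as below. Greedy IoU-based grouping into bounding unions is an assumption; the patent only requires that integrated regions be determined from the candidate regions.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def integrate_regions(candidates, iou_thresh=0.3):
    # Greedy grouping: merge each overlapping candidate box into a
    # bounding union; each union is one integrated region for the
    # (different) second-stage detection algorithm to scan.
    merged = []
    for box in candidates:
        for i, m in enumerate(merged):
            if iou(box, m) >= iou_thresh:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged
```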
Security system with face recognition
An intelligent face recognition system for an installed security system can include one or more cameras and a local or remote video processor including a face recognition engine. The video processor can be configured to receive image information from the one or more cameras and generate an alert for communication to a user device based on a recognition event in the environment. In an example, the face recognition engine is configured to apply machine learning to analyze images from the cameras and determine whether the images include or correspond to an enrolled face, and to provide the recognition event based on the determination.
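The enrolled-face determination is commonly implemented by comparing learned face embeddings; a minimal sketch follows. The cosine-similarity metric and the 0.6 threshold are assumptions, not details from the patent.

```python
import numpy as np

def is_enrolled(probe_embedding, enrolled_embeddings, threshold=0.6):
    # Compare a face embedding (assumed to come from the machine
    # learning model) against the gallery of enrolled faces; a cosine
    # similarity above the threshold counts as a recognition event.
    p = probe_embedding / np.linalg.norm(probe_embedding)
    best = -1.0
    for e in enrolled_embeddings:
        best = max(best, float(p @ (e / np.linalg.norm(e))))
    return best >= threshold, best
```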
ULTRASOUND ANALYSIS METHOD AND DEVICE
The invention provides an ultrasound data processing method (30) for detecting the presence of an intravascular object in a vessel lumen based on analysis of acquired intravascular ultrasound (IVUS) data of the lumen. The method comprises receiving (32) data comprising multiple frames, each frame containing data for a plurality of radial lines corresponding to different circumferential positions around the IVUS device body, and reducing (34) the data to a single representative value for each radial line in each frame. These representative values are subsequently processed to derive (36) values, for at least each frame, representative of a probability of presence of an object within the given frame. Based on the probability values, a region within the data occupied by an intravascular object, for instance a consecutive set of frames occupied by an object, is determined (38).
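The reduction and per-frame scoring steps could be sketched as below. The mean as the representative value and the threshold-fraction score are assumptions; the claim only requires a single value per radial line and a per-frame probability value.

```python
import numpy as np

def reduce_frames(frames):
    # Reduce each radial line of each frame to one representative
    # value (here the mean of its samples, as an illustrative choice).
    # frames: array of shape (n_frames, n_lines, n_samples).
    return frames.mean(axis=-1)                  # -> (n_frames, n_lines)

def frame_object_probability(reduced, threshold):
    # Illustrative per-frame score: the fraction of radial lines whose
    # representative value exceeds a threshold, used as a proxy for the
    # probability that an object is present in the frame.
    return (reduced > threshold).mean(axis=-1)   # -> (n_frames,)
```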
Video background subtraction using depth
Implementations described herein relate to methods, systems, and computer-readable media to render a foreground video. In some implementations, a method includes receiving a plurality of video frames that include depth data and color data. The method further includes downsampling the frames of the video. The method further includes, for each frame, generating an initial segmentation mask that categorizes each pixel of the frame as a foreground pixel or a background pixel. The method further includes determining a trimap that classifies each pixel of the frame as known background, known foreground, or unknown. The method further includes, for each pixel that is classified as unknown, calculating and storing a weight in a weight map. The method further includes performing fine segmentation to obtain a binary mask for each frame. The method further includes upsampling the plurality of frames based on the binary mask for each frame to obtain a foreground video.
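The trimap step (initial mask to known/unknown classes) could be sketched as below. Marking a fixed-width band around the mask boundary as unknown is an assumption, as is the band width; note that the `np.roll` neighborhood wraps at image borders, which is acceptable for a sketch.

```python
import numpy as np

def make_trimap(mask, band=1):
    # Build a trimap from the initial segmentation mask: pixels whose
    # neighborhood mixes foreground and background become "unknown";
    # the rest stay known foreground / known background.
    # Returns 0 = known bg, 128 = unknown, 255 = known fg.
    m = mask.astype(bool)
    near_fg = np.zeros_like(m)
    near_bg = np.zeros_like(m)
    for dy in range(-band, band + 1):
        for dx in range(-band, band + 1):
            shifted = np.roll(np.roll(m, dy, axis=0), dx, axis=1)
            near_fg |= shifted
            near_bg |= ~shifted
    trimap = np.where(m, 255, 0).astype(np.uint8)
    trimap[near_fg & near_bg] = 128
    return trimap
```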
AUTOMATIC CAMERA GUIDANCE AND SETTINGS ADJUSTMENT
An image capture and processing device captures an image. Based on the image and/or one or more additional images, the image capture and processing device generates and outputs guidance for optimizing image composition, image capture settings, and/or image processing settings. The guidance can be generated based on determination of a direction that a subject of the image is facing, based on sensor measurements indicating that a horizon may be skewed, another image of the same scene captured using a wide-angle lens, another image of the same subject, another image of a different subject, and/or outputs of a machine learning model trained using a set of images. The image capture and processing device can automatically apply certain aspects of the generated guidance, such as image capture settings and/or image processing settings.
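One composition rule that uses the subject's facing direction, as a minimal sketch: leave "lead room" in front of the subject by placing a right-facing subject near the left third line, and vice versa. The rule-of-thirds target and the 5% tolerance are assumptions, not the patent's specific guidance logic.

```python
def lead_room_guidance(subject_x, facing, frame_width):
    # Target a third line on the side the subject is facing away from,
    # so the frame leaves room in the facing direction.
    target = frame_width / 3 if facing == "right" else 2 * frame_width / 3
    offset = target - subject_x
    if abs(offset) < frame_width * 0.05:
        return "composition OK"
    direction = "right" if offset > 0 else "left"
    return "move subject %s in frame by %d px" % (direction, abs(offset))
```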
BALE DETECTION AND CLASSIFICATION USING STEREO CAMERAS
An apparatus comprises a sensor comprising a left camera and a right camera. A processor is coupled to the sensor. The processor is configured to produce an image and disparity data for the image, and search for a vertical object within the image using the disparity data. The processor is also configured to determine whether the vertical object is a bale of material using the image, and compute an orientation of the bale relative to the sensor using the disparity data. The sensor and processor can be mounted for use on an autonomous bale mover comprising an integral power system, a ground-drive system, a bale loading system, and a bale carrying system.
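The depth and orientation computations from disparity could be sketched as below. The rectified-stereo relation Z = f * B / d is standard; deriving the bale's yaw from the depth difference across its visible face is an illustrative geometry assumption, not the patent's specific method.

```python
import math

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Standard rectified-stereo relation: Z = f * B / d.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def bale_orientation(left_edge, right_edge):
    # Given (x, Z) positions for the bale face's left and right visible
    # edges (Z recovered from their disparities), estimate the yaw of
    # the face relative to the sensor from the depth change across it.
    (x1, z1), (x2, z2) = left_edge, right_edge
    return math.degrees(math.atan2(z2 - z1, x2 - x1))
```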