G06V10/751

DETECTION RESULT OUTPUT METHOD, ELECTRONIC DEVICE AND MEDIUM
20230014409 · 2023-01-19

A detection result output method, an electronic device, and a medium are provided. The detection result output method includes: obtaining first image information of a first object, where the first image information includes skin information of the first object; outputting a target detection result in a case that a matching degree between a first image and a target image meets a first preset condition; and outputting a first detection result in a case that the matching degree between the first image and the target image does not meet the first preset condition. The target detection result is a detection result corresponding to the target image, and the first detection result is a detection result corresponding to the first image.
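The output selection described above reduces to a threshold test on the matching degree. A minimal sketch follows, where the numeric threshold standing in for the "first preset condition" and the string results are illustrative assumptions, not values from the patent:

```python
def select_detection_result(matching_degree: float,
                            target_result: str,
                            first_result: str,
                            threshold: float = 0.9) -> str:
    """Return the target image's detection result when the matching degree
    meets the preset condition, otherwise the first image's own result."""
    if matching_degree >= threshold:
        return target_result  # reuse the result associated with the target image
    return first_result       # fall back to the result for the first image
```

A matching degree of 0.95 against a 0.9 threshold would therefore return the target image's detection result; 0.5 would return the first image's.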

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD
20230013468 · 2023-01-19

An information processing system includes an imaging unit that generates an image signal by imaging and an information processing device. The information processing device performs at least one of plural kinds of image processing on a taken image corresponding to the image signal. The information processing device specifies an object corresponding to a partial image included in the taken image on the basis of a state of that object or a degree of reliability given to a processing result of the performed image processing.

INFORMATION PROCESSING APPARATUS
20230012843 · 2023-01-19

An autonomous driving system for a vehicle reduces the amount of computation for object extraction carried out by a DNN, using information on a traveling environment or the like. An information processing apparatus including a processor, a memory, and an arithmetic unit that executes a computation using an inference model is provided. The information processing apparatus includes a DNN processing unit that receives external information, the DNN processing unit extracting an external object from the external information using the inference model, and a processing content control unit that controls processing content of the DNN processing unit. The DNN processing unit includes an object extracting unit that executes the inference model in a deep neural network having a plurality of layers of neurons, and the processing content control unit includes an execution layer determining unit that determines the layers used by the object extracting unit.
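The execution layer determining unit can be pictured as mapping a coarse environment label to a layer budget. The environment labels and the fractions below are illustrative assumptions; the patent does not specify how the determination is made:

```python
def determine_execution_layers(total_layers: int, environment: str) -> int:
    """Pick how many DNN layers the object extracting unit should run,
    based on a coarse traveling-environment label (illustrative mapping)."""
    # Simpler scenes tolerate a shallower network; complex scenes use full depth.
    layer_fraction = {"highway": 0.5, "suburban": 0.75, "urban": 1.0}
    fraction = layer_fraction.get(environment, 1.0)  # default: full depth
    return max(1, int(total_layers * fraction))
```

With a 12-layer network, a "highway" label would run only 6 layers, halving the per-frame computation, while an unrecognized environment conservatively runs all 12.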

MICROWAVE IDENTIFICATION METHOD AND SYSTEM
20230014948 · 2023-01-19

The present disclosure provides a microwave identification method implemented on at least one device including at least one processor and at least one storage device, the method including: the at least one processor obtains microwave data; the at least one processor generates an image of one or more objects based on the microwave data; the at least one processor obtains a model of each of the one or more objects; and based on the model of each of the one or more objects, the at least one processor identifies the one or more objects in the image of the one or more objects.

Electronic endoscope processor and electronic endoscopic system
11701032 · 2023-07-18

An electronic endoscope processor includes a converting means for converting each piece of pixel data that is made up of n (n≥3) types of color components and constitutes a color image of a biological tissue in a body cavity into a piece of pixel data that is made up of m (m≥2) types of color components, m being smaller than n; an evaluation value calculating means for calculating, for each pixel of the color image, an evaluation value related to a target illness based on the converted pieces of pixel data that are made up of m types of color components; and a lesion index calculating means for calculating a lesion index for each of a plurality of types of lesions related to the target illness based on the evaluation values calculated for the pixels of the color image.
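The abstract leaves the n-to-m conversion, the per-pixel evaluation value, and the lesion-index aggregation unspecified. The sketch below makes one illustrative set of assumptions: n=3 (R, G, B) reduced to m=2 by dropping blue, a redness ratio as the evaluation value, and a lesion index defined as the fraction of pixels above a threshold. None of these choices come from the patent:

```python
def convert_pixel(rgb):
    """Reduce an n=3 (R, G, B) pixel to m=2 components.
    Dropping blue is one illustrative conversion."""
    r, g, b = rgb
    return (r, g)

def evaluation_value(two_comp):
    """Per-pixel evaluation value related to a target illness,
    sketched here as a redness ratio in [0, 1]."""
    r, g = two_comp
    return r / (r + g) if (r + g) else 0.0

def lesion_index(pixels, threshold=0.6):
    """Lesion index for one lesion type: the fraction of pixels whose
    evaluation value exceeds a threshold (an assumed aggregation)."""
    values = [evaluation_value(convert_pixel(p)) for p in pixels]
    flagged = sum(v > threshold for v in values)
    return flagged / len(values) if values else 0.0
```

Repeating `lesion_index` with per-lesion thresholds (or per-lesion evaluation functions) would yield the claimed plurality of lesion indices.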

METHOD AND SYSTEM FOR CONFIDENCE LEVEL DETECTION FROM EYE FEATURES

State-of-the-art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using an ML-based approach. The method generates an overall confidence level label based on the subject's performance during an interaction, wherein the interaction being analyzed is captured as a video sequence focusing on the face of the subject. For each frame, facial features comprising an Eye-Aspect Ratio (EAR), a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein HDs, VDs, HSs and VPs are features derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
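Of the per-frame features listed, the Eye-Aspect Ratio has a standard published form (Soukupova and Cech) computed from six eye landmarks; assuming the patent uses that standard formulation, it can be sketched as:

```python
import math

def eye_aspect_ratio(p):
    """Eye-Aspect Ratio from six eye landmarks p[0]..p[5], ordered as in
    Soukupova & Cech: EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Low EAR indicates a closed or squinting eye."""
    dist = math.dist  # Euclidean distance between two landmark points
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))
```

One EAR value per frame, stacked with the other five features over the window, would form the per-timestep input vector fed to the Bi-LSTM.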

SYSTEMS AND METHODS OF CONTRASTIVE POINT COMPLETION WITH FINE-TO-COARSE REFINEMENT
20230019972 · 2023-01-19

An electronic apparatus performs a method of recovering a complete and dense point cloud from a partial point cloud. The method includes: constructing a sparse but complete point cloud from the partial point cloud through a contrastive teacher-student neural network; and transforming the sparse but complete point cloud to the complete and dense point cloud. In some embodiments, the contrastive teacher-student neural network has a dual network structure comprising a teacher network and a student network both sharing the same architecture. The teacher network is a point cloud self-reconstruction network, and the student network is a point cloud completion network.
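The second stage, transforming the sparse but complete cloud into a dense one, is a learned refinement in the patent. As a purely geometric stand-in for that idea, one can double the point count by inserting midpoints between each point and its nearest neighbor; this sketch shows only the shape of the sparse-to-dense step, not the actual network:

```python
import math

def nearest_neighbor(points, i):
    """Index of the point closest to points[i] (naive O(n) scan)."""
    return min((j for j in range(len(points)) if j != i),
               key=lambda j: math.dist(points[i], points[j]))

def densify(points):
    """Double the point count by inserting the midpoint between every
    point and its nearest neighbor - a naive geometric stand-in for the
    learned sparse-to-dense transformation."""
    out = list(points)
    for i in range(len(points)):
        j = nearest_neighbor(points, i)
        out.append(tuple((a + b) / 2 for a, b in zip(points[i], points[j])))
    return out
```

Applying `densify` repeatedly mimics the coarse-to-fine growth of point density, whereas the patent's refinement network also corrects point positions, which this sketch does not.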

Field Change Detection and Alerting System Using Field Average Crop Trend
20230017169 · 2023-01-19

A system and method for detecting changes in an agricultural field uses a time series of target images of the agricultural field, in which a vegetation index value is calculated for each target image. A target trend line is calculated from the time series of the vegetation index values. A time series of candidate images of one or more candidate fields, having one or more attributes that correspond to one or more attributes of the agricultural field, is also acquired; an expected trend line is determined from vegetation index values calculated for the respective candidate images. An alert is generated in response to a deviation of the target trend line from the expected trend line that meets alert criteria.
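The pipeline above can be sketched end to end. NDVI is used here as a representative vegetation index (the patent does not fix a specific one), the trend lines are ordinary least-squares fits, and the mean-gap alert criterion is an assumption:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, one common vegetation index."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def linear_trend(times, values):
    """Least-squares (slope, intercept) trend line through (t, v) pairs."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return slope, mv - slope * mt

def should_alert(target_line, expected_line, times, max_deviation):
    """Alert when the mean gap between the target and expected trend lines
    over the window exceeds max_deviation (an assumed alert criterion)."""
    gaps = [abs((target_line[0] * t + target_line[1])
                - (expected_line[0] * t + expected_line[1])) for t in times]
    return sum(gaps) / len(gaps) > max_deviation
```

A target field whose NDVI trend sags well below the expected trend of attribute-matched candidate fields would trip the alert; small deviations within the tolerance would not.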

SYSTEM AND METHOD IN THE PREDICTION OF TARGET VEHICLE BEHAVIOR BASED ON IMAGE FRAME AND NORMALIZATION
20230015357 · 2023-01-19

An apparatus includes at least one camera configured to capture a series of image frames for traffic lanes in front of an ego vehicle, where each of the series of image frames is captured at a different one of a plurality of times. A target object detection and tracking controller is configured to process each of the image frames using pixel measurements extracted from the respective image frame to determine, from the pixel measurements, a predicted time to line crossing for a target vehicle detected in the respective image frame at a time corresponding to capture of the respective image frame.
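Once the pixel measurements are calibrated into metric lateral offset and velocity (the calibration itself is outside this sketch), the time-to-line-crossing prediction reduces to a ratio. The sign convention below, positive velocity meaning motion toward the line, is an assumption:

```python
def time_to_line_crossing(lateral_offset_m: float,
                          lateral_velocity_mps: float):
    """Predicted time until the target vehicle crosses the lane line:
    remaining lateral distance divided by lateral speed toward the line.
    Returns None when the vehicle is not moving toward the line."""
    if lateral_velocity_mps <= 0:
        return None  # moving away from, or parallel to, the line
    return lateral_offset_m / lateral_velocity_mps
```

A target vehicle 1.5 m from the line and closing at 0.5 m/s would be predicted to cross in 3 s; re-estimating this at each of the plurality of capture times yields the per-frame predictions the abstract describes.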

IDENTITY RECOGNITION UTILIZING FACE-ASSOCIATED BODY CHARACTERISTICS

Techniques are disclosed for determining whether to include a bodyprint in a cluster of bodyprints associated with a recognized person. For example, a device performs facial recognition to identify the identity of a first person. The device also identifies and stores physical characteristic information of the first person, the stored information associated with the identity of the first person based on the recognized face. Subsequently, the device receives a second video feed showing an image of a second person whose face is also determined to be recognized by the device. The device then generates a quality score for physical characteristics in the image of the second person. The device can then add the image with the physical characteristics to a cluster of images associated with the person if the quality score is above a threshold, or discard the image if not.
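The final admit-or-discard decision is a simple threshold gate on the quality score. A minimal sketch, where the threshold value is illustrative rather than taken from the patent:

```python
def maybe_add_to_cluster(cluster: list, image, quality_score: float,
                         threshold: float = 0.8) -> bool:
    """Add the image's bodyprint to the recognized person's cluster only
    when its quality score clears the threshold; otherwise discard it.
    Returns True if the image was added."""
    if quality_score >= threshold:
        cluster.append(image)
        return True
    return False  # low-quality bodyprints are discarded, not clustered
```

Gating on quality keeps blurry or partially occluded bodyprints from polluting the cluster that later drives face-free identity matches.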