Patent classifications
G06V10/12
SEMANTIC ANNOTATION OF SENSOR DATA WITH OVERLAPPING PHYSICAL FEATURES
A method for semantic annotation of sensor data may include obtaining sensor data representing an image of a geographic area. The boundary points defining a first polygon in the image of the geographic area may be determined based on the sensor data. An overlap between the first polygon and a second polygon in the image of the geographic area may be detected based at least on the boundary points defining the first polygon. At least one of the first polygon or the second polygon may be modified to remove the overlap between the first polygon and the second polygon. An annotation corresponding to the first polygon may be generated based on the modifying of at least one of the first polygon or the second polygon. The annotation may identify a physical feature within the geographic area. Related systems and computer program products are also provided.
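The abstract above does not specify the polygon geometry or the overlap-removal rule; as a minimal sketch, assuming the polygons are simplified to axis-aligned rectangles and that the first polygon is clipped along the axis of least intrusion, the overlap detection and removal steps might look like this (all function names are hypothetical):

```python
def boxes_overlap(a, b):
    """a, b: (xmin, ymin, xmax, ymax). True if the interiors intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def remove_overlap(a, b):
    """Shrink box `a` so it no longer overlaps `b`.

    Tries clipping each of a's four edges to b's nearest edge and keeps
    the clip that sacrifices the least extent of `a`.
    """
    if not boxes_overlap(a, b):
        return a
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    clips = [
        (bx0 - ax1, (ax0, ay0, bx0, ay1)),  # pull a's right edge back to b's left
        (ax0 - bx1, (bx1, ay0, ax1, ay1)),  # push a's left edge up to b's right
        (by0 - ay1, (ax0, ay0, ax1, by0)),  # pull a's top edge down to b's bottom
        (ay0 - by1, (ax0, by1, ax1, ay1)),  # push a's bottom edge up to b's top
    ]
    # Choose the clip with the smallest displacement.
    _, clipped = min(clips, key=lambda c: abs(c[0]))
    return clipped
```

An annotation for the first polygon would then be generated from the clipped coordinates. Real implementations would operate on arbitrary polygons (e.g., via polygon clipping); the rectangle case is only meant to illustrate the detect-then-modify flow.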
DISPLAY APPARATUS AND METHOD OF DRIVING THE SAME
A display apparatus includes: a display panel including a plurality of pixels; a data driver which applies data voltages to the pixels; a gate driver which applies gate signals to the pixels; and a driving controller which controls the data driver and the gate driver. The driving controller divides the display panel into a plurality of panel blocks, calculates a skin color inclusion ratio of each of the panel blocks based on input image data, determines at least one face region candidate block among the panel blocks based on the skin color inclusion ratio, determines a face region block of the at least one face region candidate block based on the at least one face region candidate block and face matching data, and performs image quality processing on the face region block.
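The abstract does not disclose the block size, skin-color classifier, or candidate threshold; as a minimal sketch, assuming the skin-color classification has already been applied per pixel (yielding a 0/1 mask) and assuming a hypothetical threshold, the per-block skin color inclusion ratio and candidate selection could be computed like this:

```python
def skin_ratio_blocks(skin_mask, block_h, block_w, threshold=0.3):
    """Divide a per-pixel skin mask into blocks and return candidate blocks.

    skin_mask: 2D list of 0/1 flags (1 = pixel classified as skin color).
    Returns (row, col) indices of blocks whose skin-pixel ratio >= threshold.
    """
    h, w = len(skin_mask), len(skin_mask[0])
    candidates = []
    for br in range(0, h, block_h):
        for bc in range(0, w, block_w):
            block = [skin_mask[r][c]
                     for r in range(br, min(br + block_h, h))
                     for c in range(bc, min(bc + block_w, w))]
            ratio = sum(block) / len(block)  # skin color inclusion ratio
            if ratio >= threshold:
                candidates.append((br // block_h, bc // block_w))
    return candidates
```

Per the abstract, the candidate blocks would then be narrowed to a face region block using face matching data before image quality processing is applied; that matching step is not specified and is omitted here.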
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND GENERATION METHOD FOR TRAINED MODEL
An information processing device that includes: an image acquisition unit that acquires a catheter image obtained by an image acquisition catheter inserted into a first cavity; and a first classification data output unit configured to input the acquired catheter image to a first classification trained model and output the resulting first classification data. Upon receiving input of the catheter image, the first classification trained model outputs first classification data in which a biological tissue region and a non-biological tissue region are classified as different regions, the non-biological tissue region including a first inner cavity region inside the first cavity and a second inner cavity region inside a second cavity into which the image acquisition catheter is not inserted. The first classification trained model is generated using first training data that indicates at least the biological tissue region and the non-biological tissue region including the first inner cavity region and the second inner cavity region.
Fast 3D Radiography with Multiple Pulsed X-ray Sources by Deflecting Tube Electron Beam using Electro-Magnetic Field
An X-ray imaging system using multiple pulsed X-ray sources to perform highly efficient and ultrafast 3D radiography is presented. Multiple pulsed X-ray sources are mounted on a moving structure to form an array of sources. The X-ray sources move simultaneously relative to an object along a pre-defined arc track at a constant speed as a group. The electron beam inside each individual X-ray tube is deflected by a magnetic or electric field to move the focal spot a small distance. When the focal spot of an X-ray tube has a speed equal to the group speed but in the opposite direction, the X-ray source and X-ray flat panel detector are activated through an external exposure control unit, so that the source tube is momentarily stationary in effect. The 3D scan can cover a much wider sweep angle in a much shorter time, and image analysis can also be done in real time.
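The key timing idea, that deflecting the focal spot at a speed equal and opposite to the group speed makes it momentarily stationary in the lab frame, can be illustrated with a one-dimensional simplification (the speeds and time values below are illustrative, not from the abstract):

```python
def focal_spot_lab_position(t, group_speed, deflect_speed, x0=0.0):
    """Lab-frame focal-spot position (1-D simplification along the arc).

    The tube translates with the gantry at group_speed, while electromagnetic
    deflection moves the focal spot within the tube at deflect_speed.
    """
    return x0 + group_speed * t + deflect_speed * t

# During the exposure window, the deflection speed is set equal and opposite
# to the group speed, so the lab-frame focal-spot position does not change:
positions = [focal_spot_lab_position(t, group_speed=5.0, deflect_speed=-5.0)
             for t in (0.0, 0.001, 0.002)]
```

With no deflection (`deflect_speed=0.0`), the focal spot would drift with the gantry during the exposure and blur the projection; the counter-motion cancels that drift for the duration of the pulse.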
Image sensor with integrated single object class detection deep neural network (DNN)
An image sensor, electronic device and method thereof that performs on-sensor single object class detection using an on-sensor single object class detection deep neural network (DNN), such as a face detection DNN. The single object class detection DNN includes a pixel array layer configured to capture an image and transfer image data of the captured image, and a logic and single object class detection deep neural network (DNN) layer that receives the image data directly from the pixel array layer and outputs the image data with the single object class detection data to a communication bus of an electronic device.
SYSTEM FOR A MOTOR VEHICLE AND METHOD FOR ASSESSING THE EMOTIONS OF A DRIVER OF A MOTOR VEHICLE
A system for a motor vehicle includes a sensor apparatus having a sensor for determining motor vehicle data and/or driving data of the motor vehicle, and an evaluation unit. The evaluation unit includes an emotion determination unit configured to assess the emotions of the driver of the motor vehicle on the basis of sensor signals transmitted by the sensor.
SYSTEM AND METHOD FOR IDENTIFYING ACTIVITY IN AN AREA USING A VIDEO CAMERA AND AN AUDIO SENSOR
A system and method for identifying activity in an area, even during periods of poor visibility, using a video camera and an audio sensor are disclosed. The video camera is used to identify visible events of interest and the audio sensor is used to capture audio occurring temporally with the identified visible events of interest. A sound profile is determined for each of the identified visible events of interest based on sounds captured by the audio sensor during the corresponding identified visible event of interest. Then, during a time of poor visibility, a subsequent sound event is identified in a subsequent audio stream captured by the audio sensor. One or more sound characteristics of the subsequent sound event are compared with the sound profiles associated with each of the identified visible events of interest, and if there is a match, one or more matching sound profiles are filtered out from the subsequent audio stream.
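The abstract does not specify which sound characteristics are compared or how a match is decided; as a minimal sketch, assuming each sound profile is a feature vector (e.g., per-band energies) and that matching is Euclidean distance under a hypothetical tolerance, the compare-and-filter step could look like this:

```python
import math

def matches_profile(event_features, profile, tol=1.0):
    """Compare a sound event's feature vector against a stored sound profile."""
    return math.dist(event_features, profile) <= tol

def filter_known_sounds(events, profiles, tol=1.0):
    """Drop events matching any stored profile; keep unexplained sounds."""
    return [e for e in events
            if not any(matches_profile(e["features"], p, tol) for p in profiles)]

# Illustrative data: one learned profile and two subsequent sound events.
profiles = [[0.9, 0.1, 0.0]]
events = [{"id": "a", "features": [0.85, 0.12, 0.02]},
          {"id": "b", "features": [0.1, 0.2, 0.9]}]
remaining = filter_known_sounds(events, profiles, tol=0.2)
```

Here event "a" matches the stored profile and is filtered out of the stream, while event "b" survives as an unexplained sound that may warrant attention during poor visibility.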