Patent classifications
H04N5/247
USE OF DBSCAN FOR LANE DETECTION
A system and method of lane detection using density-based spatial clustering of applications with noise (DBSCAN) includes capturing an input image with one or more optical sensors disposed on a motor vehicle. The method further includes passing the input image through a heterogeneous convolutional neural network (HCNN). The HCNN generates an HCNN output. The method further includes processing the HCNN output with DBSCAN to selectively classify outlier data points and clustered data points in the HCNN output. The method further includes generating a DBSCAN output selectively defining the clustered data points as predicted lane lines within the input image. The method further includes marking the input image by overlaying the predicted lane lines on the input image.
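The clustering stage described above can be sketched with scikit-learn's DBSCAN implementation. The point coordinates below stand in for the HCNN output, and the `eps`/`min_samples` values are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-in for HCNN output: candidate lane pixels (x, y) for two
# lane lines plus a few scattered outliers; values are illustrative.
rng = np.random.default_rng(0)
left_lane = np.column_stack([100.0 + rng.normal(0, 2, 50),
                             np.linspace(0, 400, 50)])
right_lane = np.column_stack([300.0 + rng.normal(0, 2, 50),
                              np.linspace(0, 400, 50)])
noise = np.array([[10.0, 10.0], [390.0, 380.0], [200.0, 50.0], [50.0, 350.0]])
points = np.vstack([left_lane, right_lane, noise])

# eps and min_samples are illustrative tuning values.
labels = DBSCAN(eps=30, min_samples=4).fit_predict(points)

# Label -1 marks outlier data points; each non-negative label is one
# cluster of lane pixels, i.e. one predicted lane line.
clusters = set(labels) - {-1}
print(f"predicted lane lines: {len(clusters)}")
```

Points labeled `-1` are the outliers DBSCAN rejects; each remaining cluster corresponds to one predicted lane line that could then be overlaid on the input image.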
ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
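The pose-driven camera selection can be illustrated with a simple geometric heuristic: rank cameras by how directly their viewing direction opposes the item's visible face normal, and keep the best-placed ones. The rig layout, camera names, and scoring rule below are assumptions for illustration, not details from the patent.

```python
import numpy as np

def select_cameras(face_normal, cameras, k=2):
    """Pick the k cameras whose unit viewing direction is most opposed
    to the item's visible face normal (i.e., best view of that face).
    `cameras` maps a camera id to its unit viewing direction."""
    scores = {cam_id: -float(np.dot(view_dir, face_normal))
              for cam_id, view_dir in cameras.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative rig: three cameras looking at the platform.
cameras = {
    "cam_top":   np.array([0.0, 0.0, -1.0]),   # looks straight down
    "cam_side":  np.array([-1.0, 0.0, 0.0]),   # looks from the +x side
    "cam_angle": np.array([-0.7, 0.0, -0.7]),  # oblique 45-degree view
}

# An item lying flat on the platform: its visible face normal points up.
chosen = select_cameras(np.array([0.0, 0.0, 1.0]), cameras)
print(chosen)   # the overhead and oblique cameras outrank the side camera
```

A real system would derive the pose from the depth image; here the normal is supplied directly to keep the sketch self-contained.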
SYSTEM AND METHOD FOR REFINING AN ITEM IDENTIFICATION MODEL BASED ON FEEDBACK
A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing an item on the platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier of the item. The system feeds the identifier of the item and the images to the item identification model. The system retrains the item identification model to learn to associate the item with the images. The system updates the set of features based on the determined association between the item and the images.
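The feedback loop above can be sketched with a toy nearest-centroid classifier standing in for the item identification model. The feature vectors and the running-mean update are illustrative; a production model would be retrained on the raw images.

```python
import numpy as np

class ItemIdentificationModel:
    """Toy nearest-centroid identifier; a stand-in for the trained model."""
    def __init__(self):
        self.centroids = {}   # identifier -> mean feature vector
        self.counts = {}      # identifier -> number of observed images

    def identify(self, features):
        if not self.centroids:
            return None
        return min(self.centroids,
                   key=lambda i: np.linalg.norm(self.centroids[i] - features))

    def retrain(self, identifier, features):
        """Feedback step: associate the corrected identifier with the
        extracted features via a running mean over observed images."""
        n = self.counts.get(identifier, 0)
        prev = self.centroids.get(identifier, np.zeros_like(features))
        self.centroids[identifier] = (prev * n + features) / (n + 1)
        self.counts[identifier] = n + 1

model = ItemIdentificationModel()
model.retrain("apple", np.array([1.0, 0.1]))
model.retrain("soda_can", np.array([0.0, 0.9]))

feats = np.array([0.1, 0.8])       # features extracted from captured images
guess = model.identify(feats)      # initial (incorrect) identification
# The user flags the result and supplies the true identifier,
# which is fed back so the model learns the association:
model.retrain("apple", feats)
```

The same `retrain` call serves both initial training and the correction step, mirroring how the abstract feeds the received identifier and images back into the model.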
DATABASE MANAGEMENT SYSTEM AND METHOD FOR UPDATING A TRAINING DATASET OF AN ITEM IDENTIFICATION MODEL
A system for updating a training dataset of an item identification model determines that an item is not included in a training dataset. In response to determining that the item is not included in the training dataset, the system obtains an identifier of the item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system captures images of the item. The system extracts a set of features associated with the item from the images. The system associates the item with the identifier and the set of features. The system adds a new entry to the training dataset, where the new entry represents the item labeled with the identifier and the set of features.
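The dataset-update flow can be sketched as follows. The identifier format, the mean-intensity feature extractor, and the dict-based dataset are all stand-ins for illustration.

```python
import numpy as np

def update_training_dataset(dataset, identifier, images, extract_features):
    """Add a new labeled entry for an item missing from the dataset.
    `dataset` maps identifier -> list of extracted features."""
    if identifier in dataset:
        return False                        # item already covered; no update
    features = [extract_features(img) for img in images]
    dataset[identifier] = features          # new entry: identifier + features
    return True

def extract(img):
    # Illustrative "feature": the image's mean intensity.
    return float(np.mean(img))

dataset = {}
images = [np.full((4, 4), 0.2), np.full((4, 4), 0.25)]
added = update_training_dataset(dataset, "sku_12345", images, extract)
print(added, dataset["sku_12345"])
```

The membership check up front mirrors the abstract's trigger: the entry is only created once the system has determined the item is absent from the training dataset.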
METHODS AND SYSTEMS FOR CAMERA SHARING BETWEEN AUTONOMOUS DRIVING AND IN-VEHICLE INFOTAINMENT ELECTRONIC CONTROL UNITS
An imaging system according to at least one embodiment of the present disclosure includes a plurality of cameras disposed at different locations on a body of a vehicle, each camera of the plurality of cameras configured to collect image data from an environment around the vehicle; an autonomous drive (“AD”) electronic control unit (“ECU”) comprising an AD processor; an in-vehicle infotainment (“IVI”) ECU comprising an IVI processor; and a communication path running from a first camera of the plurality of cameras to the AD ECU, wherein the AD ECU splits the communication path into a first path that is interconnected to the AD processor and a second path exiting the AD ECU and that is interconnected to the IVI ECU, and wherein image data collected by the first camera is sent along the communication path to the AD ECU before being sent to the IVI ECU.
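The split communication path can be modeled with two queues: frames always reach the AD ECU first, which serves its own processor and forwards the frame along the second path to the IVI ECU. This queue-based model is a sketch of the ordering only, not of the actual hardware interconnect.

```python
from queue import Queue

camera_to_ad = Queue()   # communication path from the first camera
ad_to_ivi = Queue()      # second path exiting the AD ECU

def ad_ecu_step():
    frame = camera_to_ad.get()
    ad_to_ivi.put(frame)               # forward a copy toward the IVI ECU
    return f"AD processed {frame}"     # first path: the AD processor consumes it

def ivi_ecu_step():
    frame = ad_to_ivi.get()
    return f"IVI displays {frame}"

camera_to_ad.put("frame_001")
ad_out = ad_ecu_step()
ivi_out = ivi_ecu_step()
print(ad_out, "|", ivi_out)
```

The order of the two steps reflects the claim's constraint that image data is sent along the path to the AD ECU before being sent to the IVI ECU.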
IMAGE CROPPING USING DEPTH INFORMATION
A device configured to capture a first image of an item on a platform using a camera and to determine a first number of pixels in the first image that corresponds with the item. The device is further configured to capture a first depth image of the item on the platform using a three-dimensional (3D) sensor and to determine a second number of pixels within the first depth image that corresponds with the item. The device is further configured to determine that the difference between the first number of pixels in the first image and the second number of pixels in the first depth image is less than a difference threshold value, to extract the pixels corresponding with the item in the first image from the first image to generate a second image, and to output the second image.
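The pixel-count consistency check can be sketched as follows, with boolean masks standing in for the 2D-image and depth-image segmentations and an illustrative threshold value.

```python
import numpy as np

def crop_item(image, item_mask_2d, item_mask_depth, difference_threshold):
    """Crop the item only when the 2D-image and depth-image
    segmentations roughly agree on the item's size in pixels."""
    first_count = int(np.count_nonzero(item_mask_2d))      # pixels in first image
    second_count = int(np.count_nonzero(item_mask_depth))  # pixels in depth image
    if abs(first_count - second_count) >= difference_threshold:
        return None    # segmentations disagree; skip the crop
    ys, xs = np.nonzero(item_mask_2d)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

image = np.arange(100).reshape(10, 10)
mask_2d = np.zeros((10, 10), dtype=bool)
mask_2d[2:6, 3:8] = True                  # 20 item pixels in the image
mask_depth = np.zeros((10, 10), dtype=bool)
mask_depth[2:6, 3:7] = True               # 16 item pixels in the depth image

crop = crop_item(image, mask_2d, mask_depth, difference_threshold=10)
print(crop.shape)   # (4, 5): the item's bounding box
```

When the two counts diverge by more than the threshold, the sketch returns `None` rather than cropping, reflecting the abstract's requirement that the difference be below the threshold before the second image is generated.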
Methods and apparatus for label compensation during specimen characterization
A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization method may be used to provide input to an HILN (H, I, and/or L, or N) detection method. The characterization method includes capturing one or more images of a labeled specimen container including a serum or plasma portion from multiple viewpoints, processing the one or more images to provide segmentation data including identification of a label-containing region, determining a closest label match of the label-containing region to a reference label configuration selected from a reference label configuration database, and generating a combined representation based on the segmentation data and the closest label match. Using the combined representation allows for compensation of the light-blocking effects of the label-containing region. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.
Emergency automated gunshot lockdown system (EAGL)
The Emergency Automatic Gunshot Lockdown (EAGL) System detects gunshots and executes at least one predetermined adaptive response action, such as notifying law enforcement of an active shooter, providing access control measures such as locking down soft target areas, and alerting building occupants of an active shooter situation. A gunshot is detected and verified using a triple validation system. Once a firearm is discharged, the EAGL system sends "real time" data to building officials, law enforcement, and building occupants notifying them of an active shooter situation. Simultaneously, predetermined commands are sent to access control devices for perimeter, office, classroom, and other soft target areas to lock down and stay secure, to keep the shooter from entering these soft target areas, and to prevent the shooter from entering other buildings.
Systems and methods to check-in shoppers in a cashier-less store
Systems and techniques are provided for linking subjects in an area of real space with user accounts. The user accounts are linked with client applications executable on mobile computing devices. A plurality of cameras are disposed above the area. The cameras in the plurality of cameras produce respective sequences of images in corresponding fields of view in the real space. A processing system is coupled to the plurality of cameras. The processing system includes logic to determine locations of subjects represented in the images. The processing system further includes logic to match the identified subjects with user accounts by identifying locations of the mobile computing devices executing client applications in the area of real space and matching locations of the mobile computing devices with locations of the subjects.
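The location-matching step above can be sketched as a minimum-cost assignment between subject positions (derived from the cameras) and mobile-device positions (reported by the client applications). The coordinates are illustrative, and SciPy's Hungarian-algorithm solver stands in for whatever matching logic the system actually uses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative positions (meters) in the area of real space.
subject_locations = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 7.0]])  # from cameras
device_locations = np.array([[7.8, 1.2], [2.1, 2.9], [5.2, 6.9]])   # from phones

# Cost matrix: distance between every subject and every device.
cost = np.linalg.norm(
    subject_locations[:, None, :] - device_locations[None, :, :], axis=-1)

# Globally optimal one-to-one matching of subjects to devices,
# and thereby to the user accounts linked with those devices.
subj_idx, dev_idx = linear_sum_assignment(cost)
for s, d in zip(subj_idx, dev_idx):
    print(f"subject {s} <-> device {d} (distance {cost[s, d]:.2f} m)")
```

A one-to-one assignment avoids linking two subjects to the same account when several people stand close together, which a naive nearest-neighbor match could do.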
Detection, counting and identification of occupants in vehicles
A method of detecting occupants in a vehicle includes detecting an oncoming vehicle and acquiring a plurality of images of occupants in the vehicle in response to detection of the vehicle. The method includes performing automated facial detection on the plurality of images and adding a facial image for each face detected to a gallery of facial images for the occupants of the vehicle. The method includes performing automated facial recognition on the gallery of facial images to group the facial images into groups based on which occupant is in the respective facial images, and counting the resulting groups of unique facial images to determine how many occupants are in the vehicle.
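The grouping-and-counting step can be sketched with a greedy grouping over face embeddings: faces whose embeddings fall within a distance threshold are treated as the same occupant, and the number of groups gives the occupant count. The embeddings and threshold are illustrative; a real system would use a trained face-recognition model.

```python
import numpy as np

def count_occupants(face_embeddings, same_person_threshold=0.5):
    """Greedily group face images whose embeddings are close,
    then count the groups; one group per unique occupant."""
    groups = []   # each group holds embeddings judged to be one occupant
    for emb in face_embeddings:
        for group in groups:
            if np.linalg.norm(group[0] - emb) < same_person_threshold:
                group.append(emb)
                break
        else:
            groups.append([emb])   # no match: a new unique occupant
    return len(groups)

# Five face detections across the burst of images, from two occupants:
gallery = [np.array([0.9, 0.1]), np.array([0.85, 0.15]),
           np.array([0.1, 0.95]), np.array([0.12, 0.9]),
           np.array([0.88, 0.12])]
print(count_occupants(gallery))   # 2
```

Grouping before counting is what lets the method tolerate multiple detections of the same face across the image burst without over-counting occupants.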