Patent classifications
G06V20/13
Advanced driver-assistance system (ADAS) operation utilizing algorithmic skyline detection
Disclosed are techniques for improving an advanced driver-assistance system (ADAS) by pre-processing image data. In one embodiment, a method is disclosed comprising receiving one or more image frames captured by an image sensor installed on a vehicle; identifying a position of a skyline in the one or more image frames, the position comprising a horizontal position of the skyline; cropping one or more future image frames based on the position of the skyline, the cropping generating cropped images comprising a subset of the corresponding future image frames; and processing the cropped images at an advanced driver-assistance system (ADAS).
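The cropping idea above can be sketched in a few lines. This is a minimal illustration under assumed simplifications (the skyline is taken as the image row with the sharpest brightness drop, since sky is typically brighter than ground); the function names and the synthetic frame are hypothetical, not from the patent.

```python
import numpy as np

def estimate_skyline_row(frame: np.ndarray) -> int:
    """Estimate the image row of the skyline as the row with the largest
    mean-brightness drop between consecutive rows."""
    row_means = frame.mean(axis=1)
    drops = row_means[:-1] - row_means[1:]
    return int(np.argmax(drops)) + 1

def crop_below_skyline(frame: np.ndarray, skyline_row: int) -> np.ndarray:
    """Keep only the region below the skyline, so the ADAS processes a
    smaller subset of each future frame."""
    return frame[skyline_row:, :]

# Synthetic frame: 40 bright "sky" rows above 60 dark "ground" rows.
frame = np.vstack([np.full((40, 64), 200.0), np.full((60, 64), 50.0)])
row = estimate_skyline_row(frame)         # → 40
cropped = crop_below_skyline(frame, row)  # shape (60, 64)
```

In practice the position would be estimated once (or periodically) and reused to crop subsequent frames, which is where the processing savings come from.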
Structural characteristic extraction using drone-generated 3D image data
A structural analysis computing device may generate a proposed insurance claim and/or a proposed insurance quote for an object pictured in a three-dimensional (3D) image. The structural analysis computing device may be coupled to a drone configured to capture exterior images of the object. The structural analysis computing device may include a memory, a user interface, an object sensor configured to capture the 3D image, and a processor in communication with the memory and the object sensor. The processor may access the 3D image including the object and analyze the 3D image to identify features of the object, such as by inputting the 3D image into a trained machine learning or pattern recognition program. The processor may generate a proposed claim form for a damaged object and/or a proposed quote for an uninsured object, and display the form to a user for review and/or approval.
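The downstream logic (features in, proposed form out for user review) can be sketched as a stub. The feature keys and form fields here are hypothetical stand-ins for the trained model's output, purely illustrative.

```python
def propose_form(features: dict) -> dict:
    """Illustrative stand-in for the device's downstream logic: turn
    extracted object features into a proposed form for user review."""
    if features.get("damage_detected"):
        return {"form": "proposed_claim",
                "damage_area_m2": features.get("damage_area_m2", 0.0),
                "status": "pending_user_approval"}
    # No damage found: treat the object as a candidate for a new quote.
    return {"form": "proposed_quote", "status": "pending_user_approval"}
```

Either branch leaves the result pending explicit user approval, matching the abstract's review-and-approve flow.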
System and method using deep learning machine vision to analyze localities
A system, method, and computer-readable storage medium are disclosed that execute machine vision operations to categorize a locality. At least one embodiment accesses a map image of a locality, where the map image includes geographical artefacts corresponding to entities within the locality; analyzes the map image to detect the entities in the locality using the geographical artefacts; assigns entity classes to detected entities in the locality; assigns a locality score to the locality based on entity classes included in the locality; retrieves street view images for one or more of the detected entities in the locality; and analyzes street view images of the detected entities to assign one or more further classifications to the detected entities. Other embodiments include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the method.
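The scoring step, assigning a locality score from the entity classes found in the locality, can be sketched as a weighted sum. The class names and weights below are hypothetical; the patent does not specify a scoring scheme.

```python
# Hypothetical per-class weights; unknown classes contribute nothing.
CLASS_WEIGHTS = {"school": 3, "hospital": 5, "restaurant": 1}

def locality_score(entity_classes: list) -> int:
    """Score a locality by summing the weights of the entity classes
    detected within it."""
    return sum(CLASS_WEIGHTS.get(c, 0) for c in entity_classes)

score = locality_score(["school", "restaurant", "hospital", "warehouse"])  # → 9
```

The street-view analysis stage would then refine individual entities with further classifications, which could feed back into a richer weighting.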
MAP DATA PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM
Provided are a map data processing method and apparatus, and a storage medium, which relate to the field of data processing technology and, in particular, to artificial intelligence technology, for example, computer vision, map technology, and intelligent transportation. A specific implementation includes: processing landform coverage data according to a ground-cover type and a data processing rule associated with that ground-cover type to obtain a ground-cover effect map; and generating a landform map from the ground-cover effect map and a reference map.
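The per-cover-type rule application can be sketched as a lookup and fill. The cover types, RGB colours, and grid-cell representation are illustrative assumptions, not from the abstract.

```python
# Hypothetical rendering rules keyed by cover type (RGB fill colours).
COVER_RULES = {
    "water":  (70, 130, 180),
    "forest": (34, 139, 34),
    "sand":   (237, 201, 175),
}

def cover_effect_map(coverage):
    """Apply each cover type's rule to its cells, producing an effect map
    from cell coordinate to rendered colour. A landform map would then be
    composited from this effect map and a reference map."""
    effect = {}
    for cover_type, cells in coverage:
        colour = COVER_RULES[cover_type]
        for cell in cells:
            effect[cell] = colour
    return effect

effect = cover_effect_map([("water", [(0, 0), (0, 1)]), ("forest", [(1, 0)])])
```

Keeping the rules in a table keyed by cover type mirrors the abstract's structure: each type carries its own processing rule.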
Onboard AI-based Cloud Detection System for Enhanced Satellite Autonomy Using PUS
An onboard cloud detection system comprising: a camera (1000) configured to acquire images of the Earth at predetermined acquisition intervals; and a data processing unit (2000) comprising: a cloud detection unit (2210) configured to use artificial intelligence, AI, algorithms to detect clouds; a packet utilization standard, PUS, application layer (2230) configured to issue telemetry and/or telecommands corresponding to a predetermined parameter of the output of the cloud detection unit (2210); and an interface configured to distribute the telemetry and/or telecommands to an external hardware and/or an external software terminal (3000, 4000).
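The PUS application layer's role can be sketched as packaging the detector output into a report. Service 3 with subservice 25 is the PUS housekeeping parameter report; the cloudiness threshold and the `SKIP_ACQUISITION` telecommand name are hypothetical examples of the "predetermined parameter" behaviour.

```python
def cloud_report(cloud_fraction: float, threshold: float = 0.8) -> dict:
    """Package the cloud detector's output as a PUS-style housekeeping
    parameter report (TM[3,25]); above a hypothetical cloudiness
    threshold, attach a telecommand for downstream consumers."""
    report = {"service": 3, "subservice": 25,
              "cloud_fraction": cloud_fraction}
    if cloud_fraction > threshold:
        report["telecommand"] = "SKIP_ACQUISITION"
    return report
```

The interface component would then distribute such reports to the external hardware or software terminals.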
FEATURE EXTRACTION FROM PERCEPTION DATA FOR PILOT ASSISTANCE WITH HIGH WORKLOAD TASKS
Offline task-based feature processing for aerial vehicles is provided. A system can extract features from a world model generated using sensor information captured by sensors mounted on an aerial vehicle. The system generates a label for each of the features and identifies processing levels based on the features. The system selects a processing level for each feature of a subset of the features based on a task performed by the aerial vehicle and the label associated with the feature. The system generates one or more processed features by applying the selected processing level to each feature of the subset. The system presents the one or more processed features on a display device of the aerial vehicle.
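The selection step, mapping (task, feature label) to a processing level, can be sketched as a lookup table. The tasks, labels, and level names below are hypothetical; the patent does not enumerate them.

```python
# Hypothetical (task, feature label) → processing level table.
LEVEL_RULES = {
    ("landing", "runway"):  "high_detail",
    ("landing", "terrain"): "low_detail",
    ("cruise",  "traffic"): "high_detail",
}

def select_level(task: str, label: str) -> str:
    """Pick a processing level for a feature from the current task and
    the feature's label, with a fallback for unmatched pairs."""
    return LEVEL_RULES.get((task, label), "default")

def process_features(task, labeled_features):
    """Pair each feature with its selected processing level, ready for
    rendering on the vehicle's display device."""
    return [(f, select_level(task, label)) for f, label in labeled_features]
```

Driving the mapping by task keeps high-detail processing focused on the features the pilot's current workload actually needs.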
NEUROMORPHIC CAMERAS FOR AIRCRAFT
An onboard aircraft landing system includes one or more event-based cameras disposed at known locations to capture the runway and visible surrounding features such as lights and runway markings. The event-based cameras produce a continuous stream of event data that may be quickly processed to identify both light and dark features contemporaneously, and calculate an aircraft pose relative to the runway based on the identified features and the known locations of the event-based cameras. Composite features are identified via the relative location of individual features corresponding to pixel events.
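The single-pass, contemporaneous identification of light and dark features can be sketched from the event stream itself. The `(x, y, polarity)` tuple layout is an assumed simplification of event-camera output (real streams also carry timestamps).

```python
def split_events(events):
    """Separate a stream of (x, y, polarity) pixel events into bright
    (positive-polarity) and dark feature candidates in one pass, so
    light and dark features are identified contemporaneously."""
    bright, dark = [], []
    for x, y, pol in events:
        (bright if pol > 0 else dark).append((x, y))
    return bright, dark

bright, dark = split_events([(5, 5, 1), (6, 5, 1), (20, 9, -1)])
```

Composite features would then be formed from the relative locations of these candidates, and pose solved against the known camera positions.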
AIRCRAFT CLASSIFICATION FROM AERIAL IMAGERY
A system and method are disclosed for determining a classification and sub-classification of an aircraft. The system receives an aerial image of a geographic area that includes one or more aircraft. The system inputs the aerial image into a machine learning model. The system receives an output from the machine learning model for each of the one or more aircraft. Based on the output for each aircraft, the system determines a set of geometric measurements. The system compares the set of geometric measurements to a plurality of known sets of geometric measurements. Based on the comparison, the system identifies a known set of geometric measurements from the plurality of known sets. The known set is mapped by a database to a sub-classification. The system outputs the sub-classification.
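The comparison step can be sketched as a nearest-neighbour match over the known measurement sets. The database entries below (approximate length and wingspan pairs, and the aircraft names) are illustrative assumptions, not from the patent.

```python
import math

# Hypothetical database: known (length_m, wingspan_m) → sub-classification.
KNOWN = {
    (73.9, 64.8): "Boeing 777-300ER",
    (37.6, 35.8): "Airbus A320",
}

def match_subclass(measured: tuple) -> str:
    """Return the sub-classification whose known geometric measurements
    are nearest (Euclidean distance) to the measured set."""
    best = min(KNOWN, key=lambda known: math.dist(known, measured))
    return KNOWN[best]

match_subclass((37.0, 36.0))  # → "Airbus A320"
```

A real system would likely add more dimensions (e.g. wing sweep, engine count) and a distance threshold to reject measurements that match no known type.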