Patent classifications
H04N7/18
LOW POWER MACHINE LEARNING USING REAL-TIME CAPTURED REGIONS OF INTEREST
Systems and methods are described for generating image content. The systems and methods may include, in response to receiving a request to cause a sensor of a computing device to identify image content associated with optical data captured by the sensor, detecting a first sensor data stream having a first image resolution, and detecting a second sensor data stream having a second image resolution. The systems and methods may also include identifying, by processing circuitry of the computing device, at least one region of interest in the first sensor data stream, determining cropping coordinates that define a first plurality of pixels in the at least one region of interest in the first sensor data stream, and generating a cropped image representing the at least one region of interest.
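The low-power pattern the abstract describes — detecting a region of interest on a low-resolution stream, then cropping the corresponding pixels from the high-resolution stream — can be sketched as follows. All names, frame shapes, and the example bounding box are illustrative assumptions, not details from the patent.

```python
# Sketch: detect an ROI on a low-resolution stream, scale its bounding
# box to the high-resolution stream, and crop there.
import numpy as np

def scale_box(box, lo_shape, hi_shape):
    """Map a (y0, x0, y1, x1) box from the low-res frame to the high-res frame."""
    sy = hi_shape[0] / lo_shape[0]
    sx = hi_shape[1] / lo_shape[1]
    y0, x0, y1, x1 = box
    return (int(y0 * sy), int(x0 * sx), int(y1 * sy), int(x1 * sx))

def crop_roi(hi_frame, box, lo_shape):
    """Crop the high-res frame at the coordinates the low-res ROI maps to."""
    y0, x0, y1, x1 = scale_box(box, lo_shape, hi_frame.shape[:2])
    return hi_frame[y0:y1, x0:x1]

lo = np.zeros((120, 160))          # low-resolution detection stream frame
hi = np.zeros((1080, 1440, 3))     # high-resolution capture stream frame
crop = crop_roi(hi, (30, 40, 60, 80), lo.shape)
print(crop.shape)  # (270, 360, 3)
```

Running detection only on the small frame is what saves power; the full-resolution data is touched only for the cropped region.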
MULTI-CAMERA LIVE-STREAMING METHOD AND DEVICES
The embodiments disclose a method including capturing video footage of a youth sports event using at least one video camera with a mobile application installed, transmitting the captured game footage to at least one network server with internet and Wi-Fi connectivity, recording the captured game footage on at least one database coupled to the network server, using at least one network computer coupled to the at least one network server to process and display the multi-camera live-streaming footage for a live video streaming game broadcast on a plurality of subscribed viewer digital devices, and mixing advertising into the processed footage broadcast using the at least one network computer.
IMAGING RANGE ESTIMATION DEVICE, IMAGING RANGE ESTIMATION METHOD, AND PROGRAM
An imaging range estimation device includes an image data processor configured to acquire image data imaged by a camera device and generate image data with an object name label added, a reference data generator configured to set, by using geographic information, a region within a predetermined distance that is imageable from an estimated position at which the camera device is installed and generate reference data with an object name label added, and an imaging range estimator configured to calculate a concordance rate by comparing a feature indicated by a region of an object name label of the image data with a feature indicated by a region of an object name label of the reference data, and estimate the imaging range of the camera device to be a region of the reference data that corresponds to the image data.
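The concordance-rate step above — comparing labelled regions of the camera image against labelled regions generated from geographic reference data — can be illustrated with a simple overlap measure. The representation of label regions as pixel sets and the function names are assumptions for illustration only.

```python
# Sketch: compare per-label regions of the image data against the
# reference data and return an overall overlap fraction.
def concordance_rate(image_regions, reference_regions):
    """image_regions / reference_regions: {label: set of (y, x) pixels}."""
    matched = total = 0
    for label, ref_pixels in reference_regions.items():
        img_pixels = image_regions.get(label, set())
        matched += len(img_pixels & ref_pixels)   # pixels agreeing on the label
        total += len(img_pixels | ref_pixels)     # pixels carrying the label anywhere
    return matched / total if total else 0.0

image = {"building": {(0, 0), (0, 1), (1, 0)}, "road": {(2, 2)}}
reference = {"building": {(0, 0), (0, 1)}, "road": {(2, 2), (2, 3)}}
print(concordance_rate(image, reference))  # 0.6
```

The estimator would evaluate this rate for reference regions generated at candidate positions and take the best-matching region as the imaging range.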
A METHOD FOR ADAPTING TO A DRIVER POSITION AN IMAGE DISPLAYED ON A MONITOR IN A VEHICLE CAB
The invention relates to a method for adapting, to a driver position, an image displayed on a monitor in a vehicle cab. The invention also relates to a system for adapting such an image to a driver position, and further to a vehicle comprising such a system.
Method for Controlling Vehicle-Mounted Camera by Using Mobile Device, Device, and System
Embodiments relate to the field of intelligent vehicles, and may be applied to vehicle-to-everything (V2X) scenarios. A method for controlling an operation of a vehicle-mounted camera based on computer vision, a device, and a system are provided. The method is implemented by a mobile device, and the method includes: communicatively connecting to a vehicle control apparatus, where the vehicle control apparatus includes at least one vehicle-mounted camera; configuring at least one virtual camera based on camera information, where the at least one virtual camera corresponds to the at least one vehicle-mounted camera in a one-to-one manner; and enabling the at least one virtual camera to obtain a video signal shot by the vehicle-mounted camera corresponding to the at least one virtual camera. A passenger can thereby control operation of the vehicle-mounted camera, which provides a good user experience. The method may be applied to an artificial intelligence device.
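The one-to-one virtual camera mapping can be sketched as below: for each physical camera reported in the camera information, the mobile device configures a virtual camera that relays that camera's video signal. The class, method names, and camera identifiers are hypothetical, not taken from the patent.

```python
# Sketch: configure one virtual camera per physical vehicle-mounted camera.
class VirtualCamera:
    def __init__(self, camera_id):
        self.camera_id = camera_id  # the physical camera this virtual camera proxies

    def receive(self, signal):
        """Relay a video signal obtained from the corresponding physical camera."""
        return f"virtual[{self.camera_id}]: {signal}"

def configure_virtual_cameras(camera_info):
    """camera_info: iterable of physical camera identifiers."""
    return {cid: VirtualCamera(cid) for cid in camera_info}

cams = configure_virtual_cameras(["front", "rear"])
print(cams["front"].receive("frame-001"))  # virtual[front]: frame-001
```

The indirection lets the mobile device address cameras through local virtual objects rather than talking to the vehicle control apparatus directly for each frame.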
VIDEO PROCESSING METHOD, APPARATUS AND SYSTEM
The present disclosure provides video processing methods, apparatuses and systems. The method includes: obtaining a to-be-processed video, where the to-be-processed video is obtained by performing feature removal processing for one or more objects in an original video; obtaining a feature restoration processing request for one or more to-be-processed objects; according to the feature restoration processing request for the one or more to-be-processed objects, obtaining feature image information corresponding to the one or more to-be-processed objects, where the feature image information for one of the one or more to-be-processed objects includes pixel position information of all or part of features for the one of the one or more to-be-processed objects in the original video; according to the feature image information for the one or more to-be-processed objects, performing feature restoration processing for the one or more to-be-processed objects in the to-be-processed video.
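The restoration step above can be sketched as follows: the feature image information stores, per object, original pixel values keyed by their positions in the original video, and restoration writes them back into the feature-removed frame. The data layout and names are assumptions for illustration.

```python
# Sketch: restore removed features into a processed frame using stored
# per-object pixel position information from the original video.
import numpy as np

def restore_features(frame, feature_info):
    """feature_info: {object_id: {(y, x): original pixel value}}."""
    restored = frame.copy()
    for pixels in feature_info.values():
        for (y, x), value in pixels.items():
            restored[y, x] = value  # write the original pixel back
    return restored

blurred = np.zeros((4, 4), dtype=np.uint8)          # feature-removed frame
info = {"face_1": {(1, 1): 200, (1, 2): 180}}       # stored feature image info
out = restore_features(blurred, info)
print(out[1, 1], out[1, 2])  # 200 180
```

Because the stored information covers all or only part of an object's features, restoration can be selective per request rather than all-or-nothing.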
METHOD FOR AIDING THE MANOEUVRING OF AN AUTOMOTIVE VEHICLE AND AUTOMOTIVE LIGHTING DEVICE
A method for aiding the maneuvering of an automotive vehicle. The method includes the steps of projecting a light pattern towards a projection zone of the ground surface, acquiring an image of the automotive vehicle and the light pattern from an image sensor and providing the acquired image to a user of the automotive vehicle. The projection zone includes at least one of a first virtual rectangle, a second virtual rectangle and/or a third virtual rectangle, defined with respect to a rear point of the car body, a front point of the car body and the contact points between the wheels and the ground surface. The invention also provides an automotive lighting device for performing the steps of such a method.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
The present technology relates to an information processing apparatus, an information processing method, and a program that allow a sensing time of a sensor to be easily and accurately determined. An information processing apparatus includes a control circuit that outputs a control signal for controlling a sensing timing of a sensor, a counter that updates a counter value in a predetermined cycle, and an addition circuit that adds, to sensor data output from the sensor, sensing time information including a first counter value, a second counter value, and a GNSS (Global Navigation Satellite System) time in a GNSS. The first counter value is obtained when the control signal is output from the control circuit, and the second counter value is obtained when a pulse signal synchronous with the GNSS time is output from a GNSS receiver. The present technology can be applied to, for example, a vehicle-mounted camera.
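The timestamping scheme above can be sketched numerically: the free-running counter is sampled once when the control signal triggers the sensor (first counter value) and once when the GNSS pulse-per-second arrives (second counter value), so the trigger instant can be expressed in GNSS time. The function names and the 1 kHz tick rate are assumptions for illustration.

```python
# Sketch: derive the GNSS time of a sensor trigger from two counter samples.
TICK_HZ = 1000  # assumed counter update rate (ticks per second)

def sensing_time(trigger_count, pps_count, gnss_time_at_pps):
    """GNSS time at which the sensor trigger (control signal) occurred.

    trigger_count:    counter value sampled at the control signal (first value)
    pps_count:        counter value sampled at the GNSS pulse (second value)
    gnss_time_at_pps: GNSS time corresponding to that pulse
    """
    return gnss_time_at_pps + (trigger_count - pps_count) / TICK_HZ

# PPS arrived at counter 5000, marking GNSS time 1700000000.0 s;
# the control signal fired at counter 5250, i.e. 0.25 s later.
print(sensing_time(5250, 5000, 1700000000.0))  # 1700000000.25
```

Only counter differences matter, so the counter itself never needs to be synchronized to GNSS time.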
DISPLAY SYSTEM AND DISPLAY METHOD
A display apparatus (10) of a display system (100) generates a map of a shot region based on video information, and acquires information on a shooting position of each scene in the video information on the map. Then, when receiving specification of a shooting position on the map through a user's operation, the display apparatus (10) searches for information on a scene in the video information shot at the shooting position using the information on the shooting position, and outputs found information on the scene.
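The search step above — specifying a position on the map and retrieving scenes shot there — can be sketched with an assumed scene record layout. The field names, positions, and search radius are illustrative, not from the patent.

```python
# Sketch: look up scenes in the video information by shooting position.
import math

scenes = [
    {"scene": "entrance", "time": 0.0, "pos": (0.0, 0.0)},
    {"scene": "hallway", "time": 12.5, "pos": (5.0, 1.0)},
]

def find_scenes(query_pos, radius=1.0):
    """Return scenes whose shooting position lies within radius of query_pos."""
    return [s for s in scenes if math.dist(s["pos"], query_pos) <= radius]

print([s["scene"] for s in find_scenes((4.5, 1.0))])  # ['hallway']
```

The map generated from the video information supplies the positions; the user's click on the map supplies `query_pos`.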
SURVEILLANCE SYSTEM, SURVEILLANCE APPARATUS, SURVEILLANCE METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A surveillance apparatus (100) includes a feature value storage apparatus (200) that associates and stores a feature value of a person belonging to the same group, a detection unit (102) that detects an approach of a person not belonging to the same group to the person belonging to the same group within a reference distance by processing a captured image by using the feature value, and an output unit (104) that performs a predetermined output by using a detection result of the detection unit (102).
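The detection step above can be sketched as a proximity check: given tracked positions and a group membership derived from the stored feature values, flag any person outside the group who comes within the reference distance of a group member. All names, positions, and the threshold value are assumptions for illustration.

```python
# Sketch: detect an outsider approaching a group member within a reference distance.
import math

REFERENCE_DISTANCE = 2.0  # assumed threshold, in metres

def detect_approach(positions, group_ids, group):
    """Return (outsider, member) pairs closer than the reference distance.

    positions: {person_id: (x, y)} from processing the captured image
    group_ids: {person_id: group label or None} matched via stored feature values
    """
    alerts = []
    for pid, pos in positions.items():
        if group_ids.get(pid) == group:
            continue                      # skip group members as candidates
        for mid, mpos in positions.items():
            if group_ids.get(mid) != group:
                continue                  # compare only against group members
            if math.dist(pos, mpos) < REFERENCE_DISTANCE:
                alerts.append((pid, mid))
    return alerts

positions = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (10.0, 0.0)}
group_ids = {"a": "family1", "b": None, "c": None}
print(detect_approach(positions, group_ids, "family1"))  # [('b', 'a')]
```

The output unit would consume such pairs to perform the predetermined output, for example raising an alert.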