G06T7/20

ADAPTIVE SUB-PIXEL SPATIAL TEMPORAL INTERPOLATION FOR COLOR FILTER ARRAY

The present disclosure describes devices and methods for generating RGB images from Bayer filter images using adaptive sub-pixel spatiotemporal interpolation. An electronic device includes a processor configured to: estimate green values at red and blue pixel locations of an input Bayer frame based on green values at green pixel locations of the input Bayer frame and a kernel for green pixels; generate a green channel of a joint demosaiced-warped output RGB pixel from the input Bayer frame based on the green values at the green pixel locations, the kernel for green pixels, and an alignment vector map; and generate red and blue channels of the joint demosaiced-warped output RGB pixel from the input Bayer frame based on the estimated green values at the red and blue pixel locations, kernels for red and blue pixels, and the alignment vector map.
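
A minimal sketch of the first step the abstract describes: estimating green at non-green Bayer sites from neighbouring green samples. This toy version uses a fixed four-neighbour averaging kernel on an RGGB mosaic; the disclosure's adaptive kernels and alignment-vector-map warping are not modelled here, and the function name is hypothetical.

```python
def green_at(bayer, r, c):
    """Green estimate at (r, c) of an RGGB Bayer mosaic (list of lists).

    In an RGGB layout, green samples sit where row + col is odd; at red
    and blue sites we average the available 4-connected green neighbours
    (a fixed kernel standing in for the disclosure's adaptive kernel).
    """
    if (r + c) % 2 == 1:  # already a green site
        return bayer[r][c]
    h, w = len(bayer), len(bayer[0])
    neigh = [bayer[rr][cc]
             for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
             if 0 <= rr < h and 0 <= cc < w]
    return sum(neigh) / len(neigh)
```

On a uniform green field the estimate reproduces the green value exactly; edge pixels simply average fewer neighbours.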

IDENTIFICATION OF SPURIOUS RADAR DETECTIONS IN AUTONOMOUS VEHICLE APPLICATIONS
20230046274 · 2023-02-16

The described aspects and implementations enable fast and accurate verification of radar detections of objects in autonomous vehicle (AV) applications using combined processing of radar data and camera images. In one implementation, disclosed are a method and a system to perform the method, which includes: obtaining radar data characterizing the intensity of radar reflections from an environment of the AV; identifying, based on the radar data, a candidate object; obtaining a camera image depicting a region where the candidate object is located; and processing the radar data and the camera image using one or more machine-learning models to obtain a classification measure representing the likelihood that the candidate object is a real object.
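
The fusion step can be sketched as a toy late-fusion combiner: two scalar features (stand-ins for the outputs of the disclosure's machine-learning models over radar and camera inputs) are combined by a logistic function into a classification measure in (0, 1). The weights, bias, and heuristic radar score are all illustrative assumptions, not the patent's actual models.

```python
import math

def radar_score(intensities):
    """Toy radar feature: mean reflection intensity of the candidate
    (a placeholder for a learned radar-branch embedding)."""
    return sum(intensities) / len(intensities)

def fuse(radar_feat, camera_feat, w_r=1.0, w_c=1.0, bias=-1.0):
    """Logistic late fusion of the two branch scores into a likelihood
    that the candidate is a real object (hypothetical weights)."""
    z = w_r * radar_feat + w_c * camera_feat + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Strong agreement between the radar and camera branches pushes the measure toward 1; weak evidence from both keeps it low.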

APPARATUS AND METHOD WITH IMAGE RECOGNITION-BASED SECURITY

An apparatus and a method with image recognition-based security are disclosed. For an unlocked terminal, the method includes: tracking a face detected in a previous frame; detecting a background region change between the previous frame and a current frame based on a region of the tracked face; when the background region change is not detected, determining whether a state maintenance time fails to meet a preset time; in response to the state maintenance time failing to meet the preset time, determining an operation mode to be a first operation mode for determining whether recognition succeeds for the current frame; and performing the first operation mode, including performing face detection with respect to the current frame and maintaining the unlocked state of the terminal for the current frame when the face is detected, which represents that recognition succeeded for the current frame.
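
The first-operation-mode branch of the abstract can be condensed into a small decision function: with an unchanged background and a state-maintenance time still below the preset, face detection on the current frame decides whether the unlocked state persists. This is a hedged sketch of that one branch only; the behaviour of the other branches (returning `False` here) is an assumption, not part of the abstract.

```python
def keep_unlocked(background_changed, state_time, preset_time, face_detected):
    """Sketch of the first operation mode: when the background region is
    unchanged and the state maintenance time has not met the preset time,
    run face detection on the current frame and keep the terminal unlocked
    if the face is found (recognition success)."""
    if not background_changed and state_time < preset_time:
        return face_detected
    return False  # other branches of the method are not modelled here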

Method and system for detecting physical presence

A method including providing a sensor device including one or more sensors. The sensor device is arranged to perform at least one high-power type measurement and at least one low-power type measurement, and includes at least one image sensor arranged to depict a person by a measurement of said high-power type. Each of the low-power type measurements requires less electric power over time than each of the high-power type measurements. The method includes detecting a potential presence of the person using at least one of said low-power type measurements. The method includes producing, using one of the high-power type measurements, an image depicting the person and detecting a presence of the person based on image analysis of the image. The method includes detecting, using at least one of the low-power type measurements, a maintained presence of the person. The method includes failing to detect a maintained presence of the person.
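
The power-tiered flow above can be sketched as a small event sequence: a cheap low-power check gates the single expensive image-based confirmation, after which low-power checks track the maintained presence until they fail. The event names and the boolean-stream interface are illustrative assumptions.

```python
def presence_sequence(low_power_hits, image_shows_person):
    """Sketch of the tiered detection flow.

    low_power_hits: iterable of booleans from the low-power sensor over time.
    image_shows_person: result of the one high-power image measurement
    (a stand-in for the image-analysis step).
    Returns the ordered list of detection events.
    """
    events = []
    it = iter(low_power_hits)
    for hit in it:                    # stage 1: low-power potential presence
        if hit:
            events.append("potential_presence")
            break
    else:
        return events                 # nothing ever triggered
    if image_shows_person:            # stage 2: high-power confirmation
        events.append("presence_confirmed")
        for hit in it:                # stage 3: low-power maintained presence
            if not hit:
                events.append("presence_lost")
                break
    return events
```

The high-power measurement runs at most once per sequence, which is the point of the tiering: the image sensor wakes only after the cheap sensor reports something.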

Method and system for determining a current gaze direction
11579687 · 2023-02-14

A method for determining a current gaze direction of a user in relation to a three-dimensional (“3D”) scene, the 3D scene being sampled by a rendering function to produce a two-dimensional (“2D”) projection image of the 3D scene, the sampling being performed based on a virtual camera associated with a camera position and a camera direction in the 3D scene. The method includes determining, by a gaze direction detection means, a first gaze direction of the user related to the 3D scene at a first gaze time point. The method includes determining a time-dependent virtual camera 3D transformation representing a change of the virtual camera position and/or virtual camera direction between the first gaze time point and a second sampling. The method includes determining the current gaze direction as a modified gaze direction calculated based on the first gaze direction and an inverse of the time-dependent virtual camera 3D transformation.
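
A toy 2D reduction of the correction step: if the virtual camera yawed by some angle between the first gaze time point and the current sampling, applying the inverse rotation to the first gaze direction yields the gaze direction relative to the new camera pose. The disclosure's transformation is a full 3D position/direction change; this sketch keeps only a planar yaw rotation, and the function names are hypothetical.

```python
import math

def rotate(vec, angle):
    """Rotate a 2D vector by `angle` radians (counter-clockwise)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = vec
    return (c * x - s * y, s * x + c * y)

def current_gaze(first_gaze, camera_delta):
    """Apply the inverse of the camera's rotation since the first gaze
    time point, per the abstract's modified-gaze-direction step."""
    return rotate(first_gaze, -camera_delta)
```

For example, if the camera yawed 90° left, a gaze that pointed straight ahead at the first time point now points 90° right relative to the camera.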
