Patent classifications
G06V10/143
INFRARED LIGHT-GUIDED PORTRAIT RELIGHTING
An imaging system includes a processor, a memory, a visible light camera configured to record a first image of a scene, and an infrared camera configured to record a second image of the scene. The processor is configured to execute instructions stored in the memory to input the first image and the second image into a neural network. The neural network relights the first image, based on characteristics of the second image, to correspond to an image of the scene under canonical illumination conditions.
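The abstract's two-camera input stage can be sketched as follows. This is an illustrative assumption about the network's input layout (stacking the infrared frame as a fourth channel); the patent does not specify how the two images are combined, and the function name is hypothetical.

```python
import numpy as np

def prepare_relighting_input(rgb_image, ir_image):
    """Stack a visible-light image (H, W, 3) and an infrared image (H, W)
    into one (H, W, 4) array to serve as the neural network's input.
    The layout is illustrative, not taken from the patent."""
    if rgb_image.shape[:2] != ir_image.shape[:2]:
        raise ValueError("visible and infrared frames must cover the same scene size")
    return np.concatenate([rgb_image, ir_image[..., np.newaxis]], axis=-1)

rgb = np.zeros((4, 4, 3), dtype=np.float32)  # first image: visible light
ir = np.ones((4, 4), dtype=np.float32)       # second image: infrared
net_input = prepare_relighting_input(rgb, ir)
print(net_input.shape)  # (4, 4, 4)
```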
DEVICE FOR DISPLAYING INFORMATION AND FOR CAPTURING AUTOPODIAL IMPRESSIONS
A device for displaying information and for capture of prints of a plurality of skin areas of human autopodia by means of reflection, comprising: a placement surface for applying the autopodia, a touch-sensitive layer, an LC unit with pixels which are individually controllable by a control unit, an illumination unit with a transparent light-guide-layer body, first and second illumination means, and an optical sensor layer with sensor elements below the light-guide-layer body. The first illumination means emits diffuse light in a first wavelength range, and the second emits directed light in a predefined angular range and in a second wavelength range. The sensor elements are sensitive to light of the second wavelength range. The pixels are switchable between a state which is transparent to the diffuse light and directed light and a state which is opaque to the diffuse light, and are illuminated by the diffuse light for displaying information.
TYPING BIOLOGICAL CELLS
A system for typing biological cells includes a tunable Fabry-Perot etalon, an imaging sensor, and a processor. The imaging sensor acquires one or more images of one or more biological cells from light transmitted through the tunable Fabry-Perot etalon. Each image represents a signal associated with one or more wavelengths transmitted through the tunable Fabry-Perot etalon. The processor is configured to determine a type of each of the one or more biological cells. Determining the type uses a machine learning algorithm and is based at least in part on one or more of an image segmentation, a patch extraction, a feature extraction, a feature compression, a deep feature extraction, a feature fusion, a feature classification, and a prediction map reconstruction.
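The staged processing the abstract enumerates (segmentation, patch extraction, feature extraction, classification, and so on) can be sketched as a simple chained pipeline. All stage implementations below are toy stand-ins; only the stage names mirror the abstract, and the thresholds and labels are invented for illustration.

```python
from functools import reduce
import numpy as np

# Toy stand-ins for the abstract's stages; real implementations would differ.
def segment(image):            return image > image.mean()          # image segmentation
def extract_patches(mask):     return [mask[:2, :2], mask[2:, 2:]]  # patch extraction
def extract_features(patches): return [p.sum() for p in patches]    # feature extraction
def classify(features):        return ["type_A" if f > 1 else "type_B" for f in features]

def run_pipeline(image, stages):
    """Chain the stages in order, feeding each stage's output to the next."""
    return reduce(lambda data, stage: stage(data), stages, image)

image = np.arange(16.0).reshape(4, 4)  # fake single-wavelength etalon image
cell_types = run_pipeline(image, [segment, extract_patches, extract_features, classify])
print(cell_types)  # ['type_B', 'type_A']
```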
Natural human-computer interaction for virtual personal assistant systems
Technologies for natural language interactions with virtual personal assistant systems include a computing device configured to capture audio input, distort the audio input to produce a number of distorted audio variants, and perform speech recognition on the audio input and the distorted audio variants. The computing device selects a result from a large number of potential speech recognition results based on contextual information. The computing device may measure a user's engagement level by using an eye tracking sensor to determine whether the user is visually focused on an avatar rendered by the virtual personal assistant. The avatar may be rendered in a disengaged state, a ready state, or an engaged state based on the user engagement level. The avatar may be rendered as semitransparent in the disengaged state, and the transparency may be reduced in the ready state or the engaged state. Other embodiments are described and claimed.
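The context-based selection step described above, picking one result from many candidates produced by recognizing the original and distorted audio, can be sketched as below. The word-overlap scoring heuristic is an assumption for illustration; the patent does not disclose a specific scoring function.

```python
def select_result(candidates, context_vocabulary):
    """Pick the speech-recognition candidate that overlaps most with the
    contextual vocabulary. A toy stand-in for the contextual selection step;
    the scoring heuristic is assumed, not the claimed method."""
    def context_score(text):
        return len(set(text.lower().split()) & context_vocabulary)
    return max(candidates, key=context_score)

# candidates as if produced by recognizing the original and distorted audio
candidates = ["play the ship", "play the chip", "play the clip"]
context = {"video", "clip", "watch"}  # e.g. the user is browsing videos
best = select_result(candidates, context)
print(best)  # play the clip
```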
Electronic Monitoring System Using Video Notification
A camera-based monitoring system is provided that, upon generation of an alert or notification, can provide a video clip formed from multiple frames or images to the notification system of a user-accessible monitoring device, such as a cell phone, to make it easy to detect an object that is moving in the camera's field of view. Since the human eye is extremely sensitive to motion, the “triggering object” whose activities triggered image acquisition can be identified more easily, rapidly, and reliably from the video clip than from a still image. In addition to including the camera and detector(s), the system may include a base station and a controller. A method of operating such an electronic monitoring system also is disclosed.
Image processing apparatus and method extracting second RGB and ToF feature points having a correlation between the first RGB and ToF feature points
An image processing apparatus and method include: extracting a second RGB feature point and a second ToF feature point such that a correlation between the first RGB feature point and the first ToF feature point is equal to or greater than a predetermined value; calculating an error value between the second RGB feature point and the second ToF feature point; updating pre-stored calibration data when the error value is greater than a threshold value, and calibrating the RGB image and the ToF image by using the updated calibration data; and synthesizing the calibrated RGB and ToF images.
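The error-then-update logic in the abstract can be sketched as follows. The mean point distance as the error value and the mean-displacement update rule are both hypothetical placeholders; the patent does not specify either formula.

```python
import numpy as np

def maybe_recalibrate(rgb_points, tof_points, calibration, threshold):
    """Compute the mean distance between matched second RGB and ToF feature
    points and refresh the calibration when it exceeds the threshold.
    The update rule (adding the mean displacement) is a placeholder."""
    rgb_points = np.asarray(rgb_points, dtype=float)
    tof_points = np.asarray(tof_points, dtype=float)
    error = float(np.linalg.norm(rgb_points - tof_points, axis=1).mean())
    if error > threshold:
        # hypothetical update: shift the stored offset by the mean displacement
        calibration = calibration + (rgb_points - tof_points).mean(axis=0)
    return calibration, error

calib = np.zeros(2)  # pre-stored calibration offset (x, y)
rgb_pts = [(10.0, 10.0), (20.0, 20.0)]
tof_pts = [(9.0, 10.0), (19.0, 20.0)]
calib, err = maybe_recalibrate(rgb_pts, tof_pts, calib, threshold=0.5)
print(err)    # 1.0
print(calib)  # [1. 0.]
```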
System for detecting surface type of object and artificial neural network-based method for detecting surface type of object
An artificial neural network-based method for detecting a surface type of an object includes: receiving a plurality of object images, wherein a plurality of spectra of the plurality of object images are different from one another and each of the object images has one of the spectra; transforming each object image into a matrix, wherein the matrix has a channel value that represents the spectrum of the corresponding object image; and executing a deep learning program by using the matrices to build a predictive model for identifying a target surface type of the object. Accordingly, the speed of identifying the target surface type of the object is increased, further improving the product yield of the object.
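The transform step above, turning each single-spectrum object image into a matrix carrying a channel value that encodes its spectrum, can be sketched as below. Encoding the spectrum as a normalised-wavelength channel is an assumption made for illustration; the patent does not define the channel encoding.

```python
import numpy as np

def images_to_matrices(images, spectra_nm):
    """Attach a channel encoding each object image's spectrum, yielding the
    matrices that would feed the deep-learning program. The encoding
    (wavelength in nm divided by 1000 as an extra channel) is assumed."""
    matrices = []
    for img, wavelength in zip(images, spectra_nm):
        spectrum_channel = np.full(img.shape + (1,), wavelength / 1000.0)
        matrices.append(np.concatenate([img[..., np.newaxis], spectrum_channel], axis=-1))
    return matrices

images = [np.zeros((2, 2)), np.ones((2, 2))]                  # one image per spectrum
matrices = images_to_matrices(images, spectra_nm=[450, 850])  # blue, near-infrared
print(matrices[0].shape)     # (2, 2, 2)
print(matrices[1][0, 0, 1])  # 0.85
```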