Patent classifications
H04N13/207
Medical environment monitoring system
A system and a method are described for monitoring a medical care environment. In one or more implementations, a method includes identifying a first subset of pixels within a field of view of a camera as representing a bed. The method also includes identifying a second subset of pixels within the field of view of the camera as representing an object (e.g., a subject such as a patient or medical personnel; a bed; a chair; a patient tray; medical equipment; etc.) proximal to the bed. The method also includes determining an orientation of the object within the bed.
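The method above turns on two pixel-segmentation steps and an orientation estimate. As a loose illustration only (the patent does not disclose an implementation; the depth thresholds, mask construction, and PCA-based orientation below are assumptions), a minimal sketch in Python:

```python
import numpy as np

def object_orientation(mask: np.ndarray) -> float:
    """Estimate the principal-axis angle (radians) of a binary pixel mask via PCA."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)            # center the pixel coordinates
    cov = coords.T @ coords / len(coords)    # 2x2 covariance of the mask pixels
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    major = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue
    return float(np.arctan2(major[1], major[0]))

# Synthetic 240x320 depth frame (meters); the plane at 2.0 m stands in for the floor.
depth = np.full((240, 320), 2.0)
depth[80:200, 100:260] = 1.4                 # raised rectangular region ("bed")
depth[100:140, 120:240] = 1.2                # elongated region on the bed ("object")

bed_mask = (depth < 1.5) & (depth >= 1.3)    # first subset of pixels: the bed
object_mask = depth < 1.3                    # second subset: object proximal to the bed

angle = np.degrees(object_orientation(object_mask))
print(f"bed pixels: {bed_mask.sum()}, object pixels: {object_mask.sum()}")
print(f"object orientation: {angle:.1f} degrees")
```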
IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING DEVICE
Image processing includes: obtaining an image I[0,0] of a picture captured by an image capture means while light is irradiated onto the picture from a light source at a reference position relative to a normal line of the picture; obtaining an image I[α1,0] of the picture captured by the image capture means while the light is irradiated onto the picture from the light source at a position inclined from the reference position by an angle α1 in a first direction; obtaining an image I[0,β1] of the picture captured by the image capture means while the light is irradiated onto the picture from the light source at a position inclined from the reference position by an angle β1 in a second direction different from the first direction; creating a three-dimensional map of the picture using a set of images I[0,β1] and I[0,β2]; merging at least a part of each of image I[α1,0], image I[0,β1], and image I[0,β2] with image I[0,0] as an emphasizing process; and recording the image subjected to the emphasizing process as two-dimensional image data.
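As a hedged sketch of the merging step only (the weighting scheme and gain below are assumptions, not the patent's disclosed emphasizing process; the array names merely mirror the abstract's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Stand-ins for the captures named in the abstract: one reference-lit image and
# images lit from positions inclined by α1 (first direction) and β1, β2 (second direction).
I_00   = rng.uniform(0.4, 0.6, (h, w))                            # reference illumination
I_a1_0 = I_00 + 0.1 * np.sin(np.linspace(0, np.pi, w))[None, :]   # raking light, direction 1
I_0_b1 = I_00 + 0.1 * np.sin(np.linspace(0, np.pi, h))[:, None]   # raking light, direction 2
I_0_b2 = I_00 - 0.1 * np.sin(np.linspace(0, np.pi, h))[:, None]   # opposite inclination

# Assumed emphasizing process: add the deviations of the angled captures from the
# reference image back onto the reference, boosting direction-dependent shading.
gain = 1.5
emphasized = I_00 + gain * ((I_a1_0 - I_00) + (I_0_b1 - I_00) + (I_0_b2 - I_00))
emphasized = np.clip(emphasized, 0.0, 1.0)

# Record as two-dimensional image data (an 8-bit array here).
image_2d = (emphasized * 255).astype(np.uint8)
print(image_2d.shape, image_2d.dtype)
```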
OPTICAL TRANSMITTING APPARATUS AND ELECTRONIC DEVICE
An optical transmitting apparatus is disclosed. In the apparatus, an array light source includes M*N light sources, and an included angle between any column of light sources in the N columns of light sources and any row of light sources in the M rows of light sources is a preset angle. The array light source is located on a first side of a collimating lens, a plane on which the array light source is located is perpendicular to an optical axis of the collimating lens, and a distance between the plane on which the array light source is located and a center point of the collimating lens is a focal length of the collimating lens. A rotatable scanning mirror is located on a second side of the collimating lens, and a center point of a reflective surface of the scanning mirror is on the optical axis of the collimating lens.
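Because the emitters sit in the collimating lens's focal plane, each emitter's lateral offset maps to a collimated beam direction of roughly atan(offset / focal length). A small worked sketch under assumed numbers (the focal length, pitch, and preset angle below are illustrative, not from the patent):

```python
import numpy as np

f_mm = 10.0         # assumed focal length of the collimating lens
pitch_mm = 0.5      # assumed emitter pitch
preset_deg = 85.0   # assumed included angle between column and row directions
M, N = 4, 4

# Emitter positions: rows run along x; columns make the preset included
# angle with the rows, as the abstract describes.
rows, cols = np.meshgrid(np.arange(M) - (M - 1) / 2,
                         np.arange(N) - (N - 1) / 2, indexing="ij")
phi = np.radians(preset_deg)
row_dir = np.array([1.0, 0.0])
col_dir = np.array([np.cos(phi), np.sin(phi)])
pos = (cols[..., None] * row_dir + rows[..., None] * col_dir) * pitch_mm

# A point source in the lens's focal plane exits as a collimated beam whose
# direction is set by its lateral offset: angle = atan(offset / focal_length).
beam_x_deg = np.degrees(np.arctan2(pos[..., 0], f_mm))
beam_y_deg = np.degrees(np.arctan2(pos[..., 1], f_mm))
print(np.round(beam_x_deg, 2))
print(np.round(beam_y_deg, 2))
```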
MULTI-VIEW NEURAL HUMAN RENDERING
An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
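Illustrating only the "two-dimensional feature map for a target camera" step (the pinhole projection, feature dimension, and z-buffering below are assumptions; the patent's descriptor extraction and anti-aliased CNN decoder are not reproduced), a minimal splatting sketch:

```python
import numpy as np

def splat_features(points, feats, K, H, W):
    """Project 3D points into a pinhole camera and z-buffer their features
    into an H x W x C feature map (the nearest point wins per pixel)."""
    C = feats.shape[1]
    fmap = np.zeros((H, W, C))
    zbuf = np.full((H, W), np.inf)
    uvw = (K @ points.T).T                 # pinhole projection with intrinsics K
    z = uvw[:, 2]
    u = (uvw[:, 0] / z).astype(int)
    v = (uvw[:, 1] / z).astype(int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for ui, vi, zi, fi in zip(u[ok], v[ok], z[ok], feats[ok]):
        if zi < zbuf[vi, ui]:              # keep the feature of the closest point
            zbuf[vi, ui] = zi
            fmap[vi, ui] = fi
    return fmap

rng = np.random.default_rng(1)
points = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 3.0], (500, 3))   # toy point cloud
feats = rng.normal(size=(500, 8))                                    # 8-dim descriptors
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])            # target camera intrinsics
fmap = splat_features(points, feats, K, 64, 64)
print(fmap.shape, (np.abs(fmap).sum(axis=2) > 0).sum(), "pixels covered")
```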
PLANAR OBJECT SEGMENTATION
Robots might interact with planar objects (e.g., garments) for process automation, quality control, sewing operations, or the like. It is recognized herein that robots interacting with such planar objects can pose particular problems, for instance problems related to detecting the planar object and estimating the pose of the detected planar object. A system can be configured to detect or segment planar objects, such as garments. The system can include a three-dimensional (3D) sensor positioned to detect a planar object along a transverse direction. The system can further include a first surface that supports the planar object. The first surface can be positioned such that the planar object is disposed between the first surface and the 3D sensor along the transverse direction. In various examples, the 3D sensor is configured to detect the planar object without detecting the first surface.
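One way such a segmentation could work, sketched under the assumption that the support surface sits at a roughly constant depth and the planar object lies slightly closer to the sensor (the margin and synthetic data are illustrative, not from the source):

```python
import numpy as np

# Synthetic depth image (meters) along the transverse direction:
# the first (support) surface at 1.00 m, a planar object (garment) a few mm closer.
depth = np.full((120, 160), 1.00)
depth += np.random.default_rng(2).normal(0, 0.001, depth.shape)  # sensor noise
depth[30:90, 40:120] -= 0.004                                    # garment region

surface_depth = np.median(depth)      # estimate the support-surface depth
margin = 0.002                        # anything this much nearer is the object

garment_mask = depth < (surface_depth - margin)
print("segmented pixels:", garment_mask.sum())
```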
IMAGE SENSORS AND SENSING METHODS TO OBTAIN TIME-OF-FLIGHT AND PHASE DETECTION INFORMATION
Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
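For context on how i-ToF phase signals become depth, the sketch below uses standard four-tap demodulation; this is textbook i-ToF, not this patent's specific pixel design, and the modulation frequency is an assumed example:

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
f_mod = 20e6               # assumed modulation frequency, 20 MHz

def itof_depth(q0, q90, q180, q270):
    """Standard four-tap i-ToF demodulation: correlation samples at
    0/90/180/270 degrees yield the round-trip phase, hence depth."""
    phase = np.arctan2(q90 - q270, q0 - q180)     # phase in (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)              # wrap to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)        # depth within the ambiguity range

# Simulate the four correlation samples for a target at 2.5 m.
true_depth = 2.5
true_phase = 4 * np.pi * f_mod * true_depth / C
taps = [np.cos(true_phase - np.radians(a)) for a in (0, 90, 180, 270)]
print(f"recovered depth: {itof_depth(*taps):.3f} m")
```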
METHOD AND APPARATUS FOR COLOUR IMAGING A THREE-DIMENSIONAL STRUCTURE
A device for determining the surface topology and associated color of a structure, such as a teeth segment, includes a scanner for providing depth data for points along a two-dimensional array substantially orthogonal to the depth direction, and an image acquisition means for providing color data for each of the points of the array, while the spatial disposition of the device with respect to the structure is maintained substantially unchanged. A processor combines the color data and depth data for each point in the array, thereby providing a three-dimensional color virtual model of the surface of the structure. A corresponding method for determining the surface topology and associated color of a structure is also provided.
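A rough sketch of the combining step, assuming one depth value and one RGB sample per point of the two-dimensional array and a simple pinhole-style back-projection (the intrinsics and toy values below are illustrative, not the device's):

```python
import numpy as np

H, W = 64, 64
fx = fy = 120.0
cx, cy = W / 2, H / 2

rng = np.random.default_rng(3)
depth = 10.0 + rng.uniform(-0.5, 0.5, (H, W))             # depth per array point (arbitrary units)
color = rng.integers(0, 256, (H, W, 3), dtype=np.uint8)   # color per point of the same array

# Back-project each (u, v, depth) sample to 3D and attach its color, giving a
# colored point set: one (x, y, z, r, g, b) record per point of the array.
v, u = np.mgrid[0:H, 0:W]
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
colors = color.reshape(-1, 3)

model = np.hstack([points, colors.astype(float)])  # three-dimensional color virtual model
print(model.shape)  # (4096, 6)
```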