G06V10/145

METHOD OF IMAGE EVALUATION FOR SIM MICROSCOPY AND SIM MICROSCOPY METHOD

A method of image evaluation when performing SIM microscopy on a sample includes: A) providing n raw images of the sample, which were each generated by illuminating the sample with an individually positioned SIM illumination pattern and imaging the sample in accordance with a point spread function, B) providing (S1) n illumination pattern functions, which each describe one of the individually positioned SIM illumination patterns, C) providing (S1) the point spread function, and D) carrying out an iteration method, which includes the following iteration steps a) to e): a) providing an estimated image of the sample, b) generating simulated raw images, in each case by image processing of the estimated image using the point spread function and one of the n illumination pattern functions, such that n simulated raw images are obtained, c) assigning each of the n simulated raw images to that one of the n provided raw images which was generated by the illumination pattern corresponding to the illumination pattern function used to generate the simulated raw image, and calculating n correction raw images by comparing each provided raw image with the simulated raw image assigned to it, d) generating a correction image by combined image processing of the n correction raw images using the point spread function and the n illumination pattern functions, wherein a filtering step that suppresses a spatial fundamental frequency of the illumination pattern is carried out in each pass through iteration step d), and e) correcting the estimated image of the sample by means of the correction image and using the corrected estimated image of the sample as the estimated image of the sample in iteration step a) of the next run through the iteration.
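Iteration steps a) to e) can be sketched as a Richardson-Lucy-style loop. This is a minimal illustration, not the patented algorithm: it assumes a multiplicative comparison in step c), a simple pattern-weighted combination in step d), and an optional Fourier-domain notch mask standing in for the fundamental-frequency filtering step.

```python
import numpy as np

def sim_reconstruct(raw, patterns, otf, n_iter=10, notch=None):
    """Iterative SIM reconstruction sketch.

    raw, patterns: (n, H, W) arrays of raw images and illumination patterns;
    otf: (H, W) optical transfer function (FFT of the point spread function);
    notch: optional (H, W) Fourier mask suppressing the pattern's spatial
           fundamental frequency (the filtering step in d)).
    """
    est = np.mean(raw, axis=0)                       # a) initial estimate
    for _ in range(n_iter):
        # b) simulate raw images: illuminate the estimate, blur with the PSF
        sim = np.real(np.fft.ifft2(np.fft.fft2(patterns * est) * otf))
        # c) compare each provided raw image with its simulated counterpart
        corr_raw = raw / np.maximum(sim, 1e-12)
        # d) combine the n correction raw images using PSF and patterns
        back = np.real(np.fft.ifft2(np.fft.fft2(corr_raw) * np.conj(otf)))
        corr = (np.sum(patterns * back, axis=0)
                / np.maximum(np.sum(patterns, axis=0), 1e-12))
        if notch is not None:                        # filtering step in d)
            corr = np.real(np.fft.ifft2(np.fft.fft2(corr) * notch))
        est = est * corr                             # e) correct the estimate
    return est
```

With an identity OTF and patterns that sum to a constant, the loop recovers the sample exactly after one pass, which makes the step structure easy to verify.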

Data transmission system having multiple line buffer memories for image sensor

Provided is a data transmission system including an analog image frame buffer, a line analog-to-digital converter, line buffer memories, and an interface. First, the analog image frame buffer stores the image data lines generated by the image sensor as analog signals, and the line analog-to-digital converter, which is electrically connected to the analog image frame buffer, converts the image data lines from analog signals to digital signals. The image data lines converted into digital signals are then stored in one of the line buffer memories. Then, according to the user's needs, an image data line of digital signals is temporarily stored in another line buffer memory. Finally, according to the instructions of the master device, the interface outputs the image data lines of digital signals in the conversion order of the line analog-to-digital converter.
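The buffer flow can be mimicked in software. The sketch below is an assumption-laden toy model (names, 8-bit quantization, and the ping-pong staging are illustrative, not the patented hardware): lines are converted, stored in one line buffer, optionally staged in another, and output in conversion order.

```python
from collections import deque

class LineBufferPipeline:
    """Toy model of: analog frame buffer -> line ADC -> multiple line
    buffer memories -> interface output in ADC conversion order."""

    def __init__(self, n_buffers=2):
        self.buffers = [deque() for _ in range(n_buffers)]
        self.order = deque()              # remembers ADC conversion order

    def adc_convert(self, analog_line, buf=0):
        """Convert one analog image data line to 8-bit digital and store it."""
        digital = [int(v * 255) for v in analog_line]
        self.buffers[buf].append(digital)
        self.order.append(buf)
        return digital

    def stage_oldest(self, src, dst):
        """Temporarily store the oldest line of one buffer in another."""
        self.buffers[dst].append(self.buffers[src].popleft())
        self.order[self.order.index(src)] = dst

    def output(self):
        """Interface: emit lines in the order the ADC converted them."""
        buf = self.order.popleft()
        return self.buffers[buf].popleft()
```

Even after a line is staged in a second buffer, the `order` bookkeeping keeps the interface output aligned with the converter's original sequence.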

Device for monitoring vehicle occupant(s)

A device for monitoring occupants of seats in a passenger compartment of a vehicle comprises a heat sink divided into a plurality of sections. Each of the sections comprises a base and cooling fins. The bases extend along a common axis and define a central niche therebetween. Structured light sources are attached to the sections. The structured light sources have an optical element for forming a structured light pattern. The structured light sources are oriented along the common axis and at oblique angles to the central niche, such that the structured light patterns, in combination, cover the occupants of the seats of the vehicle. A camera is attached to the niche and configured to capture image patterns resulting from distortion of the structured light patterns by the occupants of the seats.

Artificial intelligence apparatus for estimating pose of head and method for the same

Disclosed is an artificial intelligence (AI) apparatus including a two-dimensional (2D) image sensor configured to acquire a 2D image of a head of a person, a three-dimensional (3D) image sensor configured to acquire 3D head pose information of the head, and a processor configured to match the 2D image with the 3D head pose information, to extract 3D head pose information for determining a rotation direction of the head from the 3D head pose information, to extract a 2D image matched with the extracted 3D head pose information, to acquire 3D relative coordinates as a reference for correcting the 3D head pose information based on 2D coordinates of a predetermined landmark point of the extracted 2D image, and to acquire the corrected 3D head pose information of the predetermined landmark point of each 2D image by correcting the 3D head pose information based on the 3D relative coordinates.
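One way to picture the correction step is the sketch below. It is a loose interpretation, not the patented method: each entry pairs a 2D image's predetermined landmark pixel (e.g. a nose tip) with the 3D pose acquired at the same moment, the frame whose landmark lies nearest the image center is taken as the reference, and its pose supplies the 3D relative coordinates against which every pose is corrected. The dictionary keys and the center heuristic are assumptions.

```python
import numpy as np

def correct_head_poses(matched, image_center=(320.0, 240.0)):
    """matched: list of dicts with 'landmark' (2D pixel) and 'pose' (3D).
    Returns each pose corrected relative to the reference frame's pose."""
    ref = min(matched,
              key=lambda m: np.hypot(m["landmark"][0] - image_center[0],
                                     m["landmark"][1] - image_center[1]))
    rel = np.asarray(ref["pose"], dtype=float)   # 3D relative coordinates
    return [np.asarray(m["pose"], dtype=float) - rel for m in matched]
```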

ANALYSIS AND DEEP LEARNING MODELING OF SENSOR-BASED OBJECT DETECTION DATA IN BOUNDED AQUATIC ENVIRONMENTS

Techniques for analysis and deep learning modeling of sensor-based object detection data in bounded aquatic environments are described, including capturing an image from a sensor disposed substantially above a waterline, the sensor being housed in a structure electrically coupled to a light housing, converting the image into data, the data being digitally encoded, evaluating the data to separate background data from foreground data, generating tracking data from the data after the background data is removed, the tracking data being evaluated to determine whether a head or a body is detected by comparing the tracking data to classifier data, tracking the head or the body relative to the waterline if the head or the body is detected in the tracking data, and determining a state associated with the head or the body.
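The background-separation and waterline-tracking pipeline can be sketched as below. This is a simplification under stated assumptions: a static first frame serves as the background model, and a blob-size threshold stands in for the trained classifier data the abstract describes.

```python
import numpy as np

def track_states(frames, waterline_y, diff_thresh=30.0, min_area=20):
    """frames: list of 2-D grayscale arrays; frames[0] is the background.
    Returns one state per subsequent frame: 'none' when no sufficiently
    large foreground blob is found, else 'above'/'below' depending on the
    blob centroid's position relative to the waterline row."""
    background = np.asarray(frames[0], dtype=float)
    states = []
    for frame in frames[1:]:
        # separate foreground from background data
        fg = np.abs(np.asarray(frame, dtype=float) - background) > diff_thresh
        ys, xs = np.nonzero(fg)              # tracking data (pixel positions)
        if ys.size < min_area:
            states.append("none")            # nothing classifier-sized found
            continue
        # track the blob relative to the waterline and determine a state
        states.append("above" if ys.mean() < waterline_y else "below")
    return states
```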

Control method for electronic device, electronic device and computer readable storage medium

A control method for an electronic device and an electronic device are provided. The electronic device includes an infrared emitter, an infrared sensor, and a visible light sensor. The control method includes: acquiring an original pixel position of a zero-level region on the infrared sensor; acquiring a human eye pixel position of a human eye region on the visible light sensor; determining whether a human eye enters the zero-level region according to the original pixel position and the human eye pixel position; and triggering a protection mechanism of the infrared emitter when the human eye enters the zero-level region.
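The decision of whether the human eye enters the zero-level region reduces, in the simplest reading, to a region-overlap test. The sketch below assumes both regions are axis-aligned pixel boxes already mapped into a common coordinate frame (the abstract compares positions acquired on the infrared sensor and on the visible light sensor); the box format and margin parameter are illustrative assumptions.

```python
def eye_in_zero_region(zero_region, eye_region, margin=0):
    """Each region is an (x0, y0, x1, y1) pixel box. Returns True when the
    boxes overlap (within an optional safety margin), i.e. when the
    infrared emitter's protection mechanism should be triggered."""
    zx0, zy0, zx1, zy1 = zero_region
    ex0, ey0, ex1, ey1 = eye_region
    # boxes overlap unless one lies entirely to a side of the other
    return not (ex1 < zx0 - margin or ex0 > zx1 + margin or
                ey1 < zy0 - margin or ey0 > zy1 + margin)
```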

Depth sensing using line pattern generators
11835362 · 2023-12-05

A distance measurement system includes two or more line pattern generators (LPGs), a camera, and a processor. Each LPG emits a line pattern having a first set of dark portions separated by a respective first set of bright portions. A first line pattern has a first angular distance between adjacent bright portions, and a second line pattern has a second angular distance between adjacent bright portions. The camera captures at least one image of the first line pattern and the second line pattern. The camera is a first distance from the first LPG and a second distance from the second LPG. The processor identifies a target object illuminated by the first and second line patterns and determines a distance to the target object based on the appearance of the target object as illuminated by the first and second line patterns.
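Determining a distance from a camera/projector baseline is classically a triangulation problem. The sketch below is not the patented dual-pattern algorithm; it assumes the observed line gives a camera ray angle and an LPG ray angle measured from the baseline, triangulates a range per LPG by the law of sines, and averages the two single-pattern estimates as a stand-in for combining both patterns' appearances.

```python
import math

def triangulate(theta_cam, theta_lpg, baseline):
    """Perpendicular range from the camera/LPG baseline to the target,
    given both ray angles (radians, measured from the baseline)."""
    return (baseline * math.sin(theta_cam) * math.sin(theta_lpg)
            / math.sin(theta_cam + theta_lpg))

def distance_from_two_lpgs(obs_lpg1, obs_lpg2):
    """Each obs is (theta_cam, theta_lpg, baseline) for one line pattern;
    the two single-pattern range estimates are averaged."""
    return 0.5 * (triangulate(*obs_lpg1) + triangulate(*obs_lpg2))
```

For example, with both rays at 45 degrees over a 1 m baseline, each triangulation yields a 0.5 m perpendicular range.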
