Patent classifications
H04N23/90
ELECTRONIC DEVICE COMPRISING MULTI-CAMERA, AND PHOTOGRAPHING METHOD
An electronic device having a multi-camera, according to various embodiments of the present disclosure, includes: the multi-camera, a display, a memory, and a processor operatively connected to the multi-camera, the display, and the memory. The processor may be configured to receive a first image being photographed at a first angle of view of the multi-camera, receive a second image being photographed at a second angle of view of the multi-camera, identify a subject in the first image according to predetermined criteria, generate a third image in which the identified subject is cropped according to a predetermined area of interest, and display the second image and the third image on at least a portion of an area in which the first image is displayed. Various other embodiments may be possible.
METHOD FOR ACQUIRING DISTANCE FROM MOVING BODY TO AT LEAST ONE OBJECT LOCATED IN ANY DIRECTION OF MOVING BODY BY PERFORMING NEAR REGION SENSING AND IMAGE PROCESSING DEVICE USING THE SAME
A method for acquiring a distance from a moving body to an object located in any direction of the moving body includes steps of: an image processing device (a) instructing a rounded cuboid sweep network to project pixels of images, generated by cameras covering all directions of the moving body, onto N virtual rounded cuboids to generate rounded cuboid images and apply a 3D concatenation operation thereon to generate an initial 4D cost volume, (b) instructing a cost volume computation network to generate a final 3D cost volume from the initial 4D cost volume, and (c) generating inverse radius indices, corresponding to inverse radii representing inverse values of separation distances of the N virtual rounded cuboids, by referring to the final 3D cost volume and extracting the inverse radii by using the inverse radius indices, to acquire the separation distances and thus the distance from the moving body to the object.
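Step (c) above amounts to a per-direction argmin over the cost volume followed by taking a reciprocal. The sketch below illustrates that extraction step only, with hypothetical array shapes and a made-up sampling of inverse radii; the patent's actual networks and geometry are not reproduced here.

```python
import numpy as np

def extract_distances(final_cost_volume, inverse_radii):
    """Pick, per viewing direction, the index of the best-matching virtual
    rounded cuboid and convert its inverse radius back to a distance.

    final_cost_volume : (N, H, W) array -- matching cost for each of the
        N virtual rounded cuboids at every direction (H, W); lower = better.
    inverse_radii : (N,) array -- inverse separation distances of the
        N virtual rounded cuboids (hypothetical sampling).
    """
    # Inverse radius indices: per-pixel argmin over the N cuboid hypotheses.
    idx = np.argmin(final_cost_volume, axis=0)   # shape (H, W)
    inv_r = inverse_radii[idx]                   # shape (H, W)
    # Separation distance is the reciprocal of the inverse radius.
    return 1.0 / inv_r

# Toy usage: 4 cuboid hypotheses, a 2x2 grid of directions.
inverse_radii = np.array([1.0, 0.5, 0.25, 0.125])  # i.e. 1 m .. 8 m
cost = np.random.rand(4, 2, 2)
cost[2, 0, 0] = -1.0                               # force cuboid 2 at (0, 0)
dist = extract_distances(cost, inverse_radii)
assert dist[0, 0] == 4.0                           # 1 / 0.25
```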
System and method to simultaneously track multiple organisms at high resolution
A microscopy system includes multiple cameras working together to capture image data of a sample having a group of organisms distributed over a wide area, under the influence of an excitation instrument. A first processor is coupled to each camera to process the image data captured by that camera. Outputs from the multiple first processors are aggregated and streamed serially to a second processor for tracking the organisms. The multiple cameras, configured to capture images of the sample with 50% or more overlap, can allow 3D tracking of the organisms through photogrammetry.
LED And/Or Laser Projection Light Device
The LED and/or laser projection light device has three major projection parts: (a) a light source, (b) an image-forming unit, and (c) a projection/refractive lens, which together produce a desired enlarged projected image, patterns, or light beams. The projection light has at least one inner optics lens or optical element that rotates to create the lighted image, patterns, or light beam emitted through the outer cover. Further, the projection light device preferably has at least one additional built-in function selected from: (i) a second light source for preferred illumination function(s), (ii) glow or back light, (iii) a second or further projection assemblies in one light device, (iv) other light functions, (v) candle light illumination, (vi) bulb illumination, (vii) desktop or floor light illumination, (viii) a battery, rechargeable battery, or built-in/outside AC-to-DC circuit as a power source, (ix) a USB port, adaptor, connector, or AC-plug wire as a power source, and (x) steady, rotating, replaceable, detachable, or movable versions of the three major projection parts.
SYNTHETIC GEOREFERENCED WIDE-FIELD OF VIEW IMAGING SYSTEM
An imaging system for an aircraft is disclosed. A plurality of image sensors are attached, affixed, or secured to the aircraft. Each image sensor is configured to generate sensor-generated pixels based on an environment surrounding the aircraft. Each of the sensor-generated pixels is associated with respective pixel data including position data, intensity data, time-of-acquisition data, sensor-type data, pointing angle data, latitude data, and longitude data. A controller generates a buffer image including synthetic-layer pixels, maps the sensor-generated pixels to the synthetic-layer pixels in the buffer image, fills a plurality of regions of the buffer image with the sensor-generated pixels, and presents the buffer image on a head-mounted display (HMD) to a user of the aircraft.
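The controller's fill step can be pictured as writing each sensor-generated pixel into its mapped location in a synthetic buffer. The sketch below is a minimal stand-in, assuming pixels arrive as simple records with a precomputed buffer row/column; the abstract's richer per-pixel data (time of acquisition, sensor type, pointing angle, latitude, longitude) is not modeled.

```python
import numpy as np

def fill_buffer(buffer_h, buffer_w, pixels):
    """Map sensor-generated pixels into a synthetic-layer buffer image.

    pixels : list of dicts with 'row', 'col', 'intensity' -- a hypothetical
    simplification of the per-pixel record described in the abstract.
    """
    # Synthetic layer: start from a neutral background, then fill the
    # regions for which sensor data exists.
    buf = np.zeros((buffer_h, buffer_w), dtype=np.float32)
    for p in pixels:
        r, c = p["row"], p["col"]
        if 0 <= r < buffer_h and 0 <= c < buffer_w:
            buf[r, c] = p["intensity"]
    return buf

# Toy usage: one sensor pixel mapped into a 4x4 buffer.
frame = fill_buffer(4, 4, [{"row": 1, "col": 2, "intensity": 0.8}])
assert frame[1, 2] == np.float32(0.8)
```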
Devices, systems and methods for predicting gaze-related parameters using a neural network
A method for creating and updating a database is disclosed. In one example, the method includes presenting a first stimulus to a first user wearing a head-wearable device and using a first camera of the head-wearable device to generate a first left image of at least a portion of the left eye of the first user. When the first user is expected to respond to the first stimulus or expected to have responded to the first stimulus, a second camera of the head-wearable device is used to generate a first right image of at least a portion of the right eye of the first user. A data connection is established between the head-wearable device and the database. A first dataset is generated comprising the first left image, the first right image, and a first representation of a gaze-related parameter, the first representation being correlated with the first stimulus, and the first dataset is added to a device database.
Battery efficient wireless network connection and registration for a low-power device
A client device is configured to communicate with an access point over a wireless network, exchanging data with the access point over a selected communication channel. The client device stores an identifier of the selected communication channel. After the wireless connection to the access point has ended, the client device initiates a process to reconnect to the access point over the selected communication channel using the stored identifier.
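The battery saving in the abstract above comes from skipping a full channel scan on reconnect: the client remembers which channel worked and rejoins it directly. The sketch below illustrates that flow with an entirely hypothetical API (no concrete wireless protocol or driver interface is named in the abstract).

```python
class AccessPoint:
    """Stand-in access point: hands out a channel and accepts joins on it."""
    def __init__(self, channel=6):
        self.channel = channel

    def negotiate_channel(self):
        return self.channel

    def join(self, channel):
        return channel == self.channel


class ClientDevice:
    """Minimal sketch of the channel-caching reconnection flow."""
    def __init__(self, store):
        self.store = store          # persistent key-value storage
        self.connected = False

    def connect(self, access_point):
        # Initial connection: negotiate (expensive scan), then remember the
        # channel so later reconnects can skip the scan and save battery.
        channel = access_point.negotiate_channel()
        self.store["channel"] = channel
        self.connected = True
        return channel

    def reconnect(self, access_point):
        # Reconnect directly over the stored channel identifier.
        channel = self.store.get("channel")
        if channel is None:
            return self.connect(access_point)   # fall back to full scan
        self.connected = access_point.join(channel)
        return channel


# Toy usage.
dev = ClientDevice(store={})
ap = AccessPoint(channel=11)
assert dev.connect(ap) == 11        # full association, channel cached
dev.connected = False               # wireless connection has ended
assert dev.reconnect(ap) == 11      # rejoin directly on the stored channel
assert dev.connected
```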
Digital display welding mask with HDR imaging
A display system for a welding helmet that includes a darkening filter layer, a high-dynamic range (HDR) camera system to capture an HDR light field, and an optical image stabilization subsystem. Captured images are displayed on an HDR electronic display within the welding helmet without risk of overexposure of ultraviolet radiation to the operator. In some examples, dual electronic displays are used to display different HDR images to each eye of the operator.
Train wheel detection and thermal imaging system
A system that includes a detection device, an imaging device, and a control device is disclosed. The detection device may generate proximity data relating to a proximity of an undercarriage of a rail vehicle, and the imaging device may capture one or more thermal images of the undercarriage. The control device may receive a first thermal image and a second thermal image of the undercarriage. The first thermal image may be captured using a first integration time, and the second thermal image may be captured using a second integration time. The control device may determine composite thermal data associated with the undercarriage. The composite thermal data may include information mapping a first range of thermal data and mapping a second range of thermal data to one or more components of the undercarriage. The control device may cause an action to be performed in connection with the composite thermal data.
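The composite step above resembles exposure fusion: a long integration time resolves the cooler range well but saturates on hot components, while a short integration time covers the hot range. A minimal sketch, assuming both images are normalized to [0, 1] and using a made-up saturation threshold (the patent does not specify the fusion rule):

```python
import numpy as np

def composite_thermal(short_img, long_img, saturation=0.95):
    """Fuse two thermal images of the same undercarriage captured with
    different integration times.

    long_img covers the cooler range with better signal; wherever it
    saturates, fall back to short_img, which covers the hotter range.
    Returns the composite image and a mask of saturated (hot) pixels.
    """
    saturated = long_img >= saturation
    return np.where(saturated, short_img, long_img), saturated

# Toy usage: one pixel saturates in the long-integration image.
long_img = np.array([[0.2, 1.0], [0.4, 0.99]])
short_img = np.array([[0.1, 0.6], [0.2, 0.55]])
fused, hot = composite_thermal(short_img, long_img)
assert fused[0, 0] == 0.2      # unsaturated: keep long-integration value
assert fused[0, 1] == 0.6      # saturated: take short-integration value
```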