Patent classifications
H04N5/265
Conference device with multi-videostream capability
A conference device comprising a first image sensor for provision of first image data, a second image sensor for provision of second image data, a first image processor configured for provision of a first primary videostream and a first secondary videostream based on the first image data, a second image processor configured for provision of a second primary videostream and a second secondary videostream based on the second image data, and an intermediate image processor in communication with the first image processor and the second image processor and configured for provision of a field-of-view videostream and a region-of-interest videostream, wherein the field-of-view videostream is based on the first primary videostream and the second primary videostream, and wherein the region-of-interest videostream is based on one or more of the first secondary videostream and the second secondary videostream.
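The claimed dataflow (two sensors, each yielding a primary and a secondary stream, merged by an intermediate processor) can be illustrated with a minimal sketch. This is not the patented implementation: the frame representation, the stand-in downscale/crop operations, and all function names are assumptions for illustration only.

```python
# Illustrative sketch (not the patented method): each sensor's image data
# is turned into a primary (downscaled) and secondary (ROI-cropped)
# stream; an intermediate stage stitches the primaries into a
# field-of-view stream and selects a secondary stream for the
# region-of-interest stream. Frames are modeled as plain integers.

def make_streams(image_data):
    """Derive a primary (scaled) and secondary (cropped) frame list."""
    primary = [frame // 2 for frame in image_data]      # stand-in for downscaling
    secondary = [frame + 100 for frame in image_data]   # stand-in for an ROI crop
    return primary, secondary

def intermediate_processor(primary_a, primary_b, secondary_a, secondary_b,
                           roi_source="a"):
    """Combine primaries into an FOV stream; pick a secondary for the ROI."""
    # Field-of-view stream: pair the two primary streams frame by frame.
    fov_stream = list(zip(primary_a, primary_b))
    # Region-of-interest stream: select one of the secondary streams.
    roi_stream = secondary_a if roi_source == "a" else secondary_b
    return fov_stream, roi_stream

pa, sa = make_streams([10, 20, 30])   # first image sensor
pb, sb = make_streams([40, 50, 60])   # second image sensor
fov, roi = intermediate_processor(pa, pb, sa, sb, roi_source="b")
```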
INTELLIGENT SYSTEM FOR CONTROLLING FUNCTIONS IN A COMBAT VEHICLE TURRET
A system for controlling turret functions of a land-based combat vehicle includes: a plurality of image detection sensors for recording sequences of images having an at least partial view of a 360° environment of the land-based combat vehicle; at least one virtual, augmented or mixed reality headset for wear by an operator, the headset presenting the at least partial view of the environment of the land-based combat vehicle on a display, the headset including a direction sensor for tracking an orientation of the headset imparted during a movement of a head of the operator and eye tracking means for tracking eye movements of the operator; a control unit including at least one computing unit for receiving as input and processing: images supplied by the plurality of image detection sensors; headset position and orientation data supplied by the direction sensor; eye position data supplied by the eye tracking means.
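One piece of the described control unit, selecting which sensor's slice of the 360° environment to present based on headset orientation, can be sketched as follows. The sensor count, the yaw convention, and the function name are assumptions, not details from the claim.

```python
# Hedged sketch: map the headset's yaw angle (from the direction sensor)
# to the index of the image-detection sensor whose angular slice of the
# 360-degree environment covers that direction.

def sensor_for_yaw(yaw_degrees, num_sensors=8):
    """Return the index of the sensor covering the headset's current yaw."""
    slice_width = 360.0 / num_sensors          # angular coverage per sensor
    return int((yaw_degrees % 360.0) // slice_width)
```

With eight sensors each slice spans 45°, so a yaw of 95° selects sensor 2 and a yaw of −10° wraps around to sensor 7.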
System And Method For Programming Video
A method for generating video from scratch, including retrieving a template video and reading commands of a predefined programming language, wherein the commands include instructions corresponding to a video action and a timeline at a predefined layer of a media/video element and its properties, together with new parameters including information from external data sources. The video action includes at least one of: a drawing action, object selection, a change of object properties, creating text, a motion action relating to at least one object, background creation, defining a layer of the video, and animation of the object itself. The method includes generating video layers by applying the relevant actions based on the command instructions, scheduled timing, and layer definitions. The method also includes integrating the video template with the generated video layers and rendering the frames to generate a video.
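The layer-generation and integration steps can be sketched with a toy command interpreter. Everything here is an assumption for illustration: commands are modeled as dicts with an action, a layer index, and a time, and "rendering" is reduced to building per-layer timelines merged over a template.

```python
# Minimal sketch of the described pipeline, not the patented method:
# commands are grouped into per-layer timelines ordered by time, then
# overlaid onto the template's existing layers.

def generate_layers(commands):
    """Group command actions into per-layer timelines, ordered by time."""
    layers = {}
    for cmd in sorted(commands, key=lambda c: c["time"]):
        layers.setdefault(cmd["layer"], []).append((cmd["time"], cmd["action"]))
    return layers

def integrate(template_layers, generated_layers):
    """Overlay generated layers onto the template video's layers."""
    merged = dict(template_layers)
    for layer, timeline in generated_layers.items():
        merged[layer] = merged.get(layer, []) + timeline
    return merged

commands = [
    {"action": "create_text", "layer": 1, "time": 2.0},
    {"action": "draw", "layer": 0, "time": 0.0},
    {"action": "move_object", "layer": 1, "time": 1.0},
]
layers = generate_layers(commands)
video = integrate({0: [(0.0, "background")]}, layers)
```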
SYSTEM AND METHOD FOR MULTI-MODAL MICROSCOPY
A system and method for processing multi-modal microscopy imaging data on small-scale computer architecture which avoids restrictive manufacturer data formats and APIs. The system and method leverage a web-based application made available to microscopy instrument control hardware by which direct visual output of the control hardware is captured and transmitted to an edge computing device for processing by one or more inference models in parallel to construct a composite hyperimage.
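The "one or more inference models in parallel" step can be sketched with a thread pool fanning a captured frame out to several models and merging their outputs. The model functions, the frame representation, and the merge rule (a dict of per-model channels standing in for the composite hyperimage) are assumptions for illustration.

```python
# Illustrative sketch only: run several stand-in "inference models" in
# parallel over one captured frame and collect their outputs as channels
# of a composite "hyperimage".

from concurrent.futures import ThreadPoolExecutor

def detect_edges(frame):
    """Toy model: absolute differences between adjacent samples."""
    return [abs(b - a) for a, b in zip(frame, frame[1:])]

def threshold(frame):
    """Toy model: binary mask of samples above a fixed threshold."""
    return [1 if v > 5 else 0 for v in frame]

def build_hyperimage(frame, models):
    """Fan the frame out to each model in parallel; merge the results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in models.items()}
        return {name: fut.result() for name, fut in futures.items()}

frame = [1, 4, 9, 2]                  # stand-in for the captured visual output
hyper = build_hyperimage(frame, {"edges": detect_edges, "mask": threshold})
```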
SYNTHETIC GEOREFERENCED WIDE-FIELD OF VIEW IMAGING SYSTEM
An imaging system for an aircraft is disclosed. A plurality of image sensors are attached, affixed, or secured to the aircraft. Each image sensor is configured to generate sensor-generated pixels based on an environment surrounding the aircraft. Each of the sensor-generated pixels is associated with respective pixel data including position data, intensity data, time-of-acquisition data, sensor-type data, pointing angle data, latitude data, and longitude data. A controller generates a buffer image including synthetic-layer pixels, maps the sensor-generated pixels to the synthetic-layer pixels in the buffer image, fills a plurality of regions of the buffer image with the sensor-generated pixels, and presents the buffer image on a head-mounted display (HMD) to a user of the aircraft.
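The georeferenced mapping step, placing sensor-generated pixels into a buffer image by their latitude/longitude metadata, can be sketched as below. The grid resolution, the field names, and the flat-grid projection are assumptions for illustration, not details from the disclosure.

```python
# Hedged sketch of the pixel-mapping step: each sensor-generated pixel
# carries latitude/longitude data, and the controller places its
# intensity into a georeferenced buffer image. Cells never filled by a
# sensor pixel keep their synthetic-layer value (0 here).

def map_pixels_to_buffer(pixels, lat0, lon0, cell_deg, width, height):
    """Fill a width x height buffer from georeferenced pixels."""
    buffer = [[0] * width for _ in range(height)]   # synthetic-layer pixels
    for px in pixels:
        row = round((px["lat"] - lat0) / cell_deg)
        col = round((px["lon"] - lon0) / cell_deg)
        if 0 <= row < height and 0 <= col < width:  # drop out-of-view pixels
            buffer[row][col] = px["intensity"]
    return buffer

pixels = [
    {"lat": 40.01, "lon": -75.02, "intensity": 200},
    {"lat": 40.02, "lon": -75.03, "intensity": 150},
]
buf = map_pixels_to_buffer(pixels, lat0=40.0, lon0=-75.05, cell_deg=0.01,
                           width=5, height=5)
```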
Privacy-protecting multi-pass street-view photo-stitch
Generating a controllable panoramic image while eliminating unsuitable dynamic elements by receiving a plurality of images of a location from a user device, wherein the plurality of images includes images of a location at various times, identifying an object of one or more images of the plurality of images, wherein the object corresponds to an unsuitable condition for a database, determining a score of the one or more images of the plurality of images based at least in part on the identified object, determining a base image from the one or more images of the plurality of images, and generating a set of replacement images of the location based at least in part on respective determined scores of the one or more images of the plurality of images.
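The scoring-and-selection step can be sketched under stated assumptions: each image record lists detected objects, objects matching an unsuitable condition (e.g. faces or license plates) reduce the score, and the highest-scoring image becomes the base image. The object labels, the penalty formula, and all names are illustrative, not the claimed method.

```python
# Sketch only: score images by counting detected objects that match an
# unsuitable condition, then choose the highest-scoring image as the
# base image for the panoramic stitch.

UNSUITABLE = {"face", "license_plate", "moving_vehicle"}   # assumed labels

def score_image(image):
    """Higher score = fewer unsuitable objects detected in the image."""
    penalties = sum(1 for obj in image["objects"] if obj in UNSUITABLE)
    return 1.0 / (1.0 + penalties)

def choose_base_image(images):
    """Pick the image with the highest suitability score."""
    return max(images, key=score_image)

images = [
    {"id": "t1", "objects": ["tree", "face"]},
    {"id": "t2", "objects": ["tree"]},
    {"id": "t3", "objects": ["face", "license_plate"]},
]
base = choose_base_image(images)
```

Lower-scoring captures of the same location would then be candidates for replacement rather than for the base layer.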