G06F3/0346

Moving Target of Interest Predictive Locating, Reporting, and Alerting
20230046309 · 2023-02-16

Systems and corresponding methods are provided for moving object predictive locating, reporting, and alerting. An example method includes receiving moving object data corresponding to a moving object; receiving sensor data from a sensor; and merging the received moving object data and the received sensor data into a set of merged data. The example method further includes, based on the set of merged data, automatically determining one or more of a predicted location or range of locations for the moving object, a potential path of travel for the moving object, and an alert concerning the moving object; and providing the alert. The automatic determination may be further based on one or more historical traits of the object and on the geographic medium the object is moving through. The geographic medium may include one or more of terrain, air, water, and space. The object may be a soldier, vehicle, drone, or ballistic.
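
The merge-then-predict step described above can be sketched minimally as merging two timestamped observation streams and extrapolating linearly from the latest fixes. All names (`merge_observations`, `predict_location`) and the linear model are illustrative assumptions, not taken from the patent:

```python
def merge_observations(object_data, sensor_data):
    """Merge two lists of timestamped (t, x, y) fixes into one
    time-ordered set of merged data."""
    return sorted(object_data + sensor_data, key=lambda obs: obs[0])

def predict_location(merged, t_future):
    """Linearly extrapolate a predicted (x, y) location at t_future
    from the last two merged fixes."""
    (t0, x0, y0), (t1, x1, y1) = merged[-2], merged[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * (t_future - t1), y1 + vy * (t_future - t1))
```

A real system would replace the linear model with one conditioned on historical traits and the geographic medium (e.g. terrain constraints), as the abstract describes.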

Photodetector activations

An example computing device includes a photodetector to measure an amount of light incident on a detection surface of the photodetector. The example computing device includes a state sensor to activate the photodetector responsive to the computing device being in a detection state. The example computing device also includes a processor. An example processor identifies, during the detection state, a user gesture based on an output of the photodetector. The user gesture blocks light incident on the detection surface of the photodetector. The example processor also alters an operation of the computing device based on the user gesture.
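
A blocking gesture of the kind described can be sketched as a sustained drop in photodetector readings relative to ambient light; the function name, threshold ratio, and run length here are illustrative assumptions:

```python
def detect_block_gesture(samples, ambient, ratio=0.3, min_len=3):
    """Return True if at least min_len consecutive light samples fall
    below ratio * ambient, i.e. the detection surface is covered."""
    run = 0
    for s in samples:
        run = run + 1 if s < ratio * ambient else 0
        if run >= min_len:
            return True
    return False
```

In the described device this check would only run while the state sensor reports the detection state, keeping the photodetector powered down otherwise.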

Double-tap event detection device, system and method

Digital signal processing circuitry, in operation, determines, based on accelerometer data, a carry-position of a device. Double-tap detection parameters are set using the determined carry-position. Double-taps are detected using the set double-tap detection parameters. In response to detection of a double-tap, control signals, such as a flag or an interrupt signal, are generated and used to control operation of the device. For example, a device may enter a wake mode of operation in response to detection of a double-tap.
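
The carry-position-dependent detection can be sketched as selecting a threshold and inter-tap window per position, then scanning accelerometer magnitudes for two taps within that window. The parameter values and names below are illustrative assumptions, not the patent's:

```python
# Hypothetical per-carry-position double-tap parameters.
PARAMS = {
    "hand":   {"threshold": 2.0, "max_gap": 10},  # gap in samples
    "pocket": {"threshold": 3.5, "max_gap": 8},
}

def detect_double_tap(magnitudes, carry_position):
    """Return True if two over-threshold accelerometer samples occur
    within the carry-position's allowed gap."""
    p = PARAMS[carry_position]
    taps = [i for i, m in enumerate(magnitudes) if m > p["threshold"]]
    return any(0 < b - a <= p["max_gap"] for a, b in zip(taps, taps[1:]))
```

On detection, firmware would raise the flag or interrupt signal the abstract mentions, e.g. to wake the device.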

Devices, systems, and methods for multi-device interactions

There is provided a pointing device including a mode switching apparatus that switches the pointing device between a two-dimensional (2D) operational mode and a three-dimensional (3D) operational mode, and a sensor configured to determine a pointing direction of the pointing device and locations of a plurality of computing devices. When in the 2D operational mode, the pointing device is paired with a first computing device of the plurality of computing devices and controls the first computing device; when in the 3D operational mode, the pointing device is configured to select a second computing device of the plurality of computing devices to additionally control, the selection based on one or more of the pointing direction of the pointing device and the location of the second computing device.
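
The 3D-mode selection step can be sketched as choosing the device whose bearing from the pointer best aligns with the pointing direction (here via a 2D cosine comparison; names and the 2D simplification are assumptions):

```python
import math

def select_device(pointer_pos, pointer_dir, devices):
    """devices: dict name -> (x, y) location. Return the name whose
    bearing from pointer_pos is most aligned with pointer_dir."""
    px, py = pointer_pos
    dx, dy = pointer_dir
    dnorm = math.hypot(dx, dy)
    best, best_cos = None, -2.0
    for name, (x, y) in devices.items():
        vx, vy = x - px, y - py
        cos = (dx * vx + dy * vy) / (dnorm * math.hypot(vx, vy))
        if cos > best_cos:
            best, best_cos = name, cos
    return best
```

A full 3D variant would use three-component vectors and could weight the cosine score by distance, matching the abstract's "one or more of" direction and location.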

Systems, methods, and media for displaying interactive augmented reality presentations

Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the head mounted display is located closest to the head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
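
The time-sequenced presentation switching can be sketched as resolving which model a headset should show from server-issued switch instructions; the list-of-(time, model) representation is an assumption for illustration:

```python
def model_to_present(now, switch_times):
    """switch_times: time-ordered list of (t, model_id) instructions.
    Return the model whose start time is the latest not after `now`."""
    current = switch_times[0][1]
    for t, model in switch_times:
        if t <= now:
            current = model
    return current
```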

Color-sensitive virtual markings of objects
11582312 · 2023-02-14

Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.