Patent classifications
G01S5/163
System and method for radio based location of modular arm carts in a surgical robotic system
A position and tracking system for radio-based localization in an operating room includes a receiver, a mobile cart, a processor, and a memory coupled to the processor. The mobile cart includes a robotic arm and a transmitter in operable communication with the receiver. The memory has instructions stored thereon which, when executed by the processor, cause the system to receive a signal from the transmitter, determine a position of the mobile cart in 3D space based on the signal communicated by the transmitter, and determine a spatial pose of the mobile cart based on the received signal.
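The abstract leaves the localization math open. As an illustrative sketch only (the patent does not name an algorithm), a 3D position can be recovered from ranges to several fixed anchors by linearized least-squares trilateration; the anchor layout and ranges below are hypothetical:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 3D position from distances to known anchor points.

    Subtracting the first anchor's range equation from the others
    linearizes ||p - a_i||^2 = r_i^2 into a linear system A p = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```

With four anchors at known points in the room and ranges to the cart's transmitter, this returns the cart's 3D coordinates; a real system would add noise handling and more anchors for redundancy.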
ADAPTIVE HOLOGRAPHIC PROJECTION SYSTEM WITH USER TRACKING
A holographic display system including an electronic device, a camera, and a holographic projection unit. The holographic projection unit is configured to generate a volumetric projection for viewing by a user in response to a rendering signal provided by a volumetric display application executing on the electronic device. The holographic projection unit includes a housing, a projector at least partially disposed within the housing and operative to display images based upon the rendering signal, and a semi-reflective element oriented to reflect light from the images in order to create the volumetric projection. The camera is oriented such that the user is within its field of view, the camera being operative to provide image information to the volumetric display application for determination of a position of the user. The volumetric projection is adapted in response to the position of the user.
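As a hedged illustration of the final step (not the patent's stated method), adapting the projection can be as simple as computing the user's bearing from the camera-derived position and re-rendering toward it; the coordinate convention is assumed:

```python
import math

def user_bearing_deg(user_xy, display_xy=(0.0, 0.0)):
    """Bearing, in degrees, from the display origin to the tracked user,
    which a renderer could use to rotate the volumetric projection
    toward the viewer. Coordinate frame is hypothetical."""
    dx = user_xy[0] - display_xy[0]
    dy = user_xy[1] - display_xy[1]
    return math.degrees(math.atan2(dy, dx))
```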
Method and system for generating an HRTF for a user
A method of obtaining a head-related transfer function for a user is provided. The method comprises generating an audio signal for output by a handheld device and outputting the generated audio signal at a plurality of locations by moving the handheld device to those locations. The audio output by the handheld device is detected at left-ear and right-ear microphones. A pose of the handheld device relative to the user's head is determined for at least some of the locations. One or more personalised HRTF features are then determined based on the detected audio and corresponding determined poses of the handheld device. The one or more personalised HRTF features are then mapped to a higher-quality HRTF for the user, wherein the higher-quality HRTF corresponds to an HRTF measured in an anechoic environment. This mapping may be learned using machine learning, for example. A corresponding system is also provided.
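One concrete example of a "personalised HRTF feature" that could be extracted from the left-ear and right-ear recordings at each pose is the interaural time difference. This sketch locates the cross-correlation peak; it is an assumption for illustration, not the patent's stated feature set:

```python
import numpy as np

def itd_samples(left, right):
    """Interaural time difference, in samples, between the left- and
    right-ear recordings of the same handheld-device sound: the lag at
    which their cross-correlation peaks (positive = left lags right)."""
    corr = np.correlate(np.asarray(left, dtype=float),
                        np.asarray(right, dtype=float), mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)
```

Paired with the determined pose at each location, such features form the input that the learned mapping lifts to a higher-quality HRTF.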
METHOD FOR DETERMINING SITUATIONAL AWARENESS IN WORKSITE
A method for determining situational awareness in a worksite includes setting at least one environment modelling apparatus (EM) at least one of: on a machine or external to the machine; setting at least one tracking apparatus (TA) at least one of: on the machine or external to the machine; acquiring data by the at least one tracking apparatus (TA); and acquiring data by the at least one environment modelling apparatus. Further, the method includes receiving, by at least one position determination unit (PDU), data related to the at least one tracking apparatus (TA) and data related to the at least one environment modelling apparatus (EM); and determining, by the at least one position determination unit (PDU), based at least in part on the received data, the location and orientation of the machine in the worksite.
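How the PDU combines tracking-apparatus and environment-model data is unspecified; a minimal stand-in, assuming each source reports a scalar estimate with a variance, is inverse-variance weighting:

```python
def fuse_estimates(est_ta, var_ta, est_em, var_em):
    """Inverse-variance fusion of two estimates of the same quantity,
    e.g. a machine heading from the tracking apparatus (TA) and from
    the environment modelling apparatus (EM). The lower-variance
    source receives proportionally more weight."""
    w_ta, w_em = 1.0 / var_ta, 1.0 / var_em
    return (w_ta * est_ta + w_em * est_em) / (w_ta + w_em)
```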
VEHICLE POSITIONING METHOD, APPARATUS, AND CONTROLLER, INTELLIGENT VEHICLE, AND SYSTEM
The present disclosure relates to vehicle positioning methods, apparatus, controllers, intelligent vehicles, and systems. One example vehicle positioning method includes obtaining a first relative pose between a first vehicle and a help providing object, obtaining a global pose of the help providing object, and calculating a global pose of the first vehicle based on the first relative pose and the global pose.
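The core calculation — composing the help-providing object's global pose with the vehicle's relative pose — can be sketched in SE(2); the planar restriction is for brevity and the claim is not limited to 2D:

```python
import math

def compose_global_pose(helper_global, vehicle_relative):
    """Global (x, y, heading) of the first vehicle, given the help
    providing object's global pose and the vehicle's pose expressed
    in the helper's frame: rotate the relative translation into the
    global frame, then sum the headings."""
    x, y, th = helper_global
    dx, dy, dth = vehicle_relative
    gx = x + dx * math.cos(th) - dy * math.sin(th)
    gy = y + dx * math.sin(th) + dy * math.cos(th)
    return (gx, gy, (th + dth) % (2.0 * math.pi))
```

For example, a helper at (10, 5) heading north with the vehicle 2 m ahead of it places the vehicle at (10, 7) with the same heading.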
IMAGE ACQUISITION METHOD, HANDLE DEVICE, HEAD-MOUNTED DEVICE AND HEAD-MOUNTED SYSTEM
The embodiments of the disclosure relate to an image acquisition method, a handle device, a head-mounted device and a head-mounted system. The handle device comprises a shell and a control module arranged in the shell. A switch control end of an infrared circuit is connected with the control module, and infrared light beads of the infrared circuit shine outward through the shell; a switch control end of a visible light circuit is connected with the control module, and visible light strips of the visible light circuit shine outward through the shell. By means of the visible light and infrared light provided on the handle, the position of the handle device can be determined with high tracking accuracy.
SYSTEMS AND METHODS FOR DETERMINING A POSITION OF A SENSOR DEVICE RELATIVE TO AN OBJECT
A method and system to determine the position of a moveable platform relative to an object are disclosed. The method can include storing, in a database, one or more synthetic models, each trained on a synthetic model dataset corresponding to one of one or more objects; capturing an image of the object by one or more sensors associated with the moveable platform; identifying the object by comparing the captured image of the object to the one or more synthetic model datasets; generating a first model output using a first synthetic model of the one or more synthetic models, the first model output including a first relative coordinate position and a first spatial orientation of the moveable platform; and generating a platform coordinate output and a platform spatial orientation output of the moveable platform at the first position based on the first model output.
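The identification step — comparing the captured image to the stored synthetic model datasets — could be realized, purely for illustration, as a nearest-neighbour match over feature vectors; the feature representation is an assumption:

```python
import numpy as np

def identify_object(image_features, dataset_features):
    """Index of the synthetic model dataset whose stored feature
    vector lies closest (Euclidean distance) to the captured image's
    feature vector."""
    image_features = np.asarray(image_features, dtype=float)
    dists = [np.linalg.norm(image_features - np.asarray(f, dtype=float))
             for f in dataset_features]
    return int(np.argmin(dists))
```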
SYSTEMS AND METHODS FOR REMOTE CONTROL AND AUTOMATION OF A TOWER CRANE
Systems and methods for remote control and automation of tower cranes are provided herein. One system may include: a first sensing unit comprising a first image sensor configured to generate a first image sensor dataset; a second sensing unit comprising a second image sensor configured to generate a second image sensor dataset; wherein the first sensing unit and the second sensing unit are adapted to be disposed on a jib of a tower crane at a distance with respect to each other such that a field-of-view of the first sensing unit at least partly overlaps with a field-of-view of the second sensing unit; and a control unit comprising a processing module configured to determine real-world geographic location data indicative at least of a real-world geographic location of a hook of the tower crane.
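With the two jib-mounted sensing units forming a stereo pair, the hook's distance follows from pixel disparity. This is the standard rectified-stereo relation, sketched with hypothetical calibration values; the patent does not commit to this particular computation:

```python
def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
    """Depth of a point (e.g. the crane hook) observed in both
    rectified images: Z = f * B / d, with focal length f in pixels,
    baseline B in metres, and horizontal disparity d in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the pair")
    return focal_px * baseline_m / disparity
```

For instance, with an 800 px focal length and a 2 m baseline, a 40 px disparity corresponds to a 40 m depth.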
Light direction detector systems and methods
Intensity of a light from a light array comprising a plurality of light sources configured to illuminate in sequence may be detected at two optically isolated points of a motion tracker device. The optically isolated points may be disposed at a distance from one another such that a variation in intensity of light due to shadowing effects from the plurality of light sources is different at the optically isolated points. The optically isolated points may be separated by a T-shaped wall. The motion tracker device may generate a current signal representing a photodiode differential between the two optically isolated points and proportional to the intensity of the light. The current signal may be used for sensor fusion with an inertial measurement unit.
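A toy version of the differential readout, assuming one (A, B) intensity pair is sampled per light source in the illumination sequence: the source producing the smallest shadow-induced differential lies closest to the tracker's symmetry axis, a crude proxy for light direction:

```python
def nearest_axis_source(samples):
    """Given (intensity_a, intensity_b) pairs from the two optically
    isolated points, one pair per light source in the sequence,
    return the index of the source whose differential is smallest."""
    diffs = [abs(a - b) for a, b in samples]
    return min(range(len(diffs)), key=diffs.__getitem__)
```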
Systems and methods for distributed avionics processing
Disclosed are methods, systems, and non-transitory computer-readable medium for distributed vehicle processing. For instance, the method may include: in response to determining a first trigger condition of a first set of trigger conditions is satisfied, performing a first process corresponding to the first trigger condition on-board a vehicle; in response to determining a second trigger condition of a second set of trigger conditions is satisfied, prompting a second process corresponding to the second trigger condition by transmitting an edge request to an edge node and receiving an edge response from the edge node; and in response to determining a third trigger condition of a third set of trigger conditions is satisfied, prompting a third process corresponding to the third trigger condition by transmitting a cloud request to a cloud node and receiving a cloud response from the cloud node.
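The three-tier routing can be sketched as a dispatcher that checks each trigger-condition set in turn. The on-board → edge → cloud precedence here is an assumption for illustration; the claim treats the three sets independently:

```python
def dispatch(event, local_conds, edge_conds, cloud_conds,
             run_local, send_edge_request, send_cloud_request):
    """Route processing of an event: perform it on-board the vehicle,
    prompt it at an edge node, or prompt it at a cloud node, according
    to which set of trigger conditions the event satisfies."""
    if any(cond(event) for cond in local_conds):
        return ("on-board", run_local(event))
    if any(cond(event) for cond in edge_conds):
        return ("edge", send_edge_request(event))
    if any(cond(event) for cond in cloud_conds):
        return ("cloud", send_cloud_request(event))
    return ("unhandled", None)
```

Each `send_*` callable stands in for the request/response exchange with the corresponding node.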