Patent classifications
G06T2219/004
Method for visualizing at least a zone of an object in at least one interface
The invention concerns a computer-implemented method for visualizing at least a zone of an object in at least one interface, said method comprising the following steps: obtaining at least one image of said zone, said image comprising at least one channel, said image being a 2-dimensional or 3-dimensional image comprising pixels or voxels, a value being associated with each channel of each pixel or voxel of said image, a representation of said image being displayed in the interface; obtaining at least one annotation from a user, said annotation defining a group of selected pixels or voxels of said image; calculating a transfer function based on said selected pixels or voxels and applying said transfer function to the values of each channel of the image; and updating said representation of the image in the interface, in which the colour and the transparency of the pixels or voxels of said representation depend on the transfer function.
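The core idea of annotation-driven transfer functions can be sketched as follows. This is a minimal illustration, not the patent's actual method: it assumes the statistics (mean and spread) of the user-selected pixel values define a Gaussian opacity window, and it uses a placeholder grey-level colour ramp; the function names are hypothetical.

```python
import numpy as np

def transfer_function_from_annotation(image, mask):
    """Derive a transfer function from user-selected pixels.

    Sketch only: the selected pixels' per-channel mean and spread define
    a Gaussian window; values close to the selection become opaque,
    values far from it become transparent.
    """
    selected = image[mask]                       # (n_selected, channels)
    mu = selected.mean(axis=0)
    sigma = selected.std(axis=0) + 1e-6          # avoid division by zero

    def tf(values):
        # Transparency: per-channel Gaussian similarity to the selection.
        alpha = np.exp(-0.5 * ((values - mu) / sigma) ** 2).prod(axis=-1)
        # Colour: grey-level ramp driven by the first channel (placeholder).
        colour = np.clip(values[..., :1] / 255.0, 0.0, 1.0).repeat(3, axis=-1)
        return colour, alpha

    return tf

# Usage: annotate the bright region of a single-channel 2-D image.
img = np.zeros((4, 4, 1))
img[1:3, 1:3, 0] = 200.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

tf = transfer_function_from_annotation(img, mask)
colour, alpha = tf(img)     # selected pixels opaque, background transparent
```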
VIRTUAL MANUFACTURING USING VIRTUAL BUILD AND ANALYSIS TOOLS
A virtual build of an assembly may be performed by operating a virtual build tool inside of an active session of design software that is configured to design an assembly having multiple individual parts; importing characteristics of the assembly from a three-dimensional (3D) model of the assembly that is maintained by the design software; receiving an input of sequence numbers that indicate an assembly order for the individual parts; generating images for the individual parts as they are incrementally added to the assembly based on the sequence numbers; and generating a set of build instructions based on the sequence numbers and the images for the individual parts, where the set of build instructions illustrates how to physically manufacture the assembly.
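The sequencing step above can be sketched in a few lines. This is a simplified, hypothetical model of the logic only: parts are sorted by their user-entered sequence numbers, and the incremental state of the assembly at each step stands in for the rendered image the real tool would generate.

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    sequence: int   # assembly order entered by the user

def build_instructions(parts):
    """Generate ordered build steps from sequence numbers (sketch only)."""
    steps = []
    placed = []
    for part in sorted(parts, key=lambda p: p.sequence):
        placed.append(part.name)
        steps.append({
            "step": len(steps) + 1,
            "instruction": f"Install {part.name}",
            "assembly_state": list(placed),   # stand-in for the rendered image
        })
    return steps

# Usage: sequence numbers need not match the order parts were listed.
steps = build_instructions([Part("bracket", 2), Part("frame", 1), Part("cover", 3)])
```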
GLASSES-TYPE WEARABLE DEVICE PROVIDING AUGMENTED REALITY GUIDE AND METHOD FOR CONTROLLING THE SAME
A glasses-type wearable device providing an augmented reality guide, and a method for controlling the same, are provided. The glasses-type wearable device includes a communication circuit, a camera, and a processor configured to: receive a second surrounding-image from an external electronic device connected with the glasses-type wearable device through the communication circuit, while obtaining a first surrounding-image of the glasses-type wearable device using the camera; identify a first task being performed by a user of the glasses-type wearable device using the first surrounding-image, and a second task being performed by a user of the external electronic device using the second surrounding-image; identify a difference in current progress status between the first task and the second task; and control the communication circuit to provide an AR guide corresponding to the second task to the external electronic device, based on the identified difference in progress status.
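The progress-comparison step can be illustrated with a toy model. This is an assumed simplification, not the patent's recognition pipeline: both users follow the same ordered procedure, completed steps are tracked as sets, and the guide sent to the remote device is the first step the local user has completed but the remote user has not.

```python
def next_guide_step(procedure, local_done, remote_done):
    """Pick the AR guide step to send to the remote device (sketch).

    `procedure` is an ordered list of step names; `local_done` and
    `remote_done` are sets of completed step names. Returns the first
    step the local user has finished that the remote user has not,
    or None when there is no progress difference to guide on.
    """
    for step in procedure:
        if step in local_done and step not in remote_done:
            return step
    return None

procedure = ["unplug", "open cover", "swap battery", "close cover"]
guide = next_guide_step(procedure, {"unplug", "open cover"}, {"unplug"})
# The remote user lags at "open cover", so that step is the guide to send.
```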
Automated Spatial Indexing of Images to Video
A spatial indexing system receives a video that is a sequence of frames depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial location at which each frame was captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each frame at its corresponding location within the model.
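A toy version of spatial indexing can be written as interpolation between known waypoints. This is a deliberately simplified stand-in for the pose estimation a real system would use: `anchors` maps a few frame indices to known (x, y) floor-plan positions, and intermediate frames are linearly interpolated.

```python
def index_frames(n_frames, anchors):
    """Assign a 2-D floor-plan location to every video frame (sketch).

    `anchors` maps frame index -> (x, y). Frames before the first anchor
    or after the last one take that anchor's position; frames in between
    are linearly interpolated.
    """
    keys = sorted(anchors)
    locations = []
    for f in range(n_frames):
        if f <= keys[0]:
            locations.append(anchors[keys[0]])
        elif f >= keys[-1]:
            locations.append(anchors[keys[-1]])
        else:
            for a, b in zip(keys, keys[1:]):
                if a <= f <= b:
                    t = (f - a) / (b - a)
                    xa, ya = anchors[a]
                    xb, yb = anchors[b]
                    locations.append((xa + t * (xb - xa), ya + t * (yb - ya)))
                    break
    return locations

# Usage: a straight walk from (0, 0) to (4, 0) across five frames.
locs = index_frames(5, {0: (0.0, 0.0), 4: (4.0, 0.0)})
```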
Device, method, and system for displaying a combined image representing a position of sensor having defect and a vehicle
The present technology relates to an information processing device, an information processing method, and an information processing system by which a user can easily recognize the position of a defective sensor among the sensors mounted on a vehicle. Position-related information about a relative position or direction with respect to the vehicle is acquired, and, in accordance with the position-related information, a combined image is displayed that combines a defect image representing the position of the defective sensor with a vehicle image depicting the vehicle. The present technology can be applied so that a user can recognize a defective sensor among the sensors mounted on a vehicle.
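The compositing of a defect image with a vehicle image can be sketched as a simple overlay. The sensor names and pixel positions below are hypothetical, and a real system would additionally select the vehicle view from the user's position-related information; this shows only the marker-overlay idea.

```python
import numpy as np

# Hypothetical pixel positions of sensors on a top-view vehicle image.
SENSOR_POSITIONS = {
    "front_camera": (2, 10),
    "rear_lidar": (18, 10),
}

def combine_defect_image(vehicle_image, defective_sensors):
    """Overlay defect markers on a vehicle image (sketch only).

    Each defective sensor's known position on the vehicle image is
    marked by setting a red pixel; the input image is left unmodified.
    """
    out = vehicle_image.copy()
    for sensor in defective_sensors:
        r, c = SENSOR_POSITIONS[sensor]
        out[r, c] = [255, 0, 0]    # red marker at the sensor location
    return out

# Usage: a flat grey 20x20 vehicle image with one defective sensor.
vehicle = np.full((20, 20, 3), 200, dtype=np.uint8)
combined = combine_defect_image(vehicle, ["rear_lidar"])
```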
Providing enhanced functionality in an interactive electronic technical manual
Embodiments of the present disclosure provide methods, apparatuses, systems, and computer program products for transferring the performance of a procedure found in the technical documentation for an item via an interactive electronic technical manual (IETM) system configured to provide electronic and credentialed access to the technical documentation. In one embodiment, a method is provided comprising: providing the steps of the procedure in the order in which they are to be carried out; and, while a user is participating in the performance of the procedure: causing the particular step being carried out to be highlighted; and, upon receiving input selecting a transfer mechanism: causing an indication to be displayed between the particular step and the next step to be carried out, identifying where the performance has been suspended; providing a transfer window displaying transfer information; and recording the transfer information and an identifier for the indication.
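The suspend-and-transfer mechanism can be modeled with a small session object. This is a hypothetical data model, not the IETM's actual design: steps are strings, `current` is the highlighted step index, and suspending records a marker (with an identifier and the transfer information) placed after the current step.

```python
import uuid

class ProcedureSession:
    """Minimal sketch of suspend-and-transfer in an IETM procedure."""

    def __init__(self, steps):
        self.steps = steps
        self.current = 0       # index of the highlighted step
        self.markers = []      # recorded transfer indications

    def suspend(self, transfer_info):
        """Record a transfer indication between the current and next step."""
        marker = {
            "id": str(uuid.uuid4()),       # identifier for the indication
            "after_step": self.current,    # indication placed after this step
            "info": transfer_info,         # transfer window contents
        }
        self.markers.append(marker)
        return marker

# Usage: suspend mid-procedure and hand off to the next shift.
session = ProcedureSession(["Remove panel", "Disconnect harness", "Replace unit"])
session.current = 1
marker = session.suspend({"to": "next shift", "reason": "end of day"})
```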
Display apparatus, image processing apparatus, and control method
A display apparatus includes a display screen, and a controller that causes the display screen to display a composite image in which a first image acquired by imaging a space by a camera and a second image representing at least one type of aerosol existing in the space are combined. The position of the at least one type of aerosol as seen in a depth direction in the first image is reflected in the second image.
Method for displaying annotation information, electronic device and storage medium
A method for displaying annotation information, an electronic device, and a storage medium, related to the fields of computing and image information processing, are provided. The method includes: acquiring depth information and annotation information of a target region in a first image captured at a first angle of view; establishing an association relationship between the depth information and the annotation information; and, in a case that a second image at a second angle of view is acquired, determining a display region of the target region in the second image based on the depth information, and displaying the annotation information of the target region in that display region based on the association relationship. In this way, the annotation information of the target region can be displayed across images taken from different angles of view with the aid of the depth information.
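Determining the display region in the second view from depth can be sketched with standard pinhole-camera geometry. This is an assumed formulation, not necessarily the patent's exact one: the annotated pixel is back-projected into 3-D using its depth, transformed by the relative pose (R, t) of the second camera, and projected into the second image.

```python
import numpy as np

def reproject_annotation(pixel, depth, K, R, t):
    """Move an annotation from a first view to a second view using depth.

    `pixel` is (u, v) in the first image, `depth` its scene depth,
    `K` the shared camera intrinsics, and (R, t) the relative pose of
    the second camera. Returns the annotation's (u, v) in the second image.
    """
    u, v = pixel
    # Back-project the pixel into the first camera's 3-D coordinates.
    p3d = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Transform into the second camera's frame and project.
    q = K @ (R @ p3d + t)
    return q[:2] / q[2]

# Usage with assumed intrinsics: a pixel at the principal point, seen at
# depth 2, after the second camera translates by 1 unit along x.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
new_px = reproject_annotation((50, 50), depth=2.0,
                              K=K, R=np.eye(3), t=np.array([1.0, 0.0, 0.0]))
```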
Securing virtual objects tracked in an augmented reality experience between multiple devices
There are provided systems and methods for securing virtual objects tracked in an augmented reality experience between multiple devices. A user may capture visual data utilizing a device at a location, where the visual data includes one or more real-world objects. An augmented reality experience may be displayed with the real-world objects, where virtual graphics or other visual indicators are overlaid onto an output of the environment and may be associated with various objects so that the virtual graphics may be seen with the environment. The virtual graphics may further be associated with an amount left by the user for another user, such as a tip or a payment for a service. The other user may be required to complete some task, where the completion of the task may be identified by changes to the real-world objects when the environment is captured by the other user's device.
Surgical navigation with stereovision and associated methods
A surgical guidance system has two cameras that provide stereo image streams of a surgical field, and a stereo viewer. The system has a 3D surface extraction module that generates a first 3D model of the surgical field from the stereo image streams; a registration module for co-registering annotating data with the first 3D model; and a stereo image enhancer for graphically overlaying at least part of the annotating data onto the stereo image streams to form an enhanced stereo image stream for display, where the enhanced stream enhances a surgeon's perception of the surgical field. The registration module has an alignment refiner that adjusts the registration of the annotating data with the 3D model based upon matching of features within the 3D model against features within the annotating data; and, in an embodiment, a deformation modeler that deforms the annotating data based upon a determined tissue deformation.
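Refining a rigid registration from matched features is a well-studied problem; one standard approach (used here as an illustrative stand-in, not necessarily the patent's method) is the Kabsch algorithm, which recovers the least-squares rotation and translation aligning corresponding point sets.

```python
import numpy as np

def refine_registration(model_pts, annot_pts):
    """Rigidly align annotating-data points to 3D-model points (Kabsch).

    `model_pts` and `annot_pts` are (n, 3) arrays of corresponding
    feature points. Returns (R, t) such that R @ a + t maps each
    annotating point a onto its model counterpart in least squares.
    """
    mu_m = model_pts.mean(axis=0)
    mu_a = annot_pts.mean(axis=0)
    H = (annot_pts - mu_a).T @ (model_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                      # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_a
    return R, t

# Usage: annotating data is the model rotated 90 degrees about z and shifted.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
annot = model @ Rz.T + np.array([1.0, 2.0, 3.0])

R, t = refine_registration(model, annot)
aligned = annot @ R.T + t    # annotating data snapped back onto the model
```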