Patent classifications
G06T2219/004
Financial education tool
A system includes one or more processors configured to receive income-related data, receive an input comprising an image that corresponds to a desired acquisition, access at least one database to identify a cost of the desired acquisition, generate a first purchase plan for the desired acquisition based on the income-related data and the cost, graphically augment the image to generate a graphically-augmented version of the image comprising at least a portion of the image and the first purchase plan, and display the graphically-augmented version of the image on a device.
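The purchase-plan step can be illustrated with a minimal sketch. The function name, the savings-rate parameter, and the months-to-afford rule are all assumptions for illustration; the abstract does not specify how the plan is computed.

```python
import math

def generate_purchase_plan(monthly_income, savings_rate, cost):
    """Toy purchase plan: a monthly set-aside and the number of months
    until the cost is covered. All names are illustrative, not from
    the patent."""
    monthly_savings = monthly_income * savings_rate
    months_needed = math.ceil(cost / monthly_savings)
    return {"monthly_savings": monthly_savings, "months_needed": months_needed}

# e.g. a $1,200 acquisition at 10% of a $3,000 monthly income
plan = generate_purchase_plan(monthly_income=3000, savings_rate=0.10, cost=1200)
```

The resulting plan dictionary is what would be overlaid onto the augmented version of the image.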
HEALTH MANAGEMENT SYSTEM, AND HUMAN BODY INFORMATION DISPLAY METHOD AND HUMAN BODY MODEL GENERATION METHOD APPLIED TO SAME
A health management system, including: a health assessment module configured to obtain health parameter information related to the health of a user's body, and generate a health condition assessment result on the basis of the health parameter information; a health intervention module configured to generate a health management plan on the basis of the health condition assessment result; a human body model generation module configured to generate a human body model that can be displayed on a display interface; and a human body information display module configured to display, on the basis of a received display instruction, the human body model on the display interface based on human body tissue layers, human body systems, or human body parts. Also provided are a human body information display method and a human body model generation method applicable to the health management system.
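The assessment-to-intervention pipeline can be sketched as two small functions. The BMI-style rule and the plan contents are invented for illustration; the abstract does not disclose the system's actual assessment logic.

```python
def assess(params):
    """Derive a coarse health condition from parameter readings
    (hypothetical BMI-style rule, for illustration only)."""
    bmi = params["weight_kg"] / params["height_m"] ** 2
    return "elevated" if bmi >= 25 else "normal"

def plan_for(assessment):
    """Map an assessment result to a management plan."""
    return {"elevated": ["dietary review", "weekly exercise"],
            "normal": ["annual check-up"]}[assessment]

result = assess({"weight_kg": 82.0, "height_m": 1.75})
plan = plan_for(result)
```

A display module would then render the plan against the generated human body model.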
THREE-DIMENSIONAL DISPLAY DEVICE, THREE-DIMENSIONAL DISPLAY METHOD, AND THREE-DIMENSIONAL DISPLAY PROGRAM
A CPU 20, which functions as a processor of a three-dimensional display device 10, maps pieces of damage information that are stored in a storage unit 16 and associated with members of a construction onto the corresponding members of a three-dimensional model stored in the storage unit 16. When accepting, from an operation unit 18, an instruction to display an inspection target member of the construction, the CPU 20 creates a three-dimensional model of only the member for which the display instruction is accepted and onto which a piece of damage information is mapped, and causes a display unit 30 to display that three-dimensional model.
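The mapping-and-filter behaviour can be sketched as follows: damage records keyed by member ID are attached to model members, and a display request yields a sub-model containing only the requested member that carries damage data. The data structures are hypothetical, not the patent's actual model format.

```python
def map_damage(model_members, damage_records):
    """Attach each damage record to the member it references."""
    by_member = {}
    for rec in damage_records:
        by_member.setdefault(rec["member_id"], []).append(rec)
    return {
        mid: {"geometry": geom, "damage": by_member.get(mid, [])}
        for mid, geom in model_members.items()
    }

def model_for_member(mapped, member_id):
    """Return a sub-model with only the requested member, and only
    if damage information is mapped onto it."""
    member = mapped.get(member_id)
    if member and member["damage"]:
        return {member_id: member}
    return {}

mapped = map_damage(
    {"girder-1": "mesh-a", "pier-2": "mesh-b"},
    [{"member_id": "girder-1", "type": "crack", "width_mm": 0.3}],
)
sub = model_for_member(mapped, "girder-1")
```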
LUNG ANALYSIS AND REPORTING SYSTEM
Systems, methods, and executable programs for providing lung candidacy information to health care professionals. A method includes receiving three-dimensional image data categorized as lung lobe voxels, airway voxels, or lung fissure voxels. A fissure integrity score is generated for the lung fissure voxels. First perspective transparent views of the categorized lung lobe voxels, the categorized airway voxels, and the categorized lung fissure voxels are generated based on a first point of view. The first perspective view of the lung fissure voxels includes a visual representation of fissure integrity based on the generated fissure integrity scores for the corresponding voxels. A report is generated that includes the generated views. The report is outputted.
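One way to picture a fissure integrity score is as the fraction of fissure voxels classified as intact. The abstract does not define the actual scoring method, so this fraction is purely illustrative.

```python
def fissure_integrity_score(voxel_labels):
    """Toy integrity score: fraction of fissure voxels marked intact.
    voxel_labels is a list of booleans, True where the fissure is
    intact (an assumed representation, not the patent's)."""
    if not voxel_labels:
        return 0.0
    return sum(voxel_labels) / len(voxel_labels)

score = fissure_integrity_score([True, True, True, False])
```

The per-voxel scores would then drive the visual representation of fissure integrity in the transparent perspective views.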
SELF GUIDANCE BASED ON DIMENSIONAL RELATIONSHIP
A method, a computer program product, and a computer system guide a user through dimensional relationships. The method includes receiving a plurality of images of a unit from a perspective of a user. When the number of corresponding three-dimensional points between a current image and a previous image is less than a registration threshold required to perform a three-dimensional registration operation, the method includes performing a three-dimensional data augmentation operation based on two-dimensional data from the current image and the previous image to generate extended corresponding three-dimensional points. The method includes determining a three-dimensional transform function between the current image and the previous image based on the extended corresponding three-dimensional points. The method includes generating annotations to be shown for the unit in the current image based on the three-dimensional transform function, the annotations being shown as a virtual rendering in a mixed reality environment viewed by the user.
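The threshold-and-augment branch can be sketched schematically. The threshold value and the 2-D-to-3-D lifting step are stubbed assumptions; the abstract does not specify either.

```python
REGISTRATION_THRESHOLD = 4  # assumed minimum number of correspondences

def extend_from_2d(points_3d, matches_2d):
    """Stub for the augmentation step: lift 2-D matches into extra
    3-D correspondences (depth fixed at 0.0 purely for illustration)."""
    return points_3d + [(u, v, 0.0) for (u, v) in matches_2d]

def correspondences_for_transform(points_3d, matches_2d):
    """Fall back to 2-D augmentation when too few 3-D correspondences
    exist to estimate the frame-to-frame transform."""
    if len(points_3d) < REGISTRATION_THRESHOLD:
        return extend_from_2d(points_3d, matches_2d)
    return points_3d

pts = correspondences_for_transform(
    [(1.0, 2.0, 3.0)],                # too few native 3-D points
    [(10, 20), (30, 40), (50, 60)],   # 2-D matches between frames
)
```

The extended point set is what the transform estimation would consume.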
DIARISATION AUGMENTED REALITY AIDE
An image of a real-world environment, including one or more users, is received from an image capture device. A mask status of a first user is determined by a processor based on the image. A stream of audio including speech from one or more users is captured from one or more audio transceivers. A first user speech is identified from the stream of audio by the processor. The stream of audio is parsed, by the processor and based on the first user speech and based on an audio processing technique, to create a first user speech element. An augmented view that includes the first user speech element is generated, for a wearable computing device, based on the first user speech and based on the mask status.
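A simplified sketch of the diarisation step: attribute speech segments to the first user and tag the resulting speech element with the cue source, with the mask status deciding whether visual lip cues can be relied on. All fields and the cue logic are illustrative assumptions.

```python
def build_speech_element(segments, first_user_id, mask_on):
    """Pick the first user's speech out of a diarised stream and tag
    it with the cue source an augmented view would rely on."""
    words = [s["text"] for s in segments if s["speaker"] == first_user_id]
    return {
        "speaker": first_user_id,
        "text": " ".join(words),
        # a masked speaker offers no lip cues, so fall back to audio only
        "cue": "audio-only" if mask_on else "audio+lip",
    }

element = build_speech_element(
    [{"speaker": "u1", "text": "hello"},
     {"speaker": "u2", "text": "hi"},
     {"speaker": "u1", "text": "there"}],
    first_user_id="u1",
    mask_on=True,
)
```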
Virtual augmentation of anatomical models
A mixed reality device (30) employing a mixed reality display (40) for visualizing a virtual augmentation of a physical anatomical model, and a mixed reality controller (50) for controlling a visualization by the mixed reality display (40) of the virtual augmentation of the physical anatomical model including a mixed reality interaction between the physical anatomical model and a virtual anatomical model. The mixed reality controller (50) may employ a mixed reality registration module (51) for controlling a spatial registration between the physical anatomical model within a physical space and the virtual anatomical model within a virtual space, and a mixed reality interaction module for controlling the mixed reality interaction between the physical anatomical model and the virtual anatomical model based on the spatial registration between the physical anatomical model within the physical space and the virtual anatomical model within the virtual space.
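The spatial registration between physical and virtual space can be sketched as a rigid transform applied to a tracked point. Real systems solve for this transform from matched landmarks; here the rotation and translation are simply given, and the 2-D simplification is an assumption for brevity.

```python
import math

def register_point(p, theta, tx, ty):
    """Map a point from physical space into virtual space with a
    rigid 2-D transform: rotate by theta, then translate."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

# a landmark at (1, 0) on the physical anatomical model, mapped into
# the virtual model's space (transform values are illustrative)
virtual = register_point((1.0, 0.0), theta=math.pi / 2, tx=2.0, ty=3.0)
```

The interaction module would use this shared frame to keep the physical and virtual anatomical models aligned.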
Virtual object driving method, apparatus, electronic device, and readable storage medium
The present application discloses a virtual object driving method, an apparatus, an electronic device and a readable storage medium, which relate to the technical fields of artificial intelligence and deep learning. A specific implementing solution is as follows: obtaining a target image of a real object acquired by a camera when the real object makes a limb movement; inputting the target image into a coordinate acquisition model to obtain coordinates of a plurality of key points on a limb of the real object; determining a posture of the limb of the real object according to the coordinates of each key point; and driving, according to the posture of the real object, a virtual object displayed on a screen to present the limb movement of the real object. The method greatly reduces operational complexity and cost when driving a virtual object.
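The posture-determination step can be illustrated with standard vector geometry: derive a joint angle from three key-point coordinates of the kind a coordinate acquisition model would output. The coordinate values are invented; the model itself is not shown.

```python
import math

def joint_angle(a, b, c):
    """Angle at key point b (in degrees) formed by key points a-b-c,
    e.g. the elbow angle from shoulder, elbow, and wrist points."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# illustrative shoulder, elbow, and wrist key points
elbow = joint_angle((0, 0), (1, 0), (1, 1))
```

Angles like this, computed per joint, are what would drive the on-screen virtual object to mirror the limb movement.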
METHOD AND SYSTEM FOR INDOOR POSITIONING AND IMPROVING USER EXPERIENCE
Methods and systems provide a computer-technology-enhanced experience to a user when the user visits an indoor environment. A method may include computer readable instructions for identifying and tracking the user in the indoor environment using an indoor positioning system. An indoor positioning system may include a leaky feeder cable network, a plurality of Wi-Fi access points and location tracking using triangulation and Time-of-Flight calculations. Further, based on the tracking of the user, the user is provided with an augmented navigation route for navigating in the indoor environment. The optimized navigation route is displayed on a virtual 3D model of the indoor environment. Further, the method comprises providing an augmented item list to the user while navigating in the indoor environment, such that the augmented item list is generated via the use of advanced analytics, AI/machine learning capabilities and a computer-vision-based machine learning model for object tracking and recognition.
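The positioning step can be sketched with toy trilateration: Time-of-Flight gives a distance to each of three fixed anchors (e.g. Wi-Fi access points), and the position follows from intersecting the distance circles. The right-angle anchor layout is assumed for illustration so the system is solvable in closed form.

```python
import math

def trilaterate(d_a, d_b, d_c, side=4.0):
    """2-D trilateration with anchors at (0,0), (side,0), (0,side),
    given Time-of-Flight distances d_a, d_b, d_c to each anchor."""
    # subtracting pairs of circle equations linearizes the system
    x = (side**2 - d_b**2 + d_a**2) / (2 * side)
    y = (side**2 - d_c**2 + d_a**2) / (2 * side)
    return x, y

# distances measured from a user actually standing at (1, 2)
pos = trilaterate(math.sqrt(5), math.sqrt(13), math.sqrt(5))
```

The recovered position is what the navigation route and augmented item list would be anchored to.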
Mobile Viewer Object Statusing
An example computing platform is configured to (i) maintain a three-dimensional, federated model of a construction project, where the model includes respective objects created using at least two different authoring tools, (ii) receive, via a client device installed with a viewing tool for displaying the model, one or more user inputs that collectively (a) select a displayed representation of a given object within the model and (b) assign a value for a property of the given object, (iii) based on the one or more inputs, identify a GUID of the given object within a hierarchical data structure for the model and cause the model to be updated by associating the assigned value for the property with the GUID of the given object, and (iv) cause the client device to display, via the viewing tool, the updated model including an indication of the assigned value for the property of the given object.
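Step (iii) can be sketched as a walk over a hierarchical model structure to resolve the selected object's GUID, followed by attaching the assigned property value to that GUID. The tree layout and field names are illustrative, not the platform's actual schema.

```python
def find_guid(node, display_id):
    """Walk the hierarchical data structure to find the GUID of the
    object the user selected by its displayed representation."""
    if node.get("display_id") == display_id:
        return node["guid"]
    for child in node.get("children", []):
        found = find_guid(child, display_id)
        if found:
            return found
    return None

def assign_property(properties_by_guid, guid, name, value):
    """Associate an assigned property value with an object's GUID."""
    properties_by_guid.setdefault(guid, {})[name] = value
    return properties_by_guid

model = {"guid": "g-root", "display_id": "root", "children": [
    {"guid": "g-42", "display_id": "duct-7", "children": []},
]}
props = assign_property({}, find_guid(model, "duct-7"), "status", "installed")
```

The updated property map is what the viewing tool would reflect when redisplaying the federated model.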