Patent classifications
G06T2219/012
Techniques for producing three-dimensional models from one or more two-dimensional images
Described are techniques for producing a three-dimensional model of a scene from one or more two-dimensional images. The techniques include receiving, by a computing device, one or more two-dimensional digital images of a scene, each image including plural pixels; applying the received image data to a scene generator/scene understanding engine that produces from the one or more digital images a metadata output that includes depth prediction data for at least some of the plural pixels in the two-dimensional image and that produces metadata for controlling a three-dimensional computer model engine; and outputting the metadata to the three-dimensional computer model engine to produce a three-dimensional digital computer model of the scene depicted in the two-dimensional image.
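The depth-prediction metadata described in this abstract can be illustrated with a minimal sketch: back-projecting a per-pixel depth map into a 3D point cloud under an assumed pinhole camera model. The abstract does not specify a camera model, so the intrinsics and function name here are hypothetical.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud,
    assuming a pinhole camera (an assumption; the patent does not
    specify the camera model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (h, w, 3) array of points

# A constant depth map unprojects to a flat plane at z = 1.
pts = depth_to_point_cloud(np.ones((4, 4)), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

A real scene-understanding engine would predict the depth map with a neural network; the back-projection step afterwards is the same.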
Artificial Intelligence Intra-Operative Surgical Guidance System and Method of Use
The inventive subject matter is directed to a computing platform configured to execute one or more automated artificial intelligence models, wherein the one or more automated artificial intelligence models include a neural network model, wherein the one or more automated artificial intelligence models are trained on a plurality of radiographic images from a data layer to detect a plurality of anatomical structures or a plurality of hardware, wherein the detected anatomical structures include a pelvic teardrop and a symphysis pubis joint; detecting a plurality of anatomical structures in a radiographic image of a subject, wherein the plurality of anatomical structures are detected by the computing platform by the step of classifying the radiographic image with reference to a good-side radiographic image of the subject; and constructing a graphical representation of data, wherein the graphical representation is a subject-specific functional pelvis grid, the subject-specific functional pelvis grid being generated based upon the anatomical structures detected by the computing platform in the radiographic image. Various types of functional grids can be generated based on the situation detected.
METHODS AND SYSTEMS FOR PROVISIONING A VIRTUAL EXPERIENCE OF A BUILDING
Disclosed herein is a method of provisioning a virtual experience. The method may include receiving 2D floor plan data associated with a building; receiving at least one contextual data; analyzing each of the 2D floor plan data and the at least one contextual data using a machine learning model; determining at least one textual data embedded in the 2D floor plan data based on the analyzing; identifying a plurality of building objects based on the analyzing; identifying a plurality of amenity regions; identifying a plurality of utility objects; retrieving a plurality of virtual building objects; retrieving a plurality of virtual utility objects; generating interactive 3D model data associated with the 2D floor plan data based on the analyzing, the plurality of virtual building objects, and the plurality of virtual utility objects; and transmitting the interactive 3D model data to a user device.
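The step of turning identified 2D building objects into a 3D model can be sketched as a simple extrusion: each detected object footprint becomes an axis-aligned box with a type-specific height. The height table and function name are illustrative assumptions; the patent retrieves richer virtual objects rather than plain boxes.

```python
# Hypothetical per-type extrusion heights in metres; a real system would
# retrieve virtual building/utility objects from a catalogue instead.
DEFAULT_HEIGHTS = {"wall": 2.7, "door": 2.0, "window": 1.2}

def extrude_floor_plan(objects):
    """Turn detected 2D building objects (label, bounding box) into
    simple axis-aligned 3D boxes by type-specific extrusion."""
    boxes = []
    for label, (x0, y0, x1, y1) in objects:
        height = DEFAULT_HEIGHTS.get(label, 1.0)
        boxes.append({"label": label,
                      "min": (x0, y0, 0.0),
                      "max": (x1, y1, height)})
    return boxes

model = extrude_floor_plan([("wall", (0, 0, 5, 0.2)),
                            ("door", (2, 0, 3, 0.2))])
```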
DUAL MODE CONTROL OF VIRTUAL OBJECTS IN 3D SPACE
Systems, methods, and non-transitory computer readable media containing instructions for selectively controlling display of virtual objects are provided. In one implementation, virtual objects may be virtually presented in an environment via a wearable extended reality appliance operable in first and second display modes: in the first display mode, positions of the virtual objects are maintained in the environment regardless of detected movements of the wearable extended reality appliance, and in the second display mode, the virtual objects move in the environment in response to detected movements of the wearable extended reality appliance. Movement of the wearable extended reality appliance may be detected; a selection of the first or second display mode may be received; and, in response to the selected display mode, display signals configured to present the virtual objects in a manner consistent with the selected display mode may be outputted for presentation via the wearable extended reality appliance.
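The two display modes amount to world-locked versus head-locked placement, which can be sketched in a few lines. The mode names and function below are illustrative, not the patent's terminology.

```python
def rendered_position(anchor_pos, headset_delta, mode):
    """Compute where to draw a virtual object under the two display modes.

    "world_locked" (the first mode): the object keeps its anchored
    position regardless of headset movement. "head_locked" (the second
    mode): the object translates with the detected headset movement.
    Mode names are illustrative assumptions.
    """
    if mode == "world_locked":
        return anchor_pos
    return tuple(a + d for a, d in zip(anchor_pos, headset_delta))
```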
Three-dimensional shape data generation apparatus, three-dimensional modeling apparatus, three-dimensional shape data generation system, and non-transitory computer readable medium storing three-dimensional shape data generation program
A three-dimensional shape data generation apparatus includes: a processor configured to obtain two-dimensional shape data representing a two-dimensional shape corresponding to a three-dimensional shape of a target to which attribute information is to be assigned, obtain the attribute information of the two-dimensional shape, and assign the obtained attribute information to at least some three-dimensional elements among plural three-dimensional elements representing the three-dimensional shape to generate three-dimensional shape data.
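The attribute-assignment step can be sketched by tagging the three-dimensional elements whose projection falls inside a 2D region carrying the attribute. A voxel list and a rectangular region are simplifying assumptions; the patent's three-dimensional elements and attribute information are more general.

```python
def assign_attribute(voxels, region, attribute):
    """Assign an attribute (e.g. a colour) to the voxels whose (x, y)
    projection falls inside a rectangular 2D region of the corresponding
    two-dimensional shape. Voxels and the rectangle are assumptions."""
    x0, y0, x1, y1 = region
    return {v: attribute
            for v in voxels
            if x0 <= v[0] <= x1 and y0 <= v[1] <= y1}

tagged = assign_attribute([(0, 0, 0), (5, 5, 1)], (0, 0, 1, 1), "red")
```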
COMPARISON METHOD AND MODELING METHOD FOR CHIP PRODUCT, DEVICE AND STORAGE MEDIUM
The present application provides a comparison method and a modeling method for a chip product, a device, and a storage medium. According to the method, the chip product is modeled in advance by using a neural network based on a slice sequence of the chip product to obtain a three-dimensional stereoscopic model. When chip products are compared, a comparison feature is acquired in response to a user operation. For each chip product, a comparison result corresponding to the comparison feature is acquired from the three-dimensional stereoscopic model corresponding to that chip product. Then, the comparison result corresponding to each chip product is displayed.
Image generating device
The present disclosure provides an image generating device that includes processing circuitry configured to acquire an image captured by an imaging device to be installed in a water-surface movable body, acquire positional information indicative of a position of the water-surface movable body, acquire posture information indicative of a posture of the water-surface movable body, acquire additional display information including information indicative of positions of one or more locations, generate a synthesized image where a graphic rendering a three-dimensional virtual reality object indicative of the additional display information is synthesized on the captured image based on the positional information, the posture information, and the additional display information, and place the graphic across a boundary of the captured image when the captured image is placed only in a certain portion of the synthesized image.
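Synthesizing a graphic onto the captured image requires projecting the location of the additional display information into image coordinates using the vessel's position and posture. A minimal sketch follows, assuming a pinhole projection and heading-only rotation (the patent's posture information would also supply pitch and roll); all names are hypothetical.

```python
import math

def project_to_image(point, cam_pos, yaw, f, cx, cy):
    """Project a world point (x, y, z) to pixel coordinates for a camera
    at cam_pos rotated by yaw (radians) about the vertical axis.
    Pitch and roll are omitted for brevity."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    # Rotate the offset into the camera frame (camera looks along +x).
    fwd = math.cos(yaw) * dx + math.sin(yaw) * dy
    side = -math.sin(yaw) * dx + math.cos(yaw) * dy
    if fwd <= 0:
        return None  # the point is behind the camera
    return (cx + f * side / fwd, cy - f * dz / fwd)
```

A point dead ahead of the camera projects to the principal point `(cx, cy)`.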
METHOD AND SYSTEM FOR DESIGNING ORTHOSES
A method and computer program product, the method comprising: receiving two or more images including a representation of a foot of a patient, wherein the images are captured when the patient is lying with the patient's shin elevated, and wherein the patient is wearing a sock on the foot, the sock having attached thereto an object having known dimensions; generating a three-dimensional model of the foot from the images, comprising determining at least one dimension of the foot from a representation of the object in at least one of the images; and creating a design of an orthotic in accordance with the three-dimensional model of the foot.
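The role of the known-dimension object is that of a scale reference: once its size in pixels is measured, any pixel measurement of the foot can be converted to physical units. A minimal sketch, with hypothetical numbers:

```python
def scale_from_reference(ref_pixels, ref_mm, measured_pixels):
    """Convert a pixel measurement to millimetres using the known-size
    object attached to the sock as a scale reference."""
    return measured_pixels * (ref_mm / ref_pixels)

# If the reference object spans 50 px and is known to be 30 mm wide,
# a foot length of 400 px corresponds to 240 mm.
foot_mm = scale_from_reference(50, 30, 400)
```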
GEOMETRICAL COMPENSATIONS
An example method includes acquiring, by at least one processor, (i) an indication of measured dimensions of objects generated in a common additive manufacturing build operation, wherein the objects include at least one instance of a first object generated based on first object model data and at least one instance of a second object generated based on second object model data; and (ii) an indication of the orientation of the measured dimensions. Vector components for each of the measured dimensions may be determined based on the indication of the orientation. A first geometrical compensation for use in modifying the first object model data may be determined based on the measured dimensions and the vector components relating to the first object, and a second geometrical compensation for use in modifying the second object model data may be determined based on the measured dimensions and the vector components relating to the second object.
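Determining a compensation from measured dimensions and their orientation vector components can be sketched as a least-squares fit of one scale factor per axis. The linear model below is an illustrative simplification of the patent's geometrical compensation, and all names are hypothetical.

```python
import numpy as np

def per_axis_compensation(nominal, measured, orientations):
    """Fit one scale factor per axis from dimension measurements.

    Each measured length is modelled as the nominal length scaled by
    per-axis factors weighted by the dimension's orientation components.
    This linear model is a simplifying assumption, not the patent's
    exact formulation.
    """
    nominal = np.asarray(nominal, dtype=float)
    measured = np.asarray(measured, dtype=float)
    A = np.abs(np.asarray(orientations, dtype=float)) * nominal[:, None]
    scales, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return 1.0 / scales  # multiply model dimensions by this to compensate

# Axis-aligned measurements: x prints 2% large, y 2% small, z on size.
comp = per_axis_compensation([10, 10, 10], [10.2, 9.8, 10.0], np.eye(3))
```

The fitted factors shrink the model in x, enlarge it in y, and leave z unchanged, so the next build lands on the nominal dimensions.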
HEADSET-BASED INTERFACE AND MENU SYSTEM
Disclosed herein are various embodiments for a headset-based interface and menu system. An embodiment operates by determining that a first position of a headset, which is configured to display and enable interactions with an interface of a computing system, is less than a threshold. A second position of the headset is detected, and it is determined that the second position of the headset is greater than the threshold. Responsive to determining that the second position of the headset is greater than the threshold, a menu is provided for display in the interface visible via the headset, overlaying the interface.
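The two-position threshold check reduces to a crossing test: the menu appears only when the headset moves from below the threshold to above it. The pitch-angle interpretation and threshold value below are assumptions for illustration.

```python
PITCH_THRESHOLD = 30.0  # hypothetical threshold, in degrees of head pitch

def menu_should_display(first_pitch, second_pitch, threshold=PITCH_THRESHOLD):
    """Show the menu only when the first position is below the threshold
    and the second position is above it, per the abstract's check."""
    return first_pitch < threshold and second_pitch > threshold
```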