MICROSERVICE DEPLOYMENT IN INTEGRATION AND API LAYERS USING AUGMENTED REALITY

An augmented reality (AR) development system includes computer hardware including an AR system and a development server. The development server is configured to perform identifying a plurality of microservices to be deployed into an architecture, at least one integration layer in the architecture, and at least one application programming interface (API) layer in the architecture. The AR system is configured to perform generating a first visualization of the architecture that includes: a plurality of representations respectively corresponding to the plurality of microservices to be deployed in the architecture and a plurality of distinct and visually identifiable locations that respectively correspond to a unique combination of a specific API layer and a specific integration layer; receiving an indication for modifying a placement of one of the plurality of microservices within the first visualization; and generating a second visualization of the architecture based upon the indication.
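The mapping described above, in which each distinct location in the visualization corresponds to a unique (API layer, integration layer) combination, can be sketched as follows. This is a minimal illustration only; the layer names, location numbering, and function names are assumptions, not part of the disclosure.

```python
from itertools import product

# Assumed example layers; the disclosure does not name specific layers.
api_layers = ["external_api", "internal_api"]
integration_layers = ["messaging", "etl"]

# One visually identifiable location per unique (API layer, integration layer)
# combination, as in the first visualization.
locations = {
    i: combo for i, combo in enumerate(product(api_layers, integration_layers))
}

def place_microservice(placements, service, location_id):
    """Record a microservice placement; the returned mapping corresponds
    to the second visualization generated after the move."""
    api, integration = locations[location_id]
    return {
        **placements,
        service: {"api_layer": api, "integration_layer": integration},
    }

first_visualization = {}
second_visualization = place_microservice(first_visualization, "billing-svc", 3)
```

Here moving "billing-svc" to location 3 deploys it into the internal-API/ETL combination, while the first visualization is left unchanged.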

Virtual Object Structures and Interrelationships

A virtual object system can orchestrate virtual objects defined as a collection of components and with inheritance in an object hierarchy. Virtual object components can include a container, data, a template, and a controller. A container can define the volume the virtual object is authorized to write into. A virtual object’s data can specify features such as visual elements, parameters, links to external data, meta-data, etc. The template can define view states of the virtual object and contextual breakpoints for transitioning between them. Each view state can control when and how the virtual object presents data elements. The controller can define logic for the virtual object to respond to input, context, etc. The definition of each object can specify which other object in an object hierarchy that object extends, where extending an object includes inheriting that object’s components, which can be modified or overwritten as part of the extension.
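The four-component structure and the inheritance rule described above can be sketched as a small data model. This is a hypothetical illustration: the field contents and the `extend` method name are assumptions; only the component names (container, data, template, controller) and the inherit-then-override semantics come from the text.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualObject:
    container: dict   # volume the object is authorized to write into
    data: dict        # visual elements, parameters, links, meta-data
    template: dict    # view states keyed by contextual breakpoint
    controller: dict  # logic responding to input, context, etc.

    def extend(self, **overrides):
        # Extending inherits all four components; any component passed in
        # `overrides` is overwritten, the rest are inherited unchanged.
        return replace(self, **overrides)

base = VirtualObject(
    container={"w": 1.0, "h": 1.0, "d": 0.5},
    data={"label": "generic"},
    template={"near": "detail_view", "far": "glance_view"},
    controller={"on_gaze": "expand"},
)
# A child object in the hierarchy inherits container, template, and
# controller from `base` and overwrites only the data component.
child = base.extend(data={"label": "weather-widget"})
```

The design choice mirrors the text: extension is component-wise, so a child need only redefine the components it modifies.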

INTELLIGENT CONTAINER CONFIGURING USING AUGMENTED REALITY

An augmented reality (AR) container orchestration system includes computer hardware including an AR system and a container orchestration platform. The AR system is configured to perform identifying a container to be deployed, displaying, to a user, a grid having a plurality of cells each of which is configured to receive a representation of the container, and redisplaying, using the AR system and to the user, the grid based upon a movement of the representation of the container to a particular position within the grid. The container orchestration platform is configured to perform identifying possible configurations for the container, and configuring the container based upon the movement of the representation of the container to the particular position. The grid includes a plurality of axes, and each axis of the plurality of axes represents a different configuration-related parameter of the configurations for the container.
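The cell-to-configuration mapping described above can be sketched with a two-axis grid, where each axis carries one configuration-related parameter. The parameter names and values below are assumptions for illustration; the disclosure does not specify them.

```python
# Assumed axes: x carries a CPU limit, y carries a replica count.
AXES = {
    "cpu_limit": ["250m", "500m", "1000m"],  # x-axis values
    "replicas": [1, 3, 5],                   # y-axis values
}

def configure_from_cell(x, y):
    """Map the grid cell (x, y) the container representation was moved to
    onto one value per configuration-related axis."""
    return {
        "cpu_limit": AXES["cpu_limit"][x],
        "replicas": AXES["replicas"][y],
    }

# Moving the representation to cell (2, 1) selects the third CPU limit
# and the second replica count.
config = configure_from_cell(2, 1)  # {'cpu_limit': '1000m', 'replicas': 3}
```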

Method, apparatus and terminal device for constructing parts together
11636779 · 2023-04-25

The present disclosure provides a method, an apparatus and a terminal device for constructing parts together. The method includes: determining an assembly progress of an object constructed of parts; determining a currently required part according to the assembly progress; identifying, in a part photo of the object, the currently required part using a pre-trained first neural network model and marking the one or more identified currently required parts in the part photo; and displaying, via a display, a 3D demonstration animation with the marked part photo as a background image. The method builds on the existing 3D demonstration animation: it identifies the currently required part in the captured part photo, marks the identified part in the photo, and then displays the marked part photo as the background image of the 3D demonstration animation, thereby adding a prompt for the real part.

Prediction of contact points between 3D models
11600051 · 2023-03-07

According to an aspect, a method includes receiving a first three-dimensional model (3D) model of at least a body part of a person, receiving a second 3D model of a wearable device, and predicting, by at least one machine-learning (ML) model, a plurality of contact points between the first 3D model and the second 3D model.

System and method for generating combined embedded multi-view interactive digital media representations

Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view.

Virtual Mannequin - Method and Apparatus for Online Shopping Clothes Fitting
20220327783 · 2022-10-13

Methods for virtual clothes fitting when shopping online are presented. The online shopper prepares and keeps a 3D volumetric scan or point cloud of their body and uploads it to an online store. The online store stores 3D volumetric scans of the clothing it offers for sale on its online shop and has the processing capability to fit the shopper's 3D body scan with the 3D volumetric scan of an offered clothing item. Once the online store has digitally fitted the 3D volumetric data of the offered clothing item onto the 3D volumetric scan of the shopper's body, the resulting 3D volumetric data of the clothing fitted on the shopper's "Virtual Mannequin" is sent back to the shopper to view and examine. On the screen of their computing platform, the shopper can then view and examine the 3D images received from the online shop of the selected clothing item fitted on their virtual mannequin and make a buy decision.

METHOD OF SEPARATING TERRAIN MODEL AND OBJECT MODEL FROM THREE-DIMENSIONAL INTEGRATED MODEL AND APPARATUS FOR PERFORMING THE SAME

Provided is a method of separating a terrain model and an object model from a three-dimensional integrated model and an apparatus for performing the same. A separation method according to various example embodiments includes creating separation information about an integrated model based on a multi-viewpoint image including an object on a terrain, model information of the integrated model obtained by restoring the multi-viewpoint image in three dimensions, and information of the image capture device that shot the multi-viewpoint image, and separating a terrain model and an object model from the integrated model based on the separation information.

Image data processing method and printing system for printing technology

An image data processing method and a printing system for printing technology are provided. The image includes a first bitmap image. The image data processing method includes: dividing the first bitmap image into a plurality of regions, selecting sampling positions in each of the plurality of regions, performing sampling to acquire sample points, and rearranging the sample points to form a second bitmap image. The second bitmap image is different from the first bitmap image.
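The region-divide, sample, and rearrange pipeline described above can be sketched as follows. The region size and the sampling rule (here, taking each region's top-left pixel) are assumptions for illustration; the disclosure only requires that sampling positions be selected per region and the sample points rearranged into a second, different bitmap.

```python
def resample_bitmap(first_bitmap, region_h, region_w):
    """Divide the first bitmap into region_h x region_w regions, take one
    sample point per region (assumed: the top-left pixel), and rearrange
    the sample points row-major into a second bitmap."""
    rows = len(first_bitmap) // region_h
    cols = len(first_bitmap[0]) // region_w
    return [
        [first_bitmap[r * region_h][c * region_w] for c in range(cols)]
        for r in range(rows)
    ]

first = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]
# Four 2x2 regions yield a 2x2 second bitmap of sample points.
second = resample_bitmap(first, 2, 2)  # [[0, 2], [8, 10]]
```

With 2x2 regions the second bitmap differs from the first in both size and content, matching the claim that the two bitmaps are different.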

PROVIDING A TAKEABLE ITEM WITHIN A VIRTUAL CONFERENCING SYSTEM
20230108152 · 2023-04-06

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing a takeable item within a virtual conferencing system. The program and method provide for providing, in association with designing a room for virtual conferencing, an interface for designating an element within the room to be takeable during virtual conferencing; receiving, based on the interface, an indication of first user input designating the element to be takeable; providing a virtual conference between plural participants within the room, the room including the element; receiving an indication of second user input, by a first participant of the plural participants, to take the element; and associating, in response to receiving the indication of second user input, the element with the first participant.