Systems and methods for enabling precise object interaction within an augmented reality environment

The present disclosure provides systems and methods for displaying a real-world vehicle in an augmented reality environment. The system employs a user device camera to obtain image data of an environment that includes a real-world vehicle. The system analyzes the image data to identify the vehicle within the environment. A wireframe model of the vehicle is then generated and registered to the vehicle. The image data is displayed on the user device. In response to user input, the system may then attach a virtual vehicle accessory to the wireframe model. The accessory is then displayed on the user device display in an augmented reality environment such that the vehicle appears to seamlessly incorporate the accessory.
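
The claimed pipeline can be sketched in highly simplified form. Everything below is an illustrative assumption, not the patent's implementation: the wireframe is reduced to named anchor points, and registration is reduced to a flat translation pose.

```python
from dataclasses import dataclass

@dataclass
class Wireframe:
    # Named anchor points on the vehicle model, in vehicle-local coordinates.
    anchors: dict                       # name -> (x, y, z)
    pose: tuple = (0.0, 0.0, 0.0)       # translation registering the model to the imaged vehicle

    def world_anchor(self, name):
        ax, ay, az = self.anchors[name]
        px, py, pz = self.pose
        return (ax + px, ay + py, az + pz)

@dataclass
class Accessory:
    name: str
    anchor: str                         # wireframe anchor the accessory snaps to

def attach_accessory(wireframe, accessory):
    """Place the accessory at the registered anchor so it tracks the real vehicle."""
    return {"accessory": accessory.name,
            "position": wireframe.world_anchor(accessory.anchor)}

wf = Wireframe(anchors={"roof": (0.0, 1.5, 0.0)}, pose=(2.0, 0.0, 5.0))
rack = Accessory(name="roof_rack", anchor="roof")
placed = attach_accessory(wf, rack)  # position follows the registered wireframe
```

Because the accessory is attached to the wireframe rather than to raw pixels, re-registering the wireframe as the camera moves carries the accessory along with the vehicle.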

Context projection and wire editing in augmented media

Embodiments use design context projection and wire editing in augmented media. Responsive to receiving an indication of an error in a design for an integrated circuit (IC), a localized area encompassing the error is extracted from the design. Augmented reality media of the localized area of the design is generated with a guide pattern, the localized area including objects. The augmented reality media of the localized area is caused to be presented in a three-dimensional (3D) projection on an augmented reality device for a user. Responsive to receiving at least one modification to the augmented media in the 3D projection, the design for the IC is updated with the at least one modification.
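
The extract-edit-merge loop could be sketched as follows; the object schema, the square extraction window, and the id-keyed merge are all illustrative assumptions rather than the patent's data model.

```python
def extract_localized_area(objects, error_xy, radius):
    """Return the design objects within `radius` of the reported error location."""
    ex, ey = error_xy
    return [o for o in objects
            if abs(o["x"] - ex) <= radius and abs(o["y"] - ey) <= radius]

def apply_modifications(design, mods):
    """Merge AR-side wire edits back into the full design, matched by object id."""
    by_id = {o["id"]: o for o in design}
    for mod in mods:
        by_id[mod["id"]].update(mod["changes"])
    return design

design = [{"id": "wire1", "x": 10, "y": 10, "layer": 2},
          {"id": "wire2", "x": 50, "y": 50, "layer": 2}]
local = extract_localized_area(design, error_xy=(11, 9), radius=5)
updated = apply_modifications(design, [{"id": "wire1", "changes": {"layer": 3}}])
```

The key point is that only the localized area round-trips through the AR device, while the merge step writes the user's 3D edits back into the full IC design.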

Spatial localization design service

A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
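
The experiment generator/runner pairing could be sketched as a sweep over candidate configurations; the config fields, the scoring metric, and the stand-in localizer are all assumptions for illustration, not the service's API.

```python
import itertools

def generate_experiments(sensors, motions, environments):
    """Experiment generator: Cartesian product of candidate configs and scenarios."""
    for sensor, motion, env in itertools.product(sensors, motions, environments):
        yield {"sensor": sensor, "motion": motion, "environment": env}

def run_experiment(exp, localize):
    """Experiment runner: score one candidate; lower simulated error is better."""
    return {"config": exp, "error": localize(exp)}

def fake_localizer(exp):
    # Stand-in for a real simulated run; pretend IMU-equipped rigs localize better.
    return 0.1 if exp["sensor"] == "camera+imu" else 0.5

results = [run_experiment(e, fake_localizer)
           for e in generate_experiments(["camera", "camera+imu"],
                                         ["orbit"], ["warehouse"])]
best = min(results, key=lambda r: r["error"])
```

Running the sweep in a virtual environment is what lets hardware candidates be ranked before any physical prototype exists.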

Self-expanding augmented reality-based service instructions library

Augmented reality (AR) servicing guidance uses a mobile device (30, 90) with a display and a camera. Computer vision (CV) processing (102) such as Simultaneous Localization and Mapping (SLAM) is performed to align AR content (80) with a live video feed (96) captured by the camera. The aligned AR content is displayed on the display of the mobile device. In one example, the mobile device is a head-up display (HUD) (90) with a transparent display (92, 94) and camera (20). In another example, the mobile device is a cellphone or tablet computer (30) with a front display (34) and a rear-facing camera (36). The AR content (80) is generated by CV processing (54) to align (70) AR content with recorded video (40) of a service call. The aligning (72) includes aligning locations of interest (LOIs) identified in the recorded video of the service call by a user input.
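
Re-projecting an LOI from the recorded service call into the live feed could be sketched with a drastically simplified pose model; real SLAM poses are full 6-DoF transforms, whereas the (tx, ty, scale) similarity transforms below are an illustrative assumption.

```python
def align_loi(loi_recorded, recorded_pose, live_pose):
    """Re-project a location of interest (LOI) from the recorded-video frame
    into the live camera frame via a shared world coordinate system.
    Poses are (tx, ty, scale) similarity transforms, a toy stand-in for SLAM poses."""
    # Recorded frame -> shared world coordinates.
    wx = loi_recorded[0] * recorded_pose[2] + recorded_pose[0]
    wy = loi_recorded[1] * recorded_pose[2] + recorded_pose[1]
    # World coordinates -> live frame.
    lx = (wx - live_pose[0]) / live_pose[2]
    ly = (wy - live_pose[1]) / live_pose[2]
    return (lx, ly)

live_xy = align_loi((5, 5), recorded_pose=(10, 0, 2), live_pose=(0, 0, 1))
```

Because both videos are registered to the same world frame, an LOI marked once in the recording stays anchored to the physical equipment as the live camera moves.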

Computer vision and speech algorithm design service

A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.

Method and system for visualization of 3D electronic device
11893330 · 2024-02-06

Provided are a method and system for visualization of a 3D electronic device. The method includes: simulating a structural and/or electrical characteristic of the electronic device to obtain first visualization data, the electronic device including a semiconductor device and/or an integrated circuit, and the first visualization data including 3D grid position information of the electronic device and/or a physical quantity at a grid point; converting the 3D grid position information and/or the physical quantity at the grid point in the first visualization data into second visualization data suitable for virtual 3D display according to a data type; and rendering the second visualization data in a virtual space to display a structure and/or physical quantity of the electronic device in the virtual space.
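
The conversion step could be sketched as mapping raw simulator output to renderable per-vertex data; the dict schema and min-max normalization are illustrative assumptions, with the normalized scalar standing in for a colormap lookup.

```python
def to_render_data(grid_points, values):
    """Convert simulator grid output (first visualization data) into
    vertices plus normalized per-vertex scalars (second visualization data)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid division by zero on constant fields
    return [{"vertex": p, "color": (v - lo) / span}   # 0..1, mapped to a colormap later
            for p, v in zip(grid_points, values)]

render_data = to_render_data(grid_points=[(0, 0, 0), (1, 0, 0)],
                             values=[300.0, 400.0])   # e.g. temperature at grid points
```

Separating simulation output from render-ready data is what lets the same physical quantities be re-mapped (different colormaps, scales, or slices) without re-running the simulation.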

Managing a smart city

A smart city management system may enable creating a digital twin of the smart city based on mapping lidar data for the smart city and radio frequency data for the smart city; determining placement of a set of network devices in the smart city based on the created digital twin; and providing a visualization of the determined placement of the set of network devices.
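
The placement step could be sketched as a greedy coverage heuristic over points extracted from the digital twin; the 2D points, circular coverage model, and greedy strategy are all illustrative assumptions, not the claimed method.

```python
def place_devices(demand_points, candidate_sites, radius, count):
    """Greedy set-cover: repeatedly pick the site covering the most
    still-uncovered demand points from the digital twin."""
    uncovered = set(demand_points)
    chosen = []
    for _ in range(count):
        def covered(site):
            return {p for p in uncovered
                    if (p[0] - site[0]) ** 2 + (p[1] - site[1]) ** 2 <= radius ** 2}
        best = max(candidate_sites, key=lambda s: len(covered(s)))
        chosen.append(best)
        uncovered -= covered(best)
    return chosen, uncovered

chosen, uncovered = place_devices(demand_points={(0, 0), (1, 0), (10, 10)},
                                  candidate_sites=[(0, 0), (10, 10)],
                                  radius=2, count=2)
```

In practice the lidar geometry and RF propagation data in the twin would replace the plain Euclidean coverage test, but the optimize-then-visualize flow is the same.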

Systems and methods for optimal color calibration for LED volume stages
11979692 · 2024-05-07

Disclosed are computer-implemented systems and methods for optimizing color rendition in an LED volume virtual production stage. For example, the described systems optimize or correct color rendition by applying a series of color correction matrices to color pixel values within the virtual production stage and to final captured imagery filmed within the virtual production stage. The described systems generate the color correction matrices from four calibration images taken within the virtual production stage. Various other methods, systems, and computer-readable media are also disclosed.
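
Applying a series of 3×3 color correction matrices (CCMs) to a pixel is just repeated matrix-vector multiplication; the sketch below assumes linear RGB values and leaves out how the matrices are derived from the four calibration images.

```python
def apply_ccm(pixel, ccm):
    """Apply one 3x3 color correction matrix to an RGB pixel (linear values)."""
    return tuple(sum(ccm[r][c] * pixel[c] for c in range(3)) for r in range(3))

def chain_ccms(pixel, ccms):
    """The described systems apply a series of matrices; composition is
    just applying each matrix in turn."""
    for m in ccms:
        pixel = apply_ccm(pixel, m)
    return pixel

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
swap_rg  = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]  # toy matrix: swap R and G
corrected = chain_ccms((0.5, 0.2, 0.1), [identity, swap_rg])
```

Because matrix composition is associative, the series of per-stage corrections could equally be pre-multiplied into a single matrix and applied once per pixel.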

RF scene generation simulation with external maritime surface
12007502 · 2024-06-11

Embodiments of a system for simulating a radio frequency (RF) scene associated with a moving maritime surface are generally described herein. An RF scene is generated using an RF scene generation model, and a moving maritime surface is generated using a maritime surface model. The RF scene is integrated with the moving maritime surface model. The RF scene generation model is configured to apply a radar model to generate and update the RF scene based on simulated radar returns at a radar pulse repetition frequency (PRF). The maritime surface model is configured to update the moving maritime surface at a maritime surface update rate of once every subdwell, access previous and current maritime surfaces, and interpolate surface facet properties to pulse times of the radar model.
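
Because the radar pulses arrive far faster than the surface updates, facet properties must be interpolated between the previous and current surfaces; a minimal linear-interpolation sketch follows, with the scalar per-facet property (e.g. height) an illustrative simplification.

```python
def facets_at_pulse(prev_facets, curr_facets, t_prev, t_curr, t_pulse):
    """Linearly interpolate per-facet surface properties to a radar pulse time
    that falls between two maritime-surface updates (one update per subdwell)."""
    a = (t_pulse - t_prev) / (t_curr - t_prev)   # 0 at previous update, 1 at current
    return [p + a * (c - p) for p, c in zip(prev_facets, curr_facets)]

# Surface updated at t=0.0 and t=1.0 (one subdwell); radar pulses in between.
pulse_facets = facets_at_pulse(prev_facets=[0.0, 1.0],
                               curr_facets=[2.0, 1.0],
                               t_prev=0.0, t_curr=1.0, t_pulse=0.5)
```

Interpolating rather than re-generating the surface at the PRF keeps the expensive maritime surface model running at the slower subdwell rate while the radar returns still see a smoothly moving surface.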

Information processing apparatus and information processing method

An information processing apparatus according to the present disclosure includes: an acquisition unit that acquires a character string whose part of speech is to be estimated; and a generation unit that generates part-of-speech estimation information for estimating a part of speech of the character string based on a byte sequence obtained by converting the character string.
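
The byte-sequence conversion the generation unit relies on could be sketched as follows; the UTF-8 encoding choice and the toy lookup estimator are illustrative assumptions, standing in for whatever model consumes the part-of-speech estimation information.

```python
def byte_features(text):
    """Convert a character string into its UTF-8 byte sequence, the byte-level
    representation a part-of-speech estimator would consume."""
    return list(text.encode("utf-8"))

def estimate_pos(text, model):
    """Toy stand-in for the generation unit: look up a part of speech by the
    string's byte sequence, defaulting to UNK for unseen inputs."""
    return model.get(tuple(byte_features(text)), "UNK")

# Hypothetical model: maps byte sequences to part-of-speech tags.
model = {tuple("run".encode("utf-8")): "VERB"}
tag = estimate_pos("run", model)
```

Working at the byte level sidesteps vocabulary and script boundaries: any character string, in any language, reduces to the same small alphabet of 256 byte values.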