G06T3/20

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: obtain a video and an instruction to generate a still image from the video, the video being a video in which a work target is photographed, the work target being a target on which to work; generate the still image in response to the instruction, the still image being cut from the video including the work target; specify the work target in the video, position information, and a superimposition area by using the still image, the position information describing a position of the work target, the superimposition area being an area in which an image is superimposed, the image being obtained by using the position of the work target as a reference; receive instruction information indicating an instruction for work on the work target; and superimpose and display an instruction image in the superimposition area in the video, the instruction image being an image according to the instruction information.
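The superimposition step described above (placing an instruction image in an area defined relative to the detected work target's position) can be sketched as follows. This is a minimal illustration with NumPy, not the patent's implementation; the function name, the fixed-offset superimposition area, and the grayscale frames are all assumptions.

```python
import numpy as np

def superimpose_instruction(frame: np.ndarray,
                            target_pos: tuple,
                            offset: tuple,
                            instruction_img: np.ndarray) -> np.ndarray:
    """Copy instruction_img into frame at target_pos + offset, i.e.
    the superimposition area is located relative to the work target."""
    y = target_pos[0] + offset[0]
    x = target_pos[1] + offset[1]
    h, w = instruction_img.shape[:2]
    out = frame.copy()
    out[y:y + h, x:x + w] = instruction_img
    return out

# A 100x100 grayscale frame with a work target detected at (40, 40);
# the instruction image is placed 10 px below and right of the target.
frame = np.zeros((100, 100), dtype=np.uint8)
marker = np.full((5, 5), 255, dtype=np.uint8)
result = superimpose_instruction(frame, (40, 40), (10, 10), marker)
```

Because the area is expressed as an offset from the target, the instruction image tracks the work target as its detected position changes from frame to frame.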

Virtual reality training, simulation, and collaboration in a robotic surgical system

A virtual reality system providing a virtual robotic surgical environment, and methods for using the virtual reality system, are described herein. Within the virtual reality system, various user modes enable different kinds of interactions between a user and the virtual robotic surgical environment. For example, one variation of a method for facilitating navigation of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point, displaying a first window view of the virtual robotic surgical environment from a second vantage point, and displaying a second window view of the virtual robotic surgical environment from a third vantage point. Additionally, in response to a user input associating the first and second window views, a trajectory between the second and third vantage points can be generated, sequentially linking the first and second window views.
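The trajectory generation described above (a path linking the second and third vantage points) could, in the simplest case, be a linear interpolation between the two camera positions. The sketch below assumes straight-line interpolation over 3D positions; the abstract does not specify the interpolation scheme, so this is purely illustrative.

```python
import numpy as np

def vantage_trajectory(p_start, p_end, steps=10):
    """Interpolate camera positions between two vantage points,
    yielding a path that sequentially links the two window views."""
    p_start = np.asarray(p_start, dtype=float)
    p_end = np.asarray(p_end, dtype=float)
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * p_start + t * p_end

# Path from the second vantage point to the third, in 5 steps.
path = vantage_trajectory([0, 0, 0], [2, 4, 6], steps=5)
```

A real system would likely also interpolate view orientation (e.g. with quaternion slerp) and ease the motion, but position interpolation is the core of linking two vantage points.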

ANIMATION GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
20230043150 · 2023-02-09

Provided are a method for generating an animation, an apparatus, an electronic device, and a computer-readable storage medium. The method for generating an animation comprises: determining a background image and a first foreground image of an original image (S110); respectively rotating, scaling, and translating the first foreground image and a first 2D sticker image to obtain a second foreground image and a second 2D sticker image, wherein the first 2D sticker image is generated in advance on the basis of a predetermined coverage manner and according to the original image (S120); mixing the second foreground image with the background image to obtain a first mixed image (S130); and mixing the first mixed image with the second 2D sticker image to generate an animation of the original image (S140).
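The two mixing steps (S130 and S140) amount to layered compositing: the transformed foreground is blended over the background, and the transformed sticker is blended over that result. A minimal sketch with NumPy, assuming standard per-pixel alpha ("over") blending, which the abstract does not specify:

```python
import numpy as np

def alpha_mix(top, top_alpha, bottom):
    """Blend a (transformed) layer over a base image: the same
    'over' operation serves both mixing steps, S130 and S140."""
    return top_alpha * top + (1.0 - top_alpha) * bottom

# Toy single-channel images in [0, 1].
background = np.full((4, 4), 0.2)
foreground = np.full((4, 4), 1.0)     # second foreground image
fg_alpha = np.full((4, 4), 0.5)
sticker = np.full((4, 4), 0.0)        # second 2D sticker image
sticker_alpha = np.full((4, 4), 0.25)

mixed = alpha_mix(foreground, fg_alpha, background)   # S130
animated = alpha_mix(sticker, sticker_alpha, mixed)   # S140
```

Repeating S120-S140 with time-varying rotation, scale, and translation parameters produces the successive frames of the animation.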

System and method for orientating capture of ultrasound images

A downloadable navigator for a mobile ultrasound unit having an ultrasound probe, implemented on a portable computing device. The navigator includes a trained orientation neural network to receive a non-canonical image of a body part from the mobile ultrasound unit and to generate a transformation associated with the non-canonical image, the transformation transforming from a position and rotation associated with a canonical image to a position and rotation associated with the non-canonical image; and a result converter to convert the transformation into orientation instructions for a user of the probe and to provide and display the orientation instructions to the user to change the position and rotation of the probe.
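The result converter described above turns the network's transformation into user-facing guidance. The sketch below assumes the transformation has been reduced to an in-plane translation and a rotation angle, and that guidance is issued as plain text; the function name, units, and instruction wording are all hypothetical. Since the transformation maps the canonical pose to the current (non-canonical) pose, the instructions direct the user in the opposite sense, back toward the canonical pose.

```python
def orientation_instructions(translation, rotation_deg):
    """Convert a canonical-to-current transformation into probe
    guidance strings that steer the user back to the canonical pose."""
    dx, dy = translation
    instructions = []
    if abs(dx) > 1e-6:
        instructions.append(
            f"slide {'right' if dx < 0 else 'left'} {abs(dx):.1f} cm")
    if abs(dy) > 1e-6:
        instructions.append(
            f"slide {'up' if dy < 0 else 'down'} {abs(dy):.1f} cm")
    if abs(rotation_deg) > 1e-6:
        instructions.append(
            f"rotate {'clockwise' if rotation_deg < 0 else 'counter-clockwise'}"
            f" {abs(rotation_deg):.0f} deg")
    return instructions

# Probe is 2 cm right of and rotated -15 deg from the canonical pose.
steps = orientation_instructions((2.0, 0.0), -15.0)
```

In the described system these strings (or equivalent graphics) would be displayed to the user, and the loop repeats until the captured image matches the canonical view.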

DISTORTION RECTIFICATION METHOD AND TERMINAL

Disclosed is a distortion rectification method, comprising: taking a wide-angle photograph using a camera of a terminal; determining distortion regions and non-distortion regions in the wide-angle photograph; obtaining a target distortion region selected by a user; dividing the target distortion region into M grid regions of a first pre-set size, wherein M is an integer greater than or equal to one; and respectively performing distortion rectification on the M grid regions of the first pre-set size. Also disclosed is a terminal.
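The grid-division step described above can be sketched as follows: the user-selected target distortion region is tiled into cells of the first pre-set size, yielding the M grid regions that are then rectified individually. This is a minimal illustration; the rectangle representation (x, y, w, h) and the handling of partial cells at the region's edge are assumptions.

```python
def grid_cells(region, cell_size):
    """Split a target distortion region (x, y, w, h) into grid cells
    of the given size; M is the number of cells returned. Cells at
    the right/bottom edge are clipped to stay inside the region."""
    x0, y0, w, h = region
    cells = []
    for y in range(y0, y0 + h, cell_size):
        for x in range(x0, x0 + w, cell_size):
            cells.append((x, y,
                          min(cell_size, x0 + w - x),
                          min(cell_size, y0 + h - y)))
    return cells

# A 100x60 target region with a pre-set cell size of 50 gives M = 4
# grid regions, each rectified independently afterwards.
cells = grid_cells((0, 0, 100, 60), 50)
```

Rectifying per cell lets the method apply a different local correction to each part of the wide-angle photograph, rather than one global warp.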
