Patent classifications
G06T3/20
Vehicle interaction system as well as corresponding vehicle and method
According to an aspect of the present disclosure, an interaction system for a first vehicle is provided, which comprises a processor and a memory storing processor-executable instructions that, when executed by the processor, cause the processor to perform steps comprising: receiving a first input from the first vehicle and displaying a first avatar on a display; and receiving a second input from a second vehicle and displaying a second avatar on the display, wherein the first input and the second input are updated in real time, and the first avatar and the second avatar dynamically change accordingly.
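As a minimal sketch of the abstract above (the class and method names are illustrative assumptions, not taken from the claims), the two real-time input streams and their dynamically changing avatars might be modeled as:

```python
from dataclasses import dataclass, field


@dataclass
class Avatar:
    # Hypothetical avatar state driven by one vehicle's input stream.
    state: dict = field(default_factory=dict)

    def update(self, vehicle_input: dict) -> None:
        # The avatar changes dynamically as its input is updated in real time.
        self.state.update(vehicle_input)


class InteractionDisplay:
    # Hypothetical display holding one avatar per vehicle.
    def __init__(self) -> None:
        self.first, self.second = Avatar(), Avatar()

    def on_first_input(self, data: dict) -> None:
        self.first.update(data)

    def on_second_input(self, data: dict) -> None:
        self.second.update(data)
```

In this sketch, each incoming update mutates only its own avatar's state, so the two avatars track their respective vehicles independently.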
Method and device for correcting vehicle view cameras
A method for correcting a camera by using a plurality of pattern members placed on the ground around a vehicle comprises: receiving pattern information of the plurality of pattern members by using a plurality of cameras disposed around the circumference of the vehicle while it is being driven; calculating a first parameter on the basis of the pattern information; calculating trajectory information of the vehicle by using the pattern information; and calculating a second parameter by correcting the first parameter on the basis of the trajectory information of the vehicle.
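The two-stage estimate described above can be sketched as follows. The affine camera model, the yaw-error correction, and all function names are illustrative assumptions for the sketch, not the patent's actual parameterization:

```python
import numpy as np


def first_parameter(img_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    # Stage 1 (hypothetical): fit a 2x3 affine map from ground-pattern
    # coordinates to image coordinates by least squares, standing in for the
    # camera parameter estimated from the detected pattern information.
    A = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    P, *_ = np.linalg.lstsq(A, img_pts, rcond=None)
    return P.T  # 2x3 affine parameter


def correct_with_trajectory(P: np.ndarray, yaw_error: float) -> np.ndarray:
    # Stage 2 (hypothetical): correct the first parameter using a yaw error
    # recovered from the vehicle trajectory observed in the same patterns.
    c, s = np.cos(-yaw_error), np.sin(-yaw_error)
    R = np.array([[c, -s], [s, c]])
    P2 = P.copy()
    P2[:, :2] = R @ P[:, :2]  # rotate only the linear part of the map
    return P2
```

With a zero yaw error the second parameter equals the first, which matches the abstract's framing of the second stage as a correction of the first.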
Intuitive 3D transformations for 2D graphics
A graphics design system provides intuitive 3D transformations for 2D objects. A user interface element is presented on a 2D object or group of 2D objects. The user interface element comprises a combination of components for applying different 3D transformations, including at least one rotation component for rotating a 2D object or group of 2D objects around an axis and at least one translation component for translating the 2D object or group of 2D objects along at least one axis. 3D transformations are non-destructive and performed relative to axes local to the 2D object or group of 2D objects. When a 2D object or group of 2D objects is rotated around an axis, the other axes are rotated as well. As such, subsequent rotations and translations are performed based on the rotated axes. Additionally, editing actions associated with rotated 2D object(s) are performed in the rotated x-y plane of the rotated 2D object(s).
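The local-axis behavior described above can be sketched with standard rotation-matrix composition; the class and method names are illustrative, not the patent's API. Right-multiplying the orientation matrix composes each rotation about the object's current local axis, so subsequent rotations and translations see the already-rotated frame:

```python
import numpy as np


def rot(axis: int, angle: float) -> np.ndarray:
    # Rotation matrix about a principal axis (0 = x, 1 = y, 2 = z).
    c, s = np.cos(angle), np.sin(angle)
    m = {0: [[1, 0, 0], [0, c, -s], [0, s, c]],
         1: [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         2: [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]
    return np.array(m)


class Object2D:
    # Orientation of the object's local frame; the columns of R are the
    # local x, y, z axes expressed in world coordinates.
    def __init__(self) -> None:
        self.R = np.eye(3)

    def rotate_local(self, axis: int, angle: float) -> None:
        # Right-multiplication rotates about the *current* local axis, so a
        # later rotation is performed relative to the rotated axes.
        self.R = self.R @ rot(axis, angle)

    def translate_local(self, axis: int, dist: float) -> np.ndarray:
        # Translation along a local axis, returned as a world-space vector.
        return self.R[:, axis] * dist
```

After a 90-degree rotation about the local y axis, a translation along the local x axis moves the object along the world's negative z axis, which is exactly the "subsequent translations use the rotated axes" behavior the abstract describes.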
Displaying a window in an augmented reality view
For displaying a window in an augmented reality view, a processor detects a new augmented reality placetime that includes a new augmented reality position and/or a new augmented reality time of an augmented reality device. The processor calculates new window characteristics for a window at the new augmented reality placetime based on previous window characteristics. The processor further displays the window with the new window characteristics.
GENERATIVE NEURAL NETWORKS WITH REDUCED ALIASING
Systems and methods are disclosed that improve the output quality of any neural network, particularly an image generative neural network. In the real world, details of different scales tend to transform hierarchically. For example, moving a person's head causes the nose to move, which in turn moves the skin pores on the nose. Conventional generative neural networks do not synthesize images in a natural hierarchical manner: the coarse features seem to mainly control the presence of finer features, but not the precise positions of the finer features. Instead, much of the fine detail appears to be fixed to pixel coordinates, which is a manifestation of aliasing. Aliasing breaks the illusion of a solid and coherent object moving in space. A generative neural network with reduced aliasing provides an architecture that exhibits a more natural transformation hierarchy, where the exact sub-pixel position of each feature is inherited from underlying coarse features.
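A one-dimensional toy illustration of the aliasing point (not the patent's network architecture): a band-limited signal can be translated by arbitrary sub-pixel amounts via the Fourier shift theorem, so its features are carried continuously rather than being fixed to sample coordinates:

```python
import numpy as np


def fourier_shift(x: np.ndarray, delta: float) -> np.ndarray:
    # Translate a band-limited 1-D signal by `delta` samples (including
    # sub-pixel fractions) using the Fourier shift theorem: a delay in the
    # signal domain is a linear phase ramp in the frequency domain.
    n = len(x)
    freqs = np.fft.fftfreq(n)  # cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * freqs * delta)))
```

For integer shifts this reduces to an exact circular shift; for fractional shifts it places features at true sub-pixel positions, which is the property a pixel-grid-bound representation lacks.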
CO-REGISTRATION OF INTRAVASCULAR DATA AND MULTI-SEGMENT VASCULATURE, AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS
Disclosed is a medical imaging system including a processor circuit configured for communication with an x-ray imaging device movable relative to a patient and an intravascular catheter or guidewire sized and shaped for positioning within a blood vessel of the patient. The processor circuit is configured to receive a first angiographic image of a first length of the blood vessel and a second angiographic image of a second length of the blood vessel, wherein the first angiographic image is obtained at a first position and the second angiographic image is obtained at a second position. The processor circuit is further configured to generate a roadmap image of a combined length of the blood vessel by combining the first angiographic image and the second angiographic image, to receive intravascular data associated with the blood vessel, to co-register the intravascular data to corresponding locations in the roadmap image, and to output the roadmap image and a graphical representation of the intravascular data at the corresponding locations in the roadmap image.
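The co-registration step can be sketched as follows, assuming (purely for illustration) a uniform-speed pullback whose samples are distributed along the roadmap centerline by arc length; the function name and the nearest-point assignment are assumptions, not the patent's method:

```python
import numpy as np


def coregister(centerline_xy: np.ndarray, pullback_values: list) -> list:
    # Hypothetical co-registration: spread the intravascular pullback samples
    # uniformly along the roadmap centerline's arc length and attach each
    # sample to the nearest centerline point.
    seg = np.linalg.norm(np.diff(centerline_xy, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    targets = np.linspace(0.0, s[-1], len(pullback_values))
    idx = [int(np.argmin(np.abs(s - t))) for t in targets]
    # (centerline index, intravascular value) pairs for display on the roadmap.
    return list(zip(idx, pullback_values))
```

Each returned pair associates one intravascular measurement with a location on the combined-length roadmap, which is what the graphical overlay in the abstract would render.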