A63F13/812

Method and apparatus for predicting the likely success of golf swings
11602667 · 2023-03-14

A method of predicting the likelihood that a post-tee-off golf swing, or consecutive swings, will result in a ball being sunk in a hole. The method uses communication-equipped cameras or communication-equipped laser rangefinders at known locations to determine accurate ball lie information. This location information is transmitted in real time to a processing facility linked to a database of historical play information, incorporating at least ball position and golf course information, in order to calculate the odds of success of the upcoming swing and/or subsequent swings.
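
The odds calculation described above can be sketched as a lookup over historical play data keyed by a coarse ball lie. This is an illustrative sketch only; the names (`grid_key`, `HistoricalOdds`) and the distance-band bucketing scheme are assumptions, not details from the patent.

```python
from collections import defaultdict

def grid_key(distance_to_hole_m, lie_type, cell_m=5.0):
    """Bucket a ball lie into a coarse key: (distance band, lie type)."""
    return (int(distance_to_hole_m // cell_m), lie_type)

class HistoricalOdds:
    """Accumulates historical outcomes and reports odds for a given lie."""

    def __init__(self):
        # key -> [attempts, holed-on-next-swing]
        self._stats = defaultdict(lambda: [0, 0])

    def record(self, distance_m, lie_type, holed):
        s = self._stats[grid_key(distance_m, lie_type)]
        s[0] += 1
        s[1] += int(holed)

    def odds_of_success(self, distance_m, lie_type):
        attempts, holed = self._stats[grid_key(distance_m, lie_type)]
        if attempts == 0:
            return None  # no historical data for this lie
        return holed / attempts

odds = HistoricalOdds()
for _ in range(8):
    odds.record(3.0, "green", holed=True)
for _ in range(2):
    odds.record(3.0, "green", holed=False)
print(odds.odds_of_success(2.5, "green"))  # same 0-5 m band -> 0.8
```

A real system would key on the full course layout and player history; the point here is only that measured lie information indexes into aggregated historical outcomes.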

System and method for augmented and virtual reality
11601484 · 2023-03-07

One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data. The system comprises a computer network of one or more computing devices, each comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data. At least a first portion of the virtual world data originates from a first user virtual world local to a first user, and the computer network is operable to transmit the first portion to a user device for presentation to a second user, so that the second user may experience the first portion from the second user's location; aspects of the first user virtual world are thereby effectively passed to the second user.
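
The data flow in this abstract, a portion of one user's local virtual world passing through a network node to a second user's device, can be sketched minimally as below. All class and method names (`WorldPortion`, `ComputeNode`, `UserDevice`) are illustrative assumptions, not the patent's own terms.

```python
from dataclasses import dataclass, field

@dataclass
class WorldPortion:
    """A slice of virtual world data originating with one user."""
    origin_user: str
    objects: list

class ComputeNode:
    """Stands in for the 'one or more computing devices' of the network."""

    def process(self, portion: WorldPortion) -> WorldPortion:
        # A real node would filter or transform the data before relaying it.
        return portion

@dataclass
class UserDevice:
    user: str
    presented: list = field(default_factory=list)

    def present(self, portion: WorldPortion):
        # The second user experiences aspects of the first user's world locally.
        self.presented.append(portion)

node = ComputeNode()
first_user_portion = WorldPortion("alice", ["virtual_tree", "avatar_alice"])
second_device = UserDevice("bob")
second_device.present(node.process(first_user_portion))
print(second_device.presented[0].origin_user)  # alice
```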

Mixed reality competitions involving real and virtual participants

The disclosure provides technology for generating a mixed reality computer simulated competition between real and virtual participants. The mixed reality simulation may merge aspects of the real world with aspects of a virtual world. The real participant may be any human being that wants to compete in-person with a virtual participant and the virtual participant may be a deceased person, a living person, a famous person, a friend, or other participant. The real participant may perform actions in the real world that are captured in video content and the technology may augment the video content to include simulated actions of the virtual participant. The simulation of the virtual participant may be based on an actual person (e.g., a body double), computer models (e.g., graphical model, behavioral model, and kinematic model), or a combination thereof. The technology may also determine scores for the participants and determine the winner of the competition.

OPERATION INPUT PROGRAM AND OPERATION INPUTTING METHOD
20220323860 · 2022-10-13

To enable the acceptance of user operation inputs that comply with the user's intent.

In a shot operation of a golf game, a portable digital assistant designates the user's first and last touch points as a first operation point and a third operation point, respectively. The assistant then identifies a second operation point by tracing the touch points in reverse, from the last touch point back through the order in which each point was detected, to find the point at which movement in the Y direction reversed or stopped. In this way, the assistant recognizes the position where the user last reversed the Y-direction movement, or stopped his or her finger, as the deliberately operated second operation point. As a result, a shot operation that matches the user's intent can be performed.
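
The reverse trace described above can be sketched as follows: the first and last points are operation points 1 and 3, and operation point 2 is found by walking backward from the end to the most recent place where movement along Y reversed or stopped. The function name and the `(x, y)` tuple format are assumptions for illustration.

```python
def operation_points(touch_points):
    """touch_points: list of (x, y) in detection order; needs >= 3 points."""
    first, last = touch_points[0], touch_points[-1]
    second = first  # fallback if no reversal or stop is found
    # dy of the final segment gives the Y direction of the release motion.
    trailing_dy = touch_points[-1][1] - touch_points[-2][1]
    for i in range(len(touch_points) - 2, 0, -1):
        dy = touch_points[i][1] - touch_points[i - 1][1]
        # A stop (dy == 0) or a sign change versus the trailing motion marks
        # the last place the user reversed or paused along Y.
        if dy == 0 or dy * trailing_dy < 0:
            second = touch_points[i]
            break
    return first, second, last

# Drag down, pull back up slightly (the "backswing"), then down to release:
points = [(0, 0), (0, 10), (0, 20), (0, 15), (0, 25), (0, 40)]
print(operation_points(points))  # ((0, 0), (0, 15), (0, 40))
```

Tracing backward rather than forward is what lets the device pick the *last* reversal, the one the user was conscious of, rather than any earlier jitter in the drag.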

Devices, methods, and graphical user interfaces for depth-based annotation

While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
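The flow above, pause on a still image, receive an annotation on a portion of the physical environment, then show that annotation wherever the same portion reappears, can be sketched with a simple region-id anchor. The anchor-by-region-id scheme is an assumption; a real system would track the physical region in 3D across frames.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    position: float      # time position in the video
    visible_regions: set # ids of physical-environment regions captured here

class AnnotatedVideo:
    def __init__(self, frames):
        self.frames = frames
        self.annotations = {}  # region id -> annotation text

    def annotate_at(self, position, region_id, text):
        # Corresponds to pausing at the first position and receiving the
        # annotation on a still image of that frame.
        self.annotations[region_id] = text

    def annotations_for(self, position):
        # Only annotations whose anchored region is captured in this frame
        # are displayed.
        frame = next(f for f in self.frames if f.position == position)
        return {r: t for r, t in self.annotations.items()
                if r in frame.visible_regions}

video = AnnotatedVideo([
    Frame(0.0, {"table"}),
    Frame(5.0, {"table", "chair"}),
    Frame(9.0, {"chair"}),
])
video.annotate_at(0.0, "table", "wobbly leg")
print(video.annotations_for(5.0))  # {'table': 'wobbly leg'}
print(video.annotations_for(9.0))  # {} (table not captured here)
```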

Augmented-reality game overlays in video communications

In one embodiment, a method includes, by a client system of a first user, receiving a request from a second user to initiate a first game within a first layer of a communication interface, wherein the communication interface includes several layers, the first layer including a video communication of the second user and a second layer including a thumbnail view of a video communication of the first user; generating a first game container in a third layer of the communication interface, wherein the third layer contains the first game in an augmented reality overlay; expanding the second layer into a full-screen view within the communication interface; and displaying the third layer as the augmented reality overlay over the second layer, wherein the first layer is closed responsive to the overlaying of the augmented reality overlay onto the second layer.
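
The layer transitions in this claim can be sketched with a minimal state model. The layer names and the dict-based representation are illustrative assumptions; the patent describes a layered communication interface, not this API.

```python
class CommunicationInterface:
    def __init__(self):
        # layer name -> description of its content
        self.layers = {
            "first": "video of second user",
            "second": "thumbnail video of first user",
        }
        self.fullscreen = None

    def start_game_overlay(self, game):
        # Generate a game container in a third layer (the AR overlay)...
        self.layers["third"] = f"AR overlay: {game}"
        # ...expand the second layer to a full-screen view...
        self.fullscreen = "second"
        # ...and close the first layer once the overlay covers the second.
        del self.layers["first"]

ui = CommunicationInterface()
ui.start_game_overlay("tic-tac-toe")
print(sorted(ui.layers))  # ['second', 'third']
print(ui.fullscreen)      # second
```

The design point is that the remote video (first layer) is replaced, not hidden: once the local video goes full screen under the AR overlay, the original remote-video layer is torn down.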

INFORMATION PROCESSING DEVICE AND IMAGE SHARING METHOD

A state information acquisition section 164c acquires, from a management server, information indicating the states of a plurality of members. A room image generation section 124c generates, on the basis of that state information, a member display field in which information regarding members transmitting an image and information regarding members transmitting no image are included in different regions. A reception section 104c receives an operation of selecting a member who is transmitting an image. A request transmission section 180c sends a watching request, including information identifying the selected member, to the management server or to a distribution server that distributes an image.
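
The member-display and request steps above can be sketched as below: members are split into transmitting and non-transmitting groups for display in separate regions, and a watching request identifies a selected transmitting member. Field names and the request shape are assumptions based on the abstract.

```python
def build_member_display(member_states):
    """member_states: list of (member_name, is_transmitting_image) tuples."""
    transmitting = [m for m, tx in member_states if tx]
    idle = [m for m, tx in member_states if not tx]
    # The two groups land in different regions of the member display field.
    return {"transmitting": transmitting, "not_transmitting": idle}

def watching_request(selected_member, display):
    # Only a member who is transmitting an image can be watched.
    if selected_member not in display["transmitting"]:
        raise ValueError("member is not transmitting an image")
    return {"type": "watch", "member": selected_member}

display = build_member_display([("ken", True), ("yui", False), ("aya", True)])
print(display["transmitting"])           # ['ken', 'aya']
print(watching_request("ken", display))  # {'type': 'watch', 'member': 'ken'}
```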