G09G5/37

Information processing system, information processing method, and computer program

A live streaming system 10 identifies, with regard to a plurality of users each wearing an HMD 100 to watch the same content, the types of emotions that the plurality of users are experiencing, based on information associated with the plurality of users and detected by a predetermined apparatus. The live streaming system 10 generates, as an image to be displayed on the HMD 100 of a certain user among the plurality of users, an image including the content and at least one of a plurality of avatars corresponding to the plurality of users. The live streaming system 10 changes the display appearances of the plurality of avatars depending on the types of emotions that the plurality of users are experiencing.
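The emotion-to-appearance mapping described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; the emotion labels, style attributes, and function names are assumptions.

```python
# Illustrative sketch: map each user's detected emotion type to an
# avatar display appearance. Labels and attributes are assumed, not
# taken from the disclosure.

EMOTION_STYLES = {
    "joy":      {"color": "yellow", "animation": "bounce"},
    "surprise": {"color": "white",  "animation": "jump"},
    "sadness":  {"color": "blue",   "animation": "slump"},
}
DEFAULT_STYLE = {"color": "gray", "animation": "idle"}

def style_avatars(user_emotions):
    """Return a display style for each user's avatar based on the
    emotion type detected for that user."""
    return {user: EMOTION_STYLES.get(emotion, DEFAULT_STYLE)
            for user, emotion in user_emotions.items()}

styles = style_avatars({"user_a": "joy", "user_b": "sadness", "user_c": "fear"})
```

A viewer's HMD image would then composite the content with avatars drawn in these per-user styles, so that each avatar's appearance tracks its user's current emotion.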

Augmented reality using a split architecture

A split-architecture for rendering and warping world-locked AR elements, such as graphics in a navigation application, for display on augmented reality (AR) glasses is disclosed. The split-architecture can help to alleviate a resource burden on the AR glasses by performing the more complex processes associated with the rendering and warping on a computing device, while performing the less complex processes associated with the rendering and warping on the AR glasses. The AR glasses and the computing device are coupled via wireless communication, and the disclosed systems and methods address the large and variable latencies associated with the wireless communication that could otherwise make splitting these processes impractical.
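The split between an expensive render on the companion device and a cheap latency-masking warp on the glasses can be sketched as follows. This is an illustrative sketch under strong simplifying assumptions: poses are reduced to a single yaw angle and the warp to a horizontal shift, whereas a real system would use full 6-DoF transforms; all names and the pixel scale are assumptions.

```python
# Illustrative sketch of the split idea: the companion device renders a
# world-locked element against the head pose it last received, and the
# glasses apply a cheap late-stage correction ("warp") using the newest
# local pose, masking wireless latency.

PIXELS_PER_DEGREE = 20.0  # assumed display scale

def render_on_device(element_yaw_deg, pose_yaw_deg):
    """Expensive step (on the companion device): place the element in
    screen space relative to the pose used at render time."""
    return (element_yaw_deg - pose_yaw_deg) * PIXELS_PER_DEGREE

def warp_on_glasses(screen_x, render_pose_yaw, latest_pose_yaw):
    """Cheap step (on the glasses): shift the rendered result by the
    pose change that happened while the frame was in flight."""
    return screen_x - (latest_pose_yaw - render_pose_yaw) * PIXELS_PER_DEGREE

# The head turned 2 degrees while the frame crossed the wireless link;
# after the warp, the element lands where a fresh render with the
# latest pose would have put it.
x_rendered = render_on_device(10.0, pose_yaw_deg=0.0)
x_warped = warp_on_glasses(x_rendered, render_pose_yaw=0.0, latest_pose_yaw=2.0)
```

Because the warp only needs the pose delta and a shift, it stays cheap enough for the glasses even when the wireless latency is large and variable.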

Facially responsive communication apparatus

A communication apparatus includes a display part, a display output part, and a control part. A plurality of face patterns are switched and displayed on the display part. The display output part is capable of outputting a face pattern to the display part. In accordance with a change in the face pattern output by the display output part, the control part controls an operation of the communication apparatus main body that is different from the change of the face pattern.
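The coupling between a face-pattern change and a distinct main-body operation can be sketched as follows. This is a hypothetical sketch; the pattern names, transitions, and operations are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch: when the displayed face pattern changes, the
# control part triggers an operation of the apparatus main body that is
# distinct from the display change itself (e.g. a motion or a sound).

OPERATIONS = {
    ("neutral", "happy"): "wag_tail",
    ("neutral", "sad"):   "lower_head",
    ("happy", "neutral"): "stop_motion",
}

def on_face_change(previous, current):
    """Return the main-body operation associated with a face-pattern
    transition, or None when no operation is associated with it."""
    return OPERATIONS.get((previous, current))
```

Keeping the transition table separate from the display logic lets the same face-pattern switch drive different body behaviors without the two mechanisms being entangled.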

Display apparatus and virtual reality display system for generating a temporary image based on head tracking information

The present disclosure provides a display apparatus including a display panel, a driving controller, and a head tracker formed on a display substrate. The head tracker obtains information on the movement of a user and is configured to output the information of the movement of the user to the driving controller. The driving controller is configured to generate a temporary image based on the information of the movement of the user. The display panel is configured to selectively display an input image and the temporary image.
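The generation of a temporary image from head-movement information can be sketched as follows. This is an illustrative sketch under an assumed one-dimensional pixel-row model: the "temporary" image is simply the previous frame shifted against the head motion, which is one plausible reading of the mechanism, not the disclosed one.

```python
# Illustrative sketch: before a fresh input image arrives, the driving
# controller bridges the gap with a "temporary" image, modeled here as
# a horizontal shift of the last frame opposite to the head motion.

def temporary_image(last_frame, head_shift_px):
    """Shift the previous frame's pixel row against the head movement,
    filling the newly exposed edge with black (0)."""
    width = len(last_frame)
    out = [0] * width
    for x in range(width):
        src = x + head_shift_px
        if 0 <= src < width:
            out[x] = last_frame[src]
    return out

frame = [1, 2, 3, 4, 5]
temp = temporary_image(frame, head_shift_px=2)  # head moved right
```

The panel can then display this temporary image for the frames where no updated input image is yet available, reducing perceived motion-to-photon lag.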

Apparatus and method for dynamic navigation of a selected geographical zone

There is provided a handheld device for navigating about a selected geographical zone. The handheld navigation device includes a processor circuit with a screen on which an image representative of a geographical map of a selected geographical zone is displayed. The geographical map of the selected geographical zone includes representations corresponding respectively with points of interest having locations and categories. The geographical map is configured so that (a) at least part of the representations corresponding respectively with points of interest are clustered within at least one viewing area, and (b) the at least one viewing area is distinguished by a visible pattern having both a selected appearance and a first visual state that do not reveal the locations and the categories of the points of interest. The handheld navigation device includes a geographical position locating subsystem for determining when a selected relationship exists between the user and one or more of the points of interest. Responsive to the selected relationship existing, the processor circuit changes the appearance of at least part of the visible pattern to reflect both that the at least one viewing area has been transformed from the first visual state to a second visual state and the extent to which the user has traversed the physical area.
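The position-triggered state change can be sketched as follows. This is a hypothetical sketch: the "selected relationship" is modeled as a simple distance threshold, and the coordinates, radius, and field names are illustrative assumptions.

```python
# Hypothetical sketch: points of interest inside a viewing area stay
# hidden behind a uniform pattern (first visual state) until the user
# comes within a threshold distance of one, at which point that part of
# the pattern switches to the revealed (second) visual state.
import math

REVEAL_RADIUS = 50.0  # map units; illustrative threshold

def update_pattern(user_pos, points_of_interest):
    """Return each POI with a flag saying whether its patch of the
    visible pattern should switch to the second (revealed) state."""
    ux, uy = user_pos
    result = []
    for poi in points_of_interest:
        dist = math.hypot(poi["x"] - ux, poi["y"] - uy)
        result.append({**poi, "revealed": dist <= REVEAL_RADIUS})
    return result

pois = [{"name": "cafe", "x": 10, "y": 10},
        {"name": "museum", "x": 500, "y": 500}]
state = update_pattern((0, 0), pois)
```

Accumulating the revealed patches over time would also give the map its record of how much of the physical area the user has traversed.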

LINE-BASED RENDERING FOR GRAPHICS RENDERING SYSTEMS, METHODS, AND DEVICES

Systems, methods, and devices implement line-based rendering of graphics. Methods include receiving a command associated with graphical data, the command identifying a plurality of pixel mapping operations to be implemented on a plurality of data objects included in the graphical data. Methods also include determining a plurality of rendering parameters, the plurality of rendering parameters identifying a partitioning of the graphical data into a plurality of portions, and further identifying a pixel mapping operation for each of the plurality of portions. Methods further include generating a plurality of sub-commands based, at least in part, on the plurality of rendering parameters and the command, the plurality of sub-commands identifying a processing operation for each data object included in each of the plurality of portions of the graphical data. Methods also include implementing a processing operation for at least one portion based on at least some of the plurality of sub-commands.
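The partitioning of a command into per-portion sub-commands can be sketched as follows. This is an illustrative sketch only: the command structure, the line-range overlap test, and all names are assumptions, not the disclosed implementation.

```python
# Illustrative sketch: a command naming a pixel-mapping operation per
# data object is split, per the rendering parameters, into sub-commands
# that each cover a line-based portion of the frame.

def generate_subcommands(command, lines_per_portion):
    """Partition the command's data objects into line portions and emit
    one sub-command per portion, carrying the pixel mapping for each
    object whose line range overlaps that portion."""
    height = command["height"]
    subcommands = []
    for start in range(0, height, lines_per_portion):
        end = min(start + lines_per_portion, height)
        ops = [{"object": obj["name"], "op": obj["op"]}
               for obj in command["objects"]
               if obj["top"] < end and obj["bottom"] > start]
        subcommands.append({"lines": (start, end), "ops": ops})
    return subcommands

cmd = {"height": 8,
       "objects": [{"name": "bg", "op": "copy", "top": 0, "bottom": 8},
                   {"name": "icon", "op": "blend", "top": 2, "bottom": 4}]}
subs = generate_subcommands(cmd, lines_per_portion=4)
```

Because each sub-command touches only a few lines, portions can be processed as soon as their input lines are available rather than waiting for the full frame.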
