Patent classifications
G01C21/3647
IMAGE DISPLAY METHOD AND ELECTRONIC DEVICE THEREOF
An image display method includes obtaining an image group, wherein the image group includes a plurality of images corresponding to multiple viewing angles; displaying a preset viewing-angle image, wherein the preset viewing-angle image is one of the images; and continuously displaying the images over a predetermined time period.
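The display order described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the fixed frame interval, and the wrap-around ordering starting from the preset angle are all assumptions.

```python
# Hypothetical sketch: cycle through a group of viewing-angle images,
# starting from the preset viewing-angle image and wrapping around the
# group for the duration of the predetermined time period.

def display_sequence(image_group, preset_index, period_s, frame_interval_s):
    """Return the order in which the images would be shown, starting at
    the preset viewing angle and wrapping around the group."""
    n = len(image_group)
    frames = int(period_s / frame_interval_s)
    return [image_group[(preset_index + i) % n] for i in range(frames)]

# Four viewing angles, starting from the second, shown for 2 s at 0.5 s/frame.
angles = ["front", "right", "back", "left"]
print(display_sequence(angles, preset_index=1, period_s=2.0, frame_interval_s=0.5))
```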
Information processing apparatus and information processing method to perform transition display control
An information processing apparatus that facilitates the use of AR technology. The apparatus includes a control unit that controls a display to superimpose a virtual object, associated with a set position in the real world, on the real world with reference to position and orientation information indicating the position and orientation of a mobile body, and a correction unit that corrects the position and orientation information. When the position and orientation information is corrected from first position and orientation information to second position and orientation information that is non-continuous with the first, the control unit causes the display to perform a transition display indicating the transition from a first display condition, in which the virtual object is displayed based on the first position and orientation information, to a second display condition based on the second position and orientation information.
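One way to realize such a transition display is to blend the virtual object's anchor from the first pose to the second over several frames instead of letting it jump. The sketch below is an assumption for illustration; the patent does not specify linear interpolation, and the pose representation (x, y, heading) is simplified.

```python
# Illustrative sketch: when the corrected pose is non-continuous with the
# previous one, generate intermediate poses so the virtual object appears
# to transition smoothly rather than snapping to the new position.

def transition_poses(pose_a, pose_b, steps):
    """Yield intermediate (x, y, heading) tuples from pose_a to pose_b
    using linear interpolation (an assumed blend, for illustration)."""
    out = []
    for i in range(1, steps + 1):
        t = i / steps
        out.append(tuple(a + (b - a) * t for a, b in zip(pose_a, pose_b)))
    return out

# A correction jumps the anchor 10 units forward; two transition frames
# move the displayed object there gradually.
print(transition_poses((0, 0, 0), (10, 0, 0), steps=2))
```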
Route determination method
A method for determining a route of a robot is provided such that a moving apparatus can move smoothly to a destination point while avoiding interference with a plurality of moving objects such as traffic participants. In an environment in which a plurality of second pedestrians moves along predetermined movement patterns, a plurality of movement routes taken by a first pedestrian moving toward a destination point is recognized. For each of these movement routes, learning data is generated by combining a compound environmental image, constituted of a time series of environmental images indicating the visual environment around a virtual robot as it moves along the route, with a moving-direction command indicating the moving direction of the virtual robot. Model parameters of a CNN (action model) are learned using the learning data, and a moving velocity command for the robot is determined using the learned CNN.
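The learning-data step can be sketched as pairing a sliding window of environmental images (the "compound environmental image") with the direction command recorded at the end of that window. The window length and data layout below are assumptions for illustration, not details from the patent.

```python
# Hedged sketch of learning-data generation: for each route, stack a short
# time series of environmental images and label it with the moving-direction
# command issued at that moment.

def build_learning_data(routes, window=3):
    """routes: list of (env_image_series, direction_commands) per route.
    Returns (input, label) pairs: a window of consecutive images and the
    command at the window's final time step."""
    data = []
    for images, commands in routes:
        for t in range(window - 1, len(images)):
            data.append((tuple(images[t - window + 1 : t + 1]), commands[t]))
    return data

# One route of five frames yields three training pairs with a 3-frame window.
pairs = build_learning_data([([0, 1, 2, 3, 4], ["a", "b", "c", "d", "e"])])
print(len(pairs), pairs[0])
```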
Localizing transportation requests utilizing an image based transportation request interface
The present application discloses an improved transportation matching system, and corresponding methods and computer-readable media. According to the disclosed embodiments, the transportation matching system utilizes an image-based transportation request interface and environmental digital image stream to efficiently generate transportation requests with accurate pickup locations. For instance, the disclosed system can utilize one or more environmental digital images provided from a requestor computing device (e.g., a mobile device or an augmented reality wearable device) to determine information such as the location of the requestor computing device and a transportation pickup location within the environmental digital images. Furthermore, the disclosed system can provide, for display on the requestor computing device, one or more augmented reality elements at the transportation pickup location within an environmental scene that includes the transportation pickup location.
METHOD FOR PREPARING ROAD GUIDANCE INSTRUCTIONS
A method prepares voice guidance instructions for an individual, formulated as naturally as possible. The method includes: determining, by a computer, a path to be followed; acquiring, by an image acquisition unit, at least one image of the individual's environment; processing the image to detect at least one object therein and to characterise the object; and preparing a voice guidance instruction supplying the user with information for carrying out a manoeuvre in order to follow the path. The method also includes determining a level of complexity of the manoeuvre; in the preparation step, the voice guidance instruction is formulated using an indication deduced from the characterisation of the object only if the level of complexity of the manoeuvre is greater than a first threshold.
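The decision in the preparation step reduces to a simple conditional: include the landmark-derived wording only above the complexity threshold. The sketch below is a minimal illustration; the complexity scale, threshold value, and phrasing are assumptions, not the patent's specifics.

```python
# Minimal sketch of the preparation step: object-based wording is used only
# when the manoeuvre's complexity exceeds a first threshold.

FIRST_THRESHOLD = 2  # assumed value, for illustration

def prepare_instruction(manoeuvre, complexity, landmark=None):
    """Return the voice instruction text, enriched with the characterised
    object only for sufficiently complex manoeuvres."""
    if landmark and complexity > FIRST_THRESHOLD:
        return f"{manoeuvre} at the {landmark}"
    return manoeuvre

print(prepare_instruction("Turn right", complexity=3, landmark="red storefront"))
print(prepare_instruction("Turn right", complexity=1, landmark="red storefront"))
```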
GUIDANCE DEVICE, GUIDANCE METHOD, AND PROGRAM
A behavioral ability obtainment unit 351 obtains the behavioral abilities, including, for example, at least one of physical and intellectual behavioral abilities, of the members of a group to be guided. A guidance control unit 352 performs guidance control for the group on the basis of the behavioral abilities obtained by the behavioral ability obtainment unit 351. For example, the guidance control unit determines a member to be prioritized from among the group members, generates a route plan to a destination according to the behavioral abilities of that member, and performs guidance control based on the route plan. This makes it possible to perform guidance operations suited to the group being guided.
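One plausible reading of the prioritization is to plan around the least capable member. The sketch below assumes a single numeric ability score per member and a minimum-ability requirement per route; both are illustrative simplifications, not the patent's model.

```python
# Illustrative sketch: determine the member to be prioritized (assumed here
# to be the one with the lowest ability score) and pick a route whose
# requirement that member can meet.

def pick_prioritized_member(members):
    """members: dict of name -> ability score (higher = more capable)."""
    return min(members, key=members.get)

def plan_route(members, routes):
    """routes: dict of route name -> minimum ability required.
    Returns the least demanding feasible route, or None."""
    limit = members[pick_prioritized_member(members)]
    feasible = {r: need for r, need in routes.items() if need <= limit}
    return min(feasible, key=feasible.get) if feasible else None

members = {"adult": 5, "child": 2}
routes = {"stairs": 4, "ramp": 1}
print(pick_prioritized_member(members), plan_route(members, routes))
```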
Augmented reality displays for locating vehicles
Augmented reality displays for locating vehicles are disclosed herein. An example method includes determining a current location of a mobile device associated with a user, determining a current location of a vehicle, and generating augmented reality view data that includes a first arrow identifying a path of travel for the user towards the vehicle. The path of travel is based on the current location of the mobile device and the current location of the vehicle. The first arrow is combined with a view obtained by a camera of the mobile device or a camera of the vehicle.
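The arrow's direction can be derived from the two locations as a compass bearing. The sketch below uses a flat-earth approximation, which is reasonable at short pickup distances but is a simplifying assumption, as is the function name.

```python
# Hedged sketch: compute the bearing from the mobile device to the vehicle,
# which an AR layer could use as the heading of the first arrow over the
# camera view. Flat-earth math; fine for short distances only.

import math

def bearing_deg(device, vehicle):
    """device, vehicle: (lat, lon) in degrees. Returns bearing 0-360,
    measured clockwise from north."""
    dlat = vehicle[0] - device[0]
    # Scale longitude difference by cos(latitude) so east-west distances
    # are comparable to north-south ones.
    dlon = (vehicle[1] - device[1]) * math.cos(math.radians(device[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360

# A vehicle due east of the device yields a bearing of 90 degrees.
print(bearing_deg((0.0, 0.0), (0.0, 1.0)))
```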
User Terminal and Control Method Thereof
An embodiment user terminal includes an image acquisition part; a user interface configured to display an image photographed through the image acquisition part; a position detection sensor configured to detect a position of the user terminal; and a controller configured to determine a recommended point of interest (POI) among a plurality of POIs located around the user terminal, based on operation information about each of the plurality of POIs, the user's search history for POIs, and the user's scrapped (saved) POIs, and to control the user interface to display an augmented reality (AR) image corresponding to the recommended POI by superimposing the AR image on the image photographed through the image acquisition part.
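The controller's recommendation can be sketched as scoring each nearby POI from the three signals the abstract names. The additive weighting below is an assumption for illustration; the patent does not disclose a formula.

```python
# Illustrative sketch: score nearby POIs from operation information (open
# now), the user's search history, and the user's scrapped (saved) POIs,
# then recommend the highest-scoring one. Weights are assumed.

def recommend_poi(pois, search_history, scrapped):
    """pois: list of {"name": str, "open_now": bool}."""
    def score(poi):
        s = 0
        if poi["open_now"]:
            s += 1                                  # operation information
        s += 2 * search_history.count(poi["name"])  # search history
        if poi["name"] in scrapped:
            s += 3                                  # scrapped POIs
        return s
    return max(pois, key=score)["name"]

pois = [{"name": "cafe", "open_now": True}, {"name": "museum", "open_now": False}]
print(recommend_poi(pois, search_history=["museum", "museum"], scrapped=set()))
```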
IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING METHOD
An image processing device includes: an information acquisition unit configured to acquire outside information including color information of outside of a vehicle; an image generation unit configured to generate, based on the outside information, a first image in which a difference in the color information with respect to the outside of the vehicle is within a predetermined range; and an output unit configured to output the first image to a display provided in a cabin of the vehicle.
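The image-generation constraint, that the first image's colour differ from the outside colour by no more than a predetermined range, can be sketched as a per-channel clamp. The RGB representation, tolerance value, and clamping approach are illustrative assumptions.

```python
# Minimal sketch: adjust a candidate display colour so that each channel
# stays within a predetermined range of the colour sampled outside the
# vehicle. Tolerance and colour model are assumed for illustration.

TOLERANCE = 10  # assumed per-channel limit of the "predetermined range"

def match_outside_color(outside_rgb, candidate_rgb):
    """Clamp each channel of the candidate colour to within TOLERANCE of
    the corresponding outside-colour channel."""
    return tuple(
        min(max(c, o - TOLERANCE), o + TOLERANCE)
        for o, c in zip(outside_rgb, candidate_rgb)
    )

# A candidate 30 units too red is pulled back to the edge of the range.
print(match_outside_color((100, 100, 100), (130, 95, 100)))
```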
APPARATUS AND METHOD FOR PROVIDING EXTENDED FUNCTION TO VEHICLE
Provided is a method for providing an extended function to a vehicle according to an embodiment, the method comprising the steps of: obtaining, through a first photographing unit, first image information required for providing the extended function; obtaining predetermined running information related to the running of the vehicle; performing, by a first ECU, image processing for providing the extended function on the basis of the running information and the first image information; and displaying a result of the image processing. Also provided are an extended-function providing apparatus capable of performing the method and a non-volatile computer-readable recording medium containing a computer program for performing the method.