Patent classifications
G01C21/3638
Three-dimensional representations of routes
Systems, methods, and non-transitory computer readable media configured to provide three-dimensional representations of routes. Location information for a planned movement may be obtained. The location information may include three-dimensional information of a location. Route information for the planned movement may be obtained. The route information may define a route of one or more entities within the location. A three-dimensional view of the route within the location may be determined based on the location information and the route information. An interface through which the three-dimensional view of the route within the location is accessible may be provided.
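The flow this abstract describes can be sketched minimally: obtain location information carrying three-dimensional data, obtain route information, and combine them into a 3-D view. All class and function names below are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: lift 2-D route waypoints onto the location's
# known 3-D terrain samples to form a three-dimensional route view.
from dataclasses import dataclass

@dataclass
class LocationInfo:
    name: str
    points_3d: list  # (x, y, z) samples describing the location

@dataclass
class RouteInfo:
    waypoints: list  # ordered (x, y) positions of an entity's route

def three_dimensional_view(location: LocationInfo, route: RouteInfo) -> dict:
    """Attach a height coordinate to each route waypoint (illustrative)."""
    heights = {(x, y): z for x, y, z in location.points_3d}
    route_3d = [(x, y, heights.get((x, y), 0.0)) for x, y in route.waypoints]
    return {"location": location.name, "route_3d": route_3d}

view = three_dimensional_view(
    LocationInfo("warehouse", [(0, 0, 0.0), (1, 0, 0.5), (2, 0, 1.0)]),
    RouteInfo([(0, 0), (1, 0), (2, 0)]),
)
```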
Method and system for managing contextual views within a user interface
A method and system for managing contextual views within an application interface are disclosed. The method includes displaying a first view within an application interface that provides information about a user-selected vehicle. The method also includes displaying a second view within the application interface that provides information about an environment. The second view may present an interactive three-dimensional model that provides an immersive experience. The application may switch between the first view and the second view according to user input.
Grouping maneuvers for display in a navigation presentation
Some embodiments of the invention provide several novel methods for generating a navigation presentation that displays a device navigating a route on a map. The method of some embodiments uses a virtual camera that, based on detected changes in the navigation context, dynamically modifies the way it captures portions of the map to produce different navigation scenes in the navigation presentation. To generate the navigation scenes, the method of some embodiments (1) identifies different sets of attributes that describe the different navigation contexts at different times during the navigation presentation, and (2) uses these different sets of attributes to identify different styles for operating the virtual camera. In some embodiments, the method uses an identified style to specify the virtual camera's positional attributes, which, in turn, define the portions of the map that the virtual camera identifies for rendering to produce several navigation scenes for a period of time (e.g., until the navigation context changes, or until the navigation presentation ends when the navigation context does not change again). During the navigation presentation, each time the navigation context changes, the identified set of attributes may change. This change, in turn, may cause the method of some embodiments to select a new style for operating the virtual camera. When the style for operating the virtual camera changes, the method of some embodiments modifies the way the virtual camera captures the portion of the map to render.
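The core mechanism here, mapping a set of navigation-context attributes to a virtual-camera operating style, can be sketched as a simple selection function. The attribute names, styles, and thresholds below are assumptions for illustration only; the patent does not enumerate them.

```python
# Illustrative sketch: choose positional camera attributes from a set of
# attributes describing the current navigation context. When the context
# (and thus the attribute set) changes, a different style is selected.
def camera_style(context: dict) -> dict:
    """Map navigation-context attributes to a virtual-camera style."""
    if context.get("maneuver_ahead") and context.get("distance_m", 1e9) < 200:
        # Pitch down and zoom in as a maneuver approaches (assumed behavior).
        return {"pitch_deg": 60, "zoom": 18, "mode": "maneuver"}
    if context.get("on_highway"):
        # Flatter, wider view for long highway stretches (assumed behavior).
        return {"pitch_deg": 30, "zoom": 14, "mode": "overview"}
    return {"pitch_deg": 45, "zoom": 16, "mode": "default"}

style = camera_style({"maneuver_ahead": True, "distance_m": 120})
```

A renderer would then use the returned positional attributes to decide which portion of the map the virtual camera captures for the next run of navigation scenes.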
Navigation Application with Novel Declutter Mode
Some embodiments provide a navigation application with a novel declutter navigation mode. In some embodiments, the navigation application has a declutter control that, when selected, directs the navigation application to simplify a navigation presentation by removing or de-emphasizing non-essential items that are displayed in the navigation presentation. In some embodiments, the declutter control is a mode-selecting control that allows the navigation presentation to toggle between a normal first navigation presentation and a simplified second navigation presentation, which below is also referred to as a decluttered navigation presentation. During normal mode operation, the navigation presentation of some embodiments provides (1) a representation of the navigated route, (2) representations of the roads along the navigated route, (3) representations of major and minor roads that intersect or are near the navigated route, and (4) representations of buildings and other objects in the navigated scene. However, in the declutter mode, the navigation presentation of some embodiments provides a representation of the navigated route, while providing a de-emphasized presentation of the roads that intersect the navigated route or are near the navigated route. In some embodiments, the presentation shows the major roads that are not on the route with more emphasis than minor roads not on the route. Also, in some embodiments, the presentation fades out the minor roads not on the route more quickly than fading out the major roads not on the route.
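The fade behavior described above, where off-route minor roads fade faster than off-route major roads while the route itself stays fully emphasized, can be sketched as an opacity function. The specific fade durations are invented for illustration; the abstract specifies only the relative ordering.

```python
# Minimal sketch of declutter-mode road emphasis, assuming linear fades.
def road_opacity(on_route: bool, road_class: str, t: float) -> float:
    """Opacity of a road t seconds after declutter mode is enabled.

    Roads on the navigated route stay fully visible; off-route minor
    roads fade out more quickly than off-route major roads.
    """
    if on_route:
        return 1.0
    # Assumed fade durations (seconds); only their ordering is from the text.
    fade_seconds = {"major": 2.0, "minor": 0.5}[road_class]
    return max(0.0, 1.0 - t / fade_seconds)
```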
METHOD FOR PROCESSING MAP DATA, AND ELECTRONIC DEVICE
A method for processing map data includes: determining a first region for which a map display image is to be drawn; determining multiple first reference points within the first region according to a second region corresponding to the first region; determining relative height data of the first region according to positions and real height data of all the first reference points; and determining a map display image of a position to be displayed according to the relative height data, in which the position to be displayed is within the first region.
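One plausible reading of "relative height data ... according to positions and real height data of all the first reference points" is height normalization against the reference points. The sketch below assumes relative height means real height minus the mean reference height; the patent does not fix this formula.

```python
# Hypothetical sketch: derive relative heights for a region from the real
# heights of its reference points (assuming a mean-height baseline).
def relative_heights(region_heights: dict, reference_points: list) -> dict:
    """Relative height = real height minus mean height of reference points."""
    base = sum(region_heights[p] for p in reference_points) / len(reference_points)
    return {pos: h - base for pos, h in region_heights.items()}

rel = relative_heights(
    {(0, 0): 10.0, (1, 0): 20.0, (0, 1): 30.0},  # real height data
    [(0, 0), (1, 0)],                             # first reference points
)
```

A map display image for a position inside the region would then be drawn from these relative values rather than absolute elevations.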
Application and system providing indoor searching of a venue
In some implementations, a computing device can provide a map application providing a representation of a physical structure of venues (e.g., shopping centers, airports) identified by the application. The application can provide an inside view of the venue, which is accessible by other applications and programs on the user's device. Search results that the map application identifies as having an inside view of the venue are then presented on a graphical user interface alongside typical search results from the other applications.
Generating positions of map items for placement on a virtual map
This specification describes a system for generating positions of map items such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that when executed by the at least one processor cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of the region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion. The input to the generator neural network comprises: map data comprising one or more channels of position information for at least a region of the virtual map, said one or more channels including at least one channel comprising road position information for the region; and a latent vector encoding a selection of a placement configuration.
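The last two operations, turning per-subregion probabilities into placement positions, can be sketched without a trained network by standing in arbitrary logits for the generator's output. The softmax-plus-threshold selection below is an assumption; the specification only states that positions are generated "using the probability for each subregion."

```python
# Illustrative sketch: per-subregion placement probabilities and a simple
# thresholding rule for choosing where map items (e.g., buildings) go.
import math

def placement_probabilities(logits: list) -> list:
    """Softmax over subregion scores (stand-in for generator-network output)."""
    m = max(logits)
    exp = [math.exp(v - m) for v in logits]  # shift for numerical stability
    total = sum(exp)
    return [e / total for e in exp]

def positions_from_probabilities(probs: list, threshold: float = 0.2) -> list:
    """Indices of subregions whose placement probability meets the threshold."""
    return [i for i, p in enumerate(probs) if p >= threshold]

probs = placement_probabilities([2.0, 0.0, 1.0, -1.0])
positions = positions_from_probabilities(probs)
```

In the actual system, the logits would come from the generator network conditioned on the road-position channel and the latent vector encoding a placement configuration.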
DYNAMICALLY GENERATING SCENERY FOR A VIRTUAL REALITY DRIVING SESSION BASED ON ROUTE INFORMATION
In some implementations, a device may identify objects associated with the route based on one or more images associated with a selected route. The device may generate a model associated with the route including the scenery, wherein the scenery includes models of the objects based on geographic locations of the objects and three-dimensional spatial information of the objects. The device may determine a visual reference point for the virtual reality driving session based on at least one of vehicle information associated with the vehicle or user information. The device may provide, to a virtual reality device, presentation information that causes the virtual reality driving session to be displayed by the virtual reality device from a perspective of the visual reference point within a vehicle model associated with a selected vehicle placed in the model of the route with the scenery.
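The scene-assembly step can be sketched as placing object models at their geographic locations and deriving a visual reference point from vehicle information. Every field name and the default seat height below are illustrative assumptions.

```python
# Hypothetical sketch: build a VR driving-session scene from objects
# identified along the route, plus a driver's-eye reference point.
def build_scene(route_objects: list, vehicle: dict) -> dict:
    """Place object models by geographic location; pick a reference point."""
    scenery = [
        {"model": obj["type"], "position": obj["geo"], "size": obj["dims"]}
        for obj in route_objects
    ]
    # Reference point inside the vehicle model, from vehicle info
    # (assumed seat-height attribute with an illustrative default).
    reference_point = (0.0, vehicle.get("seat_height_m", 1.2), 0.0)
    return {"scenery": scenery, "reference_point": reference_point}

scene = build_scene(
    [{"type": "tree", "geo": (40.0, -75.0), "dims": (2, 8, 2)}],
    {"seat_height_m": 1.4},
)
```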
Vehicle and Method of Controlling the Same
In an embodiment, a vehicle includes a communication module, a display module, an image sensor configured to acquire a front image of the vehicle, and a controller. The controller is configured to: determine that the vehicle has entered a predetermined range of a destination based on a global positioning system (GPS) signal received through the communication module; compare a feature point of the front image of the vehicle with point cloud map information to determine a first predicted position of the vehicle; based on a difference between the first predicted position and a second predicted position of the vehicle indicated by the GPS signal, determine one of the first predicted position and the second predicted position as the position of the vehicle; and control the display module to display an augmented reality (AR) image for performing route guidance to the destination based on the determined position of the vehicle.
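The position-selection step can be sketched as a distance test between the two predicted positions. The abstract does not say which position wins when they disagree; the rule below (trust the point-cloud match beyond a threshold, since GPS degrades near structures) and the threshold value are assumptions.

```python
# Illustrative sketch of choosing between two predicted vehicle positions.
import math

def choose_position(p_cloud: tuple, p_gps: tuple, threshold_m: float = 5.0) -> tuple:
    """p_cloud: first predicted position from point-cloud feature matching.
    p_gps: second predicted position from the GPS signal.

    If the two disagree by more than threshold_m, assume GPS drift and
    keep the point-cloud match; otherwise keep the GPS position.
    """
    diff = math.hypot(p_cloud[0] - p_gps[0], p_cloud[1] - p_gps[1])
    return p_cloud if diff > threshold_m else p_gps
```

The chosen position would then anchor the AR route-guidance overlay on the display module.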
USER INTERFACES FOR MAPS AND NAVIGATION
In some embodiments, an electronic device presents navigation routes from various perspectives. In some embodiments, an electronic device modifies display of representations of (e.g., physical) objects in the vicinity of a navigation route while presenting navigation directions. In some embodiments, an electronic device modifies display of portions of a navigation route that are occluded by representations of (e.g., physical) objects in a map. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps in response to requests to search for (e.g., physical) objects.