Patent classifications
G06V20/36
METHOD FOR INDOOR LOCALIZATION AND ELECTRONIC DEVICE
The disclosure provides a method for indoor localization, a related electronic device and a related storage medium. Based on a first indoor image, a first image position of a target feature point of a target object is obtained, along with an identifier of the target feature point. A 3D spatial position of the target feature point is obtained through retrieval based on the identifier. The 3D spatial position is pre-determined based on a second image position of the target feature point on a second indoor image, a posture of a camera for capturing the second indoor image, and a posture of the target object on the second indoor image. An indoor position of a user is determined based on the first image position and the 3D spatial position of the target feature point.
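The final step of this abstract (determining a user's indoor position from 2D image positions and retrieved 3D positions of identified feature points) is, at its core, a camera pose estimation from 2D-3D correspondences. A minimal sketch, assuming a calibrated camera with known rotation and a hypothetical feature database (`FEATURE_DB` and the feature identifiers are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical database: feature identifier -> pre-determined 3D spatial position
FEATURE_DB = {
    "door_corner_1": np.array([0.0, 0.0, 5.0]),
    "sign_edge_3":   np.array([2.0, 1.0, 6.0]),
    "shelf_tag_7":   np.array([-1.5, 0.5, 4.0]),
}

def locate_camera(observations, rotation=np.eye(3)):
    """Estimate the camera centre (the user's indoor position) from 2D
    image positions of identified feature points, assuming a calibrated
    camera (K = I) with known rotation.

    Each correspondence u = x_cam_x / x_cam_z, with x_cam = R*X + t,
    yields two equations linear in the translation t; we stack them and
    solve by least squares.
    """
    A, b = [], []
    for ident, (u, v) in observations.items():
        X = rotation @ FEATURE_DB[ident]   # retrieve 3D position by identifier
        # u*(X'_z + t_z) = X'_x + t_x  ->  -t_x + u*t_z = X'_x - u*X'_z
        A.append([-1.0, 0.0, u]); b.append(X[0] - u * X[2])
        A.append([0.0, -1.0, v]); b.append(X[1] - v * X[2])
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return -rotation.T @ t                 # camera centre in world coordinates
```

With unknown rotation, practical systems would instead solve the full Perspective-n-Point problem (e.g., `cv2.solvePnP`); the linear formulation above is only a toy stand-in.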
CAPTURING METHOD AND DEVICE
A capturing method for an electronic device includes: obtaining an image captured during a capturing process, obtaining a target pop-up comment matching image information of the image, and displaying the image and the target pop-up comment.
PROXIMITY-BASED NAVIGATIONAL MODE TRANSITIONING
A non-transitory computer-readable medium includes instructions that when executed by a processor cause the processor to perform a method for providing visual navigation assistance in retail stores, which may include receiving a first indoor location of a user within a retail store; receiving a target destination; and providing first navigation data to the user through a first visual interface. The method may also include, after providing the first navigation data, receiving a second indoor location of the user within the retail store; determining that the second indoor location is within a selected area around the target destination, with the selected area not including the first indoor location; and, in response to the determination that the second indoor location is within the selected area around the target destination, providing second navigation data to the user through a second visual interface, where the second visual interface differs from the first visual interface.
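The transition logic this abstract describes, switching visual interfaces once the user's reported location enters a selected area around the destination, can be sketched as follows (the interface names and a circular selected area of radius `radius_m` are assumptions for illustration):

```python
import math

def choose_interface(user_xy, destination_xy, radius_m=10.0):
    """Return which visual interface to present: switch from the first
    interface to the second once the user's indoor location falls within
    the selected area (here, a circle) around the target destination."""
    dist = math.dist(user_xy, destination_xy)  # Euclidean distance in metres
    return "second_interface" if dist <= radius_m else "first_interface"
```

In the claimed method the first location lies outside this area and the second inside it, which is exactly the condition that triggers the interface change.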
METHOD AND APPARATUS FOR IDENTIFYING THE FLOOR OF A BUILDING
A method, apparatus and computer program product are provided to identify the floor of a building. In the context of a method, an image captured by a mobile device is received. The image includes a representation of a floor plan of the floor of the building. The method also includes comparing, with processing circuitry, the representation of the floor plan from the image to one or more predefined indoor maps of respective floors. Based upon the comparing, the method further includes identifying, with the processing circuitry, the floor depicted by the floor plan from the image captured by the mobile device.
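The comparison step, matching a captured floor-plan image against predefined indoor maps, could be sketched with a simple similarity score. This toy version uses normalized cross-correlation on pre-aligned, binarized maps; a real system would also need registration (scale, rotation, offset) before comparing:

```python
import numpy as np

def identify_floor(captured_plan, floor_maps):
    """Return the floor whose predefined indoor map best matches the
    binarized floor plan extracted from the mobile device's image."""
    def ncc(a, b):
        # Normalized cross-correlation: 1.0 for identical maps.
        a = a - a.mean(); b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom else 0.0
    return max(floor_maps, key=lambda floor: ncc(captured_plan, floor_maps[floor]))
```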
Location tracking
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for location tracking. In some implementations, a corresponding method includes obtaining images of one or more persons within a property; identifying a heartbeat for each person of the one or more persons within the images; identifying a location for each person within the images; receiving a request that indicates a heartbeat signature generated from heartbeats sensed by a device worn by a user; determining that an identified heartbeat of a person of the one or more persons matches the heartbeat signature; in response to determining that the identified heartbeat of the person of the one or more persons matches the heartbeat signature, selecting the person as the user; and in response to selecting the person as the user, providing a response to the request based on a location of the person.
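The matching step, pairing a wearable's heartbeat signature with a heartbeat identified from camera imagery, could look like the following toy nearest-match over beats-per-minute values (real signatures would compare richer waveform features; the tolerance value is an assumption):

```python
def match_person(identified_heartbeats, signature_bpm, tolerance=2.0):
    """Select the person whose camera-identified heartbeat (BPM) best
    matches the heartbeat signature from the user's worn device, or
    None if no heartbeat is within tolerance."""
    best = min(identified_heartbeats,
               key=lambda p: abs(identified_heartbeats[p] - signature_bpm))
    if abs(identified_heartbeats[best] - signature_bpm) <= tolerance:
        return best  # this person is selected as the user
    return None
```

Once a person is selected as the user, the response to the request would be built from that person's tracked location.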
ON DEMAND VISUAL RECALL OF OBJECTS/PLACES
Aspects of the subject disclosure may include, for example, observing a plurality of objects viewed through a smart lens, wherein the plurality of objects are in a frame of an image viewed by the smart lens, determining an identification for an object of the plurality of objects, assigning tag information for the object based on the identification, storing the tag information for the object and the frame in which the object was observed, receiving a recall request for the object, retrieving the tag information for the object and the frame responsive to the receiving the recall request, and displaying the tag information and the frame. Other embodiments are disclosed.
SCENE LAYOUT ESTIMATION
Systems and techniques are provided for determining environmental layouts. For example, based on one or more images of an environment and depth information associated with the one or more images, a set of candidate layouts and a set of candidate objects corresponding to the environment can be detected. The set of candidate layouts and set of candidate objects can be organized as a structured tree. For instance, a structured tree can be generated including nodes corresponding to the set of candidate layouts and the set of candidate objects. A combination of objects and layouts can be selected in the structured tree (e.g., based on a search of the structured tree, such as using a Monte-Carlo Tree Search (MCTS) algorithm or adapted MCTS algorithm). A three-dimensional (3D) layout of the environment can be determined based on the combination of objects and layouts in the structured tree.
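The selection step, choosing one combination of candidate layouts and candidate objects, can be sketched as a search over the combination space. This toy version searches exhaustively with a caller-supplied score; the patent's structured tree with MCTS addresses the same selection when the space is too large to enumerate:

```python
from itertools import product

def select_layout(candidate_layouts, candidate_objects, score):
    """Pick the jointly best (layout, object, ...) combination.

    candidate_objects is a list of candidate sets, one per detected
    object slot; `score` rates a layout together with chosen objects.
    """
    best, best_score = None, float("-inf")
    for combo in product(candidate_layouts, *candidate_objects):
        s = score(combo[0], combo[1:])
        if s > best_score:
            best, best_score = combo, s
    return best
```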
Centralized monitoring of confined spaces
A method includes receiving a data stream from point monitoring cases proximate to confined spaces, the data stream including at least one camera feed of the confined spaces, and generating a display from the data stream for a single display device, wherein the display comprises a tile view including a tile for each confined space being monitored. The tile includes at least one camera feed from a camera located at the confined space; a number of workers in proximity to the confined space, displayed in the tile above the at least one camera feed; control buttons located in the tile to the right of the at least one camera feed; and gas sensor indicators located in the tile.
Gradual adjustments to planograms
A system for making gradual adjustments to planograms may include at least one processor. The processor may be programmed to receive a first image of a shelf; analyze the first image to determine a first placement of products on the shelf; determine a planned first adjustment to the first placement of products; and provide first information configured to cause the planned first adjustment. The processor may then receive a second image of the shelf captured after the first information was provided; analyze the second image to determine a second placement of products on the shelf; determine a planned second adjustment to the second placement of products; and provide second information configured to cause the planned second adjustment to the determined second placement of products.
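The "gradual" aspect, correcting only part of the gap between the observed placement and the planogram on each iteration, can be sketched as a capped diff (the slot/product representation and `max_moves` cap are illustrative assumptions):

```python
def plan_adjustment(observed, planogram, max_moves=2):
    """Compute one gradual adjustment: up to `max_moves` product moves
    that bring the observed shelf placement closer to the planogram.
    Each move is (slot, currently_placed, wanted)."""
    moves = []
    for slot, wanted in planogram.items():
        if observed.get(slot) != wanted:
            moves.append((slot, observed.get(slot), wanted))
        if len(moves) == max_moves:
            break
    return moves
```

Re-imaging the shelf after each adjustment, as the abstract describes, then yields a fresh `observed` placement for the next round.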
Automated Tools For Generating Mapping Information For Buildings
Techniques are described for using computing devices to perform automated operations involved in analyzing images acquired in a defined area, as part of generating mapping information of the defined area for subsequent use (e.g., for controlling navigation of devices, for display on client devices in corresponding GUIs, etc.). The defined area may include an interior of a multi-room building, and the generated information may include a floor map of the building, such as from an analysis of multiple 360° spherical panorama images acquired at various viewing locations within the building (e.g., using an image acquisition device with a spherical camera having one or more fisheye lenses to capture a panorama image that extends 360 degrees around a vertical axis). The generating may further be performed without detailed information about distances from the images' viewing locations to objects in the surrounding building.