Patent classifications
H04M2250/52
Method and apparatus for a stereoscopic smart phone
This patent provides a novel stereoscopic imaging system. In the preferred embodiment, the improved stereoscopic imaging system would be incorporated into a smart phone, called the stereoscopic smart phone (SSP). The SSP would work in conjunction with one or more stereoscopic head display units (SHDUs). Once this system is operational, it will allow significant improvements in obtaining stereoscopic imagery. One novel aspect of this patent is that the stereoscopic cameras on the SSP can move in position to alter the stereo separation distance and change the convergence to optimally image a scene.
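The geometry behind adjustable separation and convergence can be sketched briefly. The function names and the "1/30 rule" baseline heuristic below are illustrative assumptions, not taken from the patent, which does not specify how separation or convergence would be computed:

```python
import math

def convergence_angle_deg(baseline_m: float, subject_distance_m: float) -> float:
    """Toe-in angle (degrees) for each camera so that both optical axes
    converge on a subject at the given distance."""
    # Each camera rotates inward by atan((baseline / 2) / distance).
    return math.degrees(math.atan((baseline_m / 2.0) / subject_distance_m))

def baseline_for_depth(subject_distance_m: float, ratio: float = 1 / 30) -> float:
    """Rule-of-thumb stereo baseline (the stereographer's '1/30 rule'):
    camera separation of roughly 1/30 of the nearest-subject distance."""
    return subject_distance_m * ratio
```

For example, a 65 mm baseline (roughly human interpupillary distance) and a subject 2 m away give a toe-in of under one degree per camera.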
Photographing Method and Terminal
A method includes: detecting a first operation from a user to start a camera of an electronic device; when a display of the electronic device is unfolded to a flat state, starting a camera application and displaying a preview interface in a first display area of the display, where the preview interface includes a viewfinder frame and the viewfinder frame includes a first picture; detecting a second operation from the user to indicate photographing; controlling the camera to perform a photographing action to generate a first multimedia file; simultaneously displaying the preview interface and a gallery application interface in the first display area of the display, where the gallery application interface displays the first multimedia file and includes a deletion control; detecting a deletion operation from the user; deleting the first multimedia file; and displaying the preview interface in the first display area of the display.
Secure QR code system for distributed large payload transmissions for everyday use
A system for transferring a data file includes a first data device (124) that is configured to: partition the data file (200) into a plurality of sub-units (202); generate, for each sub-unit, a plurality of sequence bits (211) that indicate the place in the data file (200) to which the sub-unit belongs; for each sub-unit, integrate the sequence bits into the sub-unit; convert each sub-unit into a different sub-unit QR code (221), thereby generating a plurality of sub-unit QR codes (221, 222 . . . 229); and transmit each of the sub-unit QR codes (221, 222 . . . 229). A second data device (130) is configured to: receive each of the sub-unit QR codes (221, 222 . . . 229); convert each of the sub-unit QR codes (221, 222 . . . 229) into corresponding reconstructed sub-units; and assemble the reconstructed sub-units into a reconstructed data file (110′) in an order indicated by the sequence bits.
MOBILE TERMINAL
The present disclosure provides a mobile terminal comprising: a frame which is expanded in a first direction or is contracted in a second direction which is a reverse direction of the first direction; a display unit which is provided on one surface of the frame and includes a screen expanded or contracted in correspondence to the expansion or contraction of the frame; a camera for obtaining an image corresponding to a specific angle of view; and a control unit for controlling, when the screen is expanded in the first direction, the display unit to output an image which continues in the first direction on the expanded screen.
Mobile terminal
A mobile terminal including a display; a Time of Flight (TOF) camera configured to obtain a depth image of an object; and a controller configured to display a guide interface on the display to guide the object to move into an interaction region of an imaging region of the TOF camera.
Voice interaction processing method and apparatus
This application provides a voice interaction processing method and apparatus, to achieve a friendly and natural voice interaction effect and reduce power consumption. In the method, a microprocessor enables an image collector only when determining, based on voice data collected by a voice collector, that a first user is a target user; then the image collector collects user image data and transmits the user image data to the microprocessor; and the microprocessor sends a wakeup instruction to an application processor only when determining, based on the user image data, that the target user is in a voice interaction state. Based on the foregoing method, unnecessary enabling of the image collector and the application processor is avoided to some extent, and power consumption is reduced.
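The staged gating described above can be sketched as a small pipeline. The function and parameter names are hypothetical stand-ins for the abstract's voice collector, image collector, and application processor; the real checks would be classifiers running on the microprocessor:

```python
def staged_wakeup(voice_data,
                  is_target_user,   # cheap, voice-based speaker check
                  capture_image,    # powers on the image collector
                  is_interacting,   # image-based interaction-state check
                  wake_ap):         # sends the wakeup to the app processor
    """Enable each downstream stage only when the cheaper upstream check
    passes, so the image collector and the application processor stay
    powered down for non-target speech."""
    if not is_target_user(voice_data):
        return False              # image collector is never enabled
    image = capture_image()       # image collector enabled only now
    if not is_interacting(image):
        return False              # application processor stays asleep
    wake_ap()
    return True
```

The power saving comes from ordering: the most expensive component (the application processor) is only woken after both cheaper checks succeed.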
ELECTRONIC DEVICE AND IMAGE TRANSMISSION METHOD BY ELECTRONIC DEVICE
An electronic device includes a first communication module; a second communication module; a sensor module; a camera module configured to capture a photographed image; and a processor configured to: determine photographing direction information of the electronic device based on sensor information received from the sensor module; determine direction information of an external electronic device based on a first signal received through the first communication module; when a camera interface is activated, determine a shared external electronic device based on the photographing direction information, the direction information of the external electronic device, and angle-of-view information of the camera; and transmit the photographed image to the determined shared external electronic device when a graphic object provided by the camera interface is selected.
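The core of selecting a shared device is checking whether the external device's bearing falls inside the camera's angle of view. A minimal sketch of that test, assuming bearings in degrees and handling wrap-around at 360° (the patent does not give the exact formula):

```python
def within_field_of_view(camera_bearing_deg: float,
                         device_bearing_deg: float,
                         fov_deg: float) -> bool:
    """True if the external device's bearing lies inside the camera's
    horizontal angle of view, with wrap-around at 360 degrees."""
    # Normalise the signed angular difference into [-180, 180).
    diff = (device_bearing_deg - camera_bearing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```

For example, a camera pointing at 350° with a 60° field of view still covers a device at bearing 10°, since the wrap-around difference is only 20°.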
IMAGE CAPTURING APPARATUS CAPABLE OF SUPPRESSING DETECTION OF SUBJECT NOT INTENDED BY USER, CONTROL METHOD FOR IMAGE CAPTURING APPARATUS, AND STORAGE MEDIUM
An image capturing apparatus capable of suppressing the reading of a two-dimensional code not intended by a user, when the two-dimensional code is present within the photographing view angle, without requiring time and labor from the user, is provided. The image capturing apparatus includes an obtaining unit configured to obtain an image, a first detecting unit configured to detect a specific subject from the image, a second detecting unit configured to detect an identifier from the image, a reading unit configured to read predetermined information from the identifier, and a processing unit configured to execute a processing based on the predetermined information. The processing unit is configured to selectively execute the processing based on a distance from the center of the photographing view angle of the obtained image to a region in which the specific subject is detected and a distance from the center of the photographing view angle of the obtained image to a region in which the identifier is detected.
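One plausible instantiation of the distance-based gating is to read the code only when it sits closer to the view-angle center than the detected subject, on the assumption that the user frames the intended target centrally. The exact comparison rule is not stated in the abstract, so this is an illustrative assumption:

```python
import math

def should_read_code(frame_center: tuple[float, float],
                     subject_center: tuple[float, float],
                     code_center: tuple[float, float]) -> bool:
    """Read the two-dimensional code only if its detected region is no
    farther from the center of the view angle than the specific subject
    is -- a proxy for the user's framing intent."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(frame_center, code_center) <= dist(frame_center, subject_center)
```

A code incidentally caught near the frame edge (e.g. a poster behind the subject) would then be ignored without any extra user operation.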
Data Sharing Method and Device
A data sharing method and a device are provided. A first image is obtained by integrating a first 3D identifier and a second 3D identifier of a digital world into an image of the real world captured by a camera and performing AR rendering; the first 3D identifier is used to identify at least one of a building, a plant, or mountain scenery in the real world, and the second 3D identifier is used to identify a first user in the first image. The first device displays one or a plurality of virtual objects in response to a second operation. In response to a sliding operation whose sliding track starts at a first virtual object and ends at an image of the first user, a server is requested to transmit the first virtual object to a second device.
ELECTRONIC DEVICE AND IMAGING DEVICE
Even in a case where the amount of incident light is small, a high-quality captured image can be obtained.
An electronic device includes: a display unit; a first imaging unit that is disposed on a side opposite to a display surface of the display unit and is capable of capturing an image of light in an infrared light wavelength band that has passed through the display unit; a second imaging unit that is disposed on a side opposite to the display surface of the display unit and is capable of capturing an image of light in a visible light wavelength band that has passed through the display unit; and a correction unit that corrects image data imaged by the second imaging unit on the basis of image data imaged by the first imaging unit.
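The abstract does not specify how the correction unit combines the two frames; one crude illustrative stand-in is a per-pixel blend that pulls the noisy visible luminance toward the higher-SNR infrared luminance. Both the blending rule and the `strength` parameter below are assumptions for illustration only:

```python
def correct_visible(visible: list[list[float]],
                    infrared: list[list[float]],
                    strength: float = 0.5) -> list[list[float]]:
    """Pull each visible-light luminance value toward the co-located
    infrared value by `strength` -- a simplistic stand-in for the
    patent's (unspecified) correction based on the IR frame."""
    return [[v + strength * (ir - v) for v, ir in zip(v_row, ir_row)]
            for v_row, ir_row in zip(visible, infrared)]
```

A real implementation would first register the two frames and likely use the IR image only to guide denoising or gain, rather than blending intensities directly.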