H04M2250/52

Mobile terminal for displaying a preview image to be captured by a camera and control method therefor

Disclosed are a mobile terminal and a control method therefor. The mobile terminal according to one embodiment of the present invention comprises: a camera provided in a main body; a display unit for displaying a preview image for photographing by the camera; and a control unit for detecting entry into a subject recognition mode while the preview image is being displayed, analyzing, in the subject recognition mode, objects included in the preview image so as to recognize a subject to be photographed, and applying, to the preview image, a camera effect matched with the recognized subject. In addition, while the matched camera effect is being applied to the preview image, the control unit may further provide, on the display unit, on the basis of a touch input applied to the preview image, a list of camera effects additionally applicable to the recognized subject.
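The control flow described in the abstract — recognize the subject in the preview, apply the matched effect, and hold further effects for a touch input — can be sketched as follows. The recognizer, the effect tables, and all names here are illustrative assumptions, not taken from the disclosure:

```python
# Illustrative sketch of the described control flow: recognize the subject
# in a preview image, then look up the camera effect matched to it.
# The effect tables and dominant-class recognizer are placeholder assumptions.

SUBJECT_EFFECTS = {
    "person": "portrait_blur",      # soften the background behind a face
    "food": "warm_saturation",      # boost warmth for food shots
    "landscape": "hdr",             # widen dynamic range outdoors
}

ADDITIONAL_EFFECTS = {
    "person": ["beauty_filter", "studio_light"],
    "food": ["macro_focus"],
    "landscape": ["long_exposure"],
}

def recognize_subject(preview_objects):
    """Pick the dominant object class as the subject (placeholder logic).

    preview_objects maps object class -> detection score for the preview.
    """
    return max(preview_objects, key=preview_objects.get) if preview_objects else None

def apply_matched_effect(preview_objects):
    """Return (subject, matched effect, extra effects shown on touch input)."""
    subject = recognize_subject(preview_objects)
    effect = SUBJECT_EFFECTS.get(subject)
    extras = ADDITIONAL_EFFECTS.get(subject, [])
    return subject, effect, extras
```

For example, `apply_matched_effect({"person": 0.9, "food": 0.1})` would select `"person"` and return its matched `"portrait_blur"` effect plus the additional effects for the touch-triggered list.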

Imaging module, camera assembly and electronic device

Disclosed is an imaging module, including a housing, a moving element received in the housing, multiple lenses in contact with and fixed on the moving element, an image sensor provided at one side of the multiple lenses, and a drive mechanism connected to the housing and the moving element, wherein the drive mechanism is used for driving the moving element to move along the optical axis of the multiple lenses so that the multiple lenses focus on the image sensor for imaging. A camera assembly and an electronic device are further disclosed.

Selection of pulse repetition intervals for sensing time of flight

Sensing apparatus includes a radiation source, which emits pulses of optical radiation toward multiple points in a target scene. A receiver receives the optical radiation that is reflected from the target scene and outputs signals that are indicative of respective times of flight of the pulses to and from the points in the target scene. Processing and control circuitry selects a first pulse repetition interval (PRI) and a second PRI, greater than the first PRI, from a permitted range of PRIs, drives the radiation source to emit a first sequence of the pulses at the first PRI and a second sequence of the pulses at the second PRI, and processes the signals output in response to both the first and second sequences of the pulses in order to compute respective depth coordinates of the points in the target scene.
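One common reason to interleave two PRIs, consistent with this abstract, is range dealiasing: each PRI alone wraps distances at its ambiguity range c·PRI/2, but readings at two different PRIs jointly determine the true distance over a much longer combined range. A minimal sketch of that idea, under the assumption (not stated in the abstract) that the two sequences are combined this way:

```python
# Sketch of dual-PRI range dealiasing. Each PRI measures distance modulo
# its ambiguity range c*PRI/2; searching the candidates from the first PRI
# for one consistent with the second PRI recovers the true distance.
C = 299_792_458.0  # speed of light, m/s

def wrapped_distance(true_dist, pri):
    """Distance as measured with a single PRI (wraps at c*pri/2)."""
    ambiguity = C * pri / 2.0
    return true_dist % ambiguity

def dealias(d1, pri1, d2, pri2, max_range=1000.0, tol=0.05):
    """Find the distance consistent with both wrapped readings (meters)."""
    r1 = C * pri1 / 2.0  # ambiguity range of the first PRI
    r2 = C * pri2 / 2.0  # ambiguity range of the second PRI
    k = 0
    while (cand := d1 + k * r1) <= max_range:
        if abs((cand % r2) - d2) < tol:
            return cand
        k += 1
    return None
```

With, say, PRIs of 400 ns and 500 ns, each alone is ambiguous beyond roughly 60 m and 75 m respectively, yet a 150 m target is still recovered from the pair of wrapped readings.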

MACHINE LEARNING BASED PHONE IMAGING SYSTEM AND ANALYSIS METHOD

A machine learning based imaging system comprises an imaging apparatus for attachment to an imaging sensor of a mobile computing apparatus, such as the camera of a smartphone. A machine learning (or AI) based analysis system is trained on images captured with the imaging apparatus attached, and once trained may be deployed with or without the imaging apparatus. The imaging apparatus comprises an optical assembly that may magnify the image, an attachment arrangement, and a chamber or a wall structure that forms a chamber when placed against an object. The inner surface of the chamber is reflective and has a curved profile to create uniform lighting conditions on the one or more objects being imaged and uniform background lighting to reduce the dynamic range of the captured images.

PORTABLE TERMINAL DEVICE AND INFORMATION PROCESSING SYSTEM
20230039067 · 2023-02-09 ·

A portable terminal device in an information processing system and method includes a camera and a microphone. Data of obtained images and voice are transmitted to a server that identifies operations to be executed based on the received voice and image data. The server transmits an identification of one or more results of the plurality of operations to the portable terminal device. When the portable terminal device receives only one result from the server, an operation corresponding to the one result is executed, and when a plurality of results is received, the portable terminal device displays information corresponding to the plurality of results as candidates. Additional voice is captured for selecting one of the plurality of results during the displaying of the information. A determination of one result from the plurality of results is made based on the captured voice, and an operation corresponding to the determined result is executed.
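The client-side branch described above — execute immediately when the server returns one result, otherwise display candidates and select one from follow-up voice — can be sketched as below. The function names and the substring-matching selection rule are illustrative assumptions:

```python
# Sketch of the portable terminal's result handling: one server result is
# executed directly; multiple results are shown as candidates and resolved
# by a follow-up voice utterance. Matching logic is a placeholder.

def handle_server_results(results, capture_voice):
    """results: list of operation names from the server.
    capture_voice: callable returning the user's follow-up utterance as text.
    """
    if len(results) == 1:
        return f"executed:{results[0]}"
    # Multiple candidates: display them, then listen for a selection.
    spoken = capture_voice().lower()
    for candidate in results:
        if candidate.lower() in spoken:
            return f"executed:{candidate}"
    return "no match"
```

In use, `handle_server_results(["call mom"], mic)` executes at once, while two candidates defer to whatever operation the captured voice names.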

Car surveillance system
11496617 · 2022-11-08 ·

A safety system, particularly for a transport service software application. The system integrates into an existing transport service software application and communicates with a cellular-network mobile device of a transport service driver or of at least one passenger located in a vehicle offering rideshare services. In an emergency, a user of the system may press a panic button in the application. The mobile device then activates its camera and microphone, livestreams audio and video to emergency personnel, and transmits GPS data to said emergency personnel.

PORTABLE DEVICE
20230101985 · 2023-03-30 ·

The portable device is provided with a photograph unit to acquire, at a first time interval, super-wide-angle images whose field of view can extend to all peripheral directions around the portable device; a traveling direction calculation unit to calculate a traveling direction of the portable device at a second time interval; an image segmentation unit to segment, as a traveling direction image, an image of a predetermined range centered on the traveling direction from the latest acquired image every time the traveling direction calculation unit calculates the traveling direction; and a display control unit to display the traveling direction image in a predetermined traveling direction display region of a display of the portable device. The display control unit displays a video image designated by a user in a first display region, which is different from the traveling direction display region.
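The segmentation step — cutting a window centered on the current traveling direction out of a full-circle image — can be sketched as follows, modelling the panorama as a list of image columns spanning 0° to 360° (a simplifying assumption; the field-of-view parameter and names are illustrative):

```python
# Sketch of the image-segmentation step: from a 360-degree panorama (here a
# list of columns covering 0..360 degrees), cut out a fixed angular window
# centered on the traveling direction, wrapping around the 0/360 seam.

def segment_heading_view(panorama_cols, heading_deg, fov_deg=90):
    """Return the slice of columns centered on heading_deg.

    panorama_cols: columns evenly covering 0..360 degrees.
    heading_deg: traveling direction from the direction calculation unit.
    """
    n = len(panorama_cols)
    half = fov_deg / 2.0
    start = (heading_deg - half) % 360.0            # left edge of the window
    first = int(start / 360.0 * n)                  # index of that edge
    count = round(n * fov_deg / 360.0)              # columns in the window
    return [panorama_cols[(first + i) % n] for i in range(count)]
```

With one column per degree, a heading of 0° and a 90° window correctly wraps through the seam, spanning columns 315 through 44.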

Integration of user emotions for a smartphone or other communication device environment

Methods of real-time emoji and emoticon production are disclosed that include: determining, by a computing device, at least one first emotional state of a user from information, wherein the at least one first emotional state is a presently-identified emotional state of the user; providing an emoji or emoticon production template system, wherein the template system includes at least one physical attribute of the user; and utilizing the emoji or emoticon production template system to: analyze the presently-identified emotional state of the user; determine a suitable map of the presently-identified state of the user; map the presently-identified state of the user on an emoji or emoticon production template; produce at least one unique emoji or emoticon based on the map; provide the at least one unique emoji or emoticon to the user, wherein the user selects the at least one unique emoji or emoticon and includes the at least one unique emoji or emoticon in a text message, a direct message, an electronic mail message, or a combination thereof.
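The template pipeline above — map the presently-identified emotional state onto a production template, then fold in a physical attribute of the user — can be sketched minimally. The emotion table and the single "skin tone" attribute are placeholder assumptions, not from the disclosure:

```python
# Illustrative sketch of the emoji production template: an identified
# emotional state selects a template, and one physical attribute of the
# user is folded into the produced emoji. All tables are placeholders.

EMOTION_TEMPLATES = {
    "happy": "smiling_face",
    "sad": "frowning_face",
    "surprised": "astonished_face",
}

def produce_emoji(emotional_state, user_attributes):
    """Return a unique emoji identifier for the state and user attributes."""
    template = EMOTION_TEMPLATES.get(emotional_state, "neutral_face")
    tone = user_attributes.get("skin_tone", "default")
    return f"{template}:{tone}"
```

The produced identifier would then be offered to the user for inclusion in a text message, direct message, or email.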

VOICE CALL METHOD AND APPARATUS, TERMINAL, AND STORAGE MEDIUM
20230095163 · 2023-03-30 ·

Provided are a voice call method, an apparatus, a terminal, and a storage medium, relating to the technical field of terminals. The method includes: obtaining, in response to a voice call instruction, a display screen state of a flexible display screen, the display screen state comprising at least one of a rolled-up state and a spread-out state; controlling, in response to the flexible display screen being in the rolled-up state, the flexible display screen to spread out, wherein a first housing and a second housing move relative to each other while spreading out the flexible display screen, and wherein the distance between a sound receiving hole and a sound source after the spreading out is smaller than the distance between the sound receiving hole and the sound source before the spreading out; and collecting call voice through the microphone.
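The call-handling sequence — check the screen state, spread out if rolled up, then collect voice — amounts to a small state machine, sketched here with illustrative names:

```python
# Sketch of the described voice-call flow on a rollable terminal: on a call
# instruction, a rolled-up flexible screen is spread out (moving the two
# housings apart, which brings the sound receiving hole nearer the mouth)
# before voice is collected. States and names are placeholders.
from dataclasses import dataclass

ROLLED_UP = "rolled-up"
SPREAD_OUT = "spread-out"

@dataclass
class FlexiblePhone:
    screen_state: str = ROLLED_UP

    def spread_out(self):
        # Relative motion of the first and second housings spreads the
        # screen and shortens the sound-receiving-hole-to-source distance.
        self.screen_state = SPREAD_OUT

    def on_voice_call(self):
        if self.screen_state == ROLLED_UP:
            self.spread_out()
        return "collecting voice via microphone"
```

A phone already in the spread-out state skips the spreading step and proceeds directly to voice collection.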

CROSS SECTION VIEWS OF WOUNDS
20230094442 · 2023-03-30 ·

A non-transitory computer readable medium storing data and computer implementable instructions that, when executed by at least one processor, cause the at least one processor to perform operations for generating cross section views of a wound, the operations including receiving 3D information of a wound based on information captured using an image sensor associated with an image plane substantially parallel to the wound; generating a cross section view of the wound by analyzing the 3D information; and providing data configured to cause a presentation of the generated cross section view of the wound.
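Since the image plane is described as substantially parallel to the wound, the captured 3D information can be modelled as a depth map, and a cross section is then a depth profile sampled along a line through the image. A sketch under that assumption (sampling scheme and names are illustrative):

```python
# Sketch of cross-section generation from 3D wound information, modelled as
# a depth map (e.g. mm below the surrounding skin) captured with the image
# plane roughly parallel to the wound. A cross section view is the depth
# profile sampled along a line between two image points.

def cross_section(depth_map, p0, p1, samples=50):
    """Sample the depth profile along the line from p0 to p1.

    depth_map: 2D list of depth values; p0, p1: (row, col) endpoints.
    Nearest-pixel sampling keeps the sketch dependency-free.
    """
    (r0, c0), (r1, c1) = p0, p1
    profile = []
    for i in range(samples):
        t = i / (samples - 1)
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        profile.append(depth_map[r][c])
    return profile
```

The resulting profile is the data behind the presented view; for instance, its maximum gives the deepest point along that cut of the wound.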