Screen Display Control Method and Electronic Device
20220327190 · 2022-10-13
Inventors
CPC classification
G06F3/04886
PHYSICS
H04M1/0216
ELECTRICITY
G06F1/1652
PHYSICS
G06F1/1649
PHYSICS
G06F1/1616
PHYSICS
G06F2203/04803
PHYSICS
G06F1/1641
PHYSICS
G06F21/32
PHYSICS
G06F1/1684
PHYSICS
H04M1/72454
ELECTRICITY
International classification
G06F21/32
PHYSICS
H04M1/72454
ELECTRICITY
Abstract
A method is performed by an electronic device provided with a foldable screen that is divided into a first area and a second area. When the screen is folded, the first area corresponds to a first sensor, and the second area corresponds to a second sensor. The method includes displaying an interface of a first application in the first area, detecting first user identification information by using the first sensor, storing a correspondence between the first application and the first user identification information, and controlling display in the first area and the second area based on user identification information detected by the first sensor and the second sensor. In the foregoing method, an application is bound to user identification information.
Claims
1. A method implemented by an electronic device, wherein the method comprises: displaying a first interface of a first application in a first area of a foldable screen of the electronic device; detecting first user identification information by using a first sensor for the first area; storing a first correspondence between the first application and the first user identification information; and displaying, when detecting the first user identification information by using a second sensor for a second area of the foldable screen, the first interface in the second area based on the first correspondence.
2. The method according to claim 1, wherein the method further comprises turning off the first area when detecting the first user identification information by using the second sensor.
3. The method according to claim 1, wherein the method further comprises: displaying a second interface of a second application in the second area; detecting second user identification information by using the second sensor; storing a second correspondence between the second application and the second user identification information; and displaying, when detecting the second user identification information by using the first sensor and not detecting the first user identification information, the second interface in the first area based on the second correspondence.
4. The method according to claim 3, wherein when detecting the second user identification information by using the first sensor and the first user identification information by using the second sensor, the method further comprises: displaying the first interface in the second area; and displaying the second interface in the first area.
5. The method according to claim 3, wherein the method further comprises turning off the first area when any user identification information, including the first user identification information and the second user identification information, is not detected by using the first sensor.
6. The method according to claim 3, wherein the method further comprises turning off the first area when third user identification information detected by using the first sensor does not correspond to any application, including the first application and the second application, in the electronic device.
7. The method according to claim 3, wherein the method further comprises displaying the first interface in the first area when the first user identification information and the second user identification information are detected by using the first sensor.
8. The method according to claim 3, wherein the method further comprises displaying the first interface in the first area when the first user identification information and third user identification information are detected by using the first sensor and when the third user identification information does not correspond to any application, including the first application and the second application, in the electronic device.
9. The method according to claim 8, wherein the method further comprises: prompting a user whether to store a third correspondence between the first application and the third user identification information; detecting an operation in the first area; and in response to detecting the operation, storing the third correspondence.
10. The method according to claim 3, wherein when the first user identification information is detected by using both the first sensor and the second sensor, the method further comprises: displaying the second interface in the first area; and displaying the first interface in the second area.
11. The method according to claim 3, wherein when the first user identification information is detected by using both the first sensor and the second sensor, the method further comprises: displaying the first interface in the first area; and displaying the second interface in the second area.
12. The method according to claim 3, wherein the method further comprises: detecting a first operation in the first area; and in response to detecting the first operation: closing the second application; and displaying a desktop interface.
13. The method according to claim 3, wherein the method further comprises: detecting an operation in the first area; and in response to detecting the operation: closing the second application; and displaying a third interface that was displayed before the second application was started in the first area.
14. The method according to claim 12, wherein after the closing the second application, the method further comprises: detecting a second operation in the first area; in response to detecting the second operation: starting a third application; and displaying a third interface of the third application in the first area; and storing a third correspondence between the third application and the second user identification information.
15. The method according to claim 3, wherein the first user identification information and the second user identification information comprise face information, fingerprint information, and iris information.
16. The method according to claim 1, wherein before detecting first user identification information, the method further comprises prompting a user to enter user identification information corresponding to the first application.
17. The method according to claim 1, wherein the first application is a displayed application displayed in the first area before the first user identification information is detected by using the first sensor; or a selected application selected by a user from at least two available applications currently displayed in the first area.
18. The method according to claim 1, wherein before the detecting first user identification information by using the first sensor, the method further comprises: determining that the electronic device is in a folded form or a support form.
19. An electronic device, comprising: a foldable screen configured to divide into a first area and a second area when the foldable screen is folded; a first sensor configured to operate for the first area; a second sensor configured to operate for the second area; and a processor coupled to the foldable screen, the first sensor, and the second sensor and configured to: display a first interface of a first application in the first area; detect first user identification information by using the first sensor; store a first correspondence between the first application and the first user identification information; and display, when the first user identification information is detected by using the second sensor, the first interface in the second area based on the first correspondence.
20. A computer program product comprising computer-executable instructions that are stored on a non-transitory computer-readable medium and that, when executed by a processor, cause an electronic device comprising a foldable screen to: display a first interface of a first application in a first area of the foldable screen; detect first user identification information by using a first sensor for the first area; store a first correspondence between the first application and the first user identification information; and display, when the first user identification information is detected by using a second sensor for a second area of the foldable screen, the first interface in the second area based on the first correspondence.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0060] The following describes implementations of embodiments in detail with reference to accompanying drawings. In descriptions of embodiments of this application, "/" means "or" unless otherwise specified. For example, A/B may represent A or B. In this specification, "and/or" describes only an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, in the descriptions in embodiments of this application, "a plurality of" means two or more than two.
[0061] The following terms “first” and “second” are merely intended for description, and shall not be understood as an indication or implication of relative importance or implicit indication of a quantity of indicated technical features. Therefore, a feature limited by “first” or “second” may explicitly or implicitly include one or more features.
[0062] A screen display control method provided in embodiments of this application may be performed by an electronic device having a flexible screen, such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable device, or a virtual reality device. This is not limited in embodiments of this application.
[0064] The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communications module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like.
[0065] It may be understood that the structure shown in embodiments of this application does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than the components shown in the figure, some components may be combined, or some components may be split, or different component arrangements may be used. The components shown in the figure may be implemented through hardware, software, or a combination of software and hardware.
[0066] The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The controller may be a nerve center and a command center of the electronic device 100. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction reading and instruction execution. The memory may be further disposed in the processor 110, and is configured to store instructions and data. In some embodiments, the processor 110 may include one or more interfaces.
[0067] The charging management module 140 is configured to receive a charging input from a charger.
[0068] The power management module 141 is configured to connect to the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives an input of the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, an external memory, the display 194, the camera 193, the communications module 160, and the like.
[0069] A wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communications module 160, the modem processor, the baseband processor, and the like.
[0070] The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. In some other embodiments, the antenna may be used in combination with a tuning switch. The radio frequency module 150 may provide a solution that is applied to the electronic device 100 and that includes wireless communications technologies such as 2G, 3G, 4G, and 5G. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The communications module 160 may provide a wireless communication solution that is applied to the electronic device 100 and that includes a wireless local area network (WLAN) (for example, a Wi-Fi network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near-field communication (NFC) technology, and an infrared (IR) technology. The communications module 160 may be one or more components integrating at least one communication processor module. The communications module 160 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor 110. The communications module 160 may further receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna 2.
[0071] In some embodiments, the antenna 1 of the electronic device 100 is coupled to the radio frequency module 150, and the antenna 2 is coupled to the communications module 160, so that the electronic device 100 may communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a Global System for Mobile Communications (GSM), a General Packet Radio Service (GPRS), code-division multiple access (CDMA), wideband code-division multiple access (WCDMA), time-division code-division multiple access (TD-SCDMA), Long-Term Evolution (LTE), new radio (NR) in a 5th generation (5G) mobile communications system, a future mobile communications system, BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
[0072] The electronic device 100 implements a display function by using the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is configured to perform mathematical and geometric calculation, and render an image. The processor 110 may include one or more GPUs, which execute program instructions to generate or change display information. Optionally, the display 194 may include a display and a touch panel. The display is configured to output display content to the user, and the touch panel is configured to receive a touch event entered by the user on the flexible display 194.
[0073] The display 194 is configured to display an image, a video, or the like. The display 194 includes a display panel. The display panel may be a liquid-crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or more displays 194.
[0074] In some embodiments, when the display panel is made of a material such as an OLED, an AMOLED, or an FLED, the display 194 shown in
[0075] For an electronic device configured with a foldable display, the foldable display of the electronic device may be switched between a small screen in a folded form and a large screen in an expanded form at any time.
[0076] A mobile phone is used as an example. As shown in
[0077] As shown in
[0078] It should be noted that, after the user folds the flexible display 194 along the folding line AB, the first area and the second area may be disposed opposite to each other, or the first area and the second area may be disposed back to back. As shown in
[0079] In some embodiments, as shown in
[0080] It should be understood that the folding line AB may alternatively be distributed horizontally, and the display 194 may be folded up and down. In other words, the first area and the second area of the display 194 may correspond to upper and lower sides of the middle folding line AB. In this application, an example in which the first area and the second area are distributed left and right is used for description.
[0081] For example, as shown in
[0082] Embodiments of this application provide a method for controlling display on the first area and the second area. The third area may be independently used for display, or may be used for display following the first area or the second area, or may not be used for display. This is not specifically limited in embodiments of this application.
[0083] Because the display 194 can be folded, a physical form of the electronic device may also change accordingly. For example, when the display 194 is fully expanded, a physical form of the electronic device may be referred to as an expanded form. When a part of an area of the display 194 is folded, a physical form of the electronic device may be referred to as a folded form. It may be understood that, in the following embodiments of this application, a physical form of the display 194 may refer to a physical form of the electronic device.
[0084] After the user folds the display 194, there is an included angle between the first area and the second area that are obtained by division.
[0085] In some embodiments, based on a size of the included angle between the first area and the second area, the display 194 of the electronic device may include at least three physical forms: an expanded form, a folded form, and a half-folded form (or referred to as a support form) in which the display is folded at a specific angle.
[0086] Expanded form: When the display 194 is in the expanded form, the display 194 may be shown in
[0087] Folded form: When the display 194 is in the folded form, the display 194 may be shown in
[0088] Support form: When the display 194 is in the support form, the display 194 may be shown in
[0089] In addition, the support form of the display 194 may further include an unstable support form and a stable support form. In the stable support form, a range of the second angle β is a4 ≤ β ≤ a3, where a4 is less than or equal to 90 degrees, and a3 is greater than or equal to 90 degrees and less than 180 degrees. In the support form of the display 194, any form other than the stable support form is the unstable support form of the display 194.
[0090] In some other embodiments, a physical form of the display 194 may be divided into only a folded form and an expanded form. As shown in
[0091] It should be understood that division of physical forms of the display 194 and a definition of each physical form are not limited in this application.
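The angle-based division of physical forms described above can be sketched as follows. This is only an illustrative classification: the class and threshold names are hypothetical, and the application does not fix specific numeric values for the angle boundaries (including a3 and a4), so the constants below are assumptions for demonstration.

```java
// Hypothetical sketch: classify the physical form of a foldable display
// from the included angle (in degrees) between the first area and the
// second area. All threshold values are illustrative assumptions.
enum PhysicalForm { EXPANDED, FOLDED, STABLE_SUPPORT, UNSTABLE_SUPPORT }

final class FormClassifier {
    static final double FOLDED_MAX = 30.0;    // assumed upper bound of the folded form
    static final double STABLE_MIN = 60.0;    // assumed a4 (less than or equal to 90 degrees)
    static final double STABLE_MAX = 130.0;   // assumed a3 (at least 90, less than 180 degrees)
    static final double EXPANDED_MIN = 170.0; // assumed lower bound of the expanded form

    static PhysicalForm classify(double angle) {
        if (angle >= EXPANDED_MIN) return PhysicalForm.EXPANDED;
        if (angle <= FOLDED_MAX) return PhysicalForm.FOLDED;
        if (angle >= STABLE_MIN && angle <= STABLE_MAX) {
            return PhysicalForm.STABLE_SUPPORT; // stable support form: a4 <= angle <= a3
        }
        return PhysicalForm.UNSTABLE_SUPPORT;   // any other support-form angle
    }
}
```

On a real device, the angle would come from the gyroscope or acceleration sensors described later in this application rather than being passed in directly.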
[0092] The sensor module 180 may include one or more of a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor (for example, a Hall effect sensor), an acceleration sensor, a distance sensor, an optical proximity sensor, a fingerprint sensor, a structured light sensor, an iris sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like. This is not limited in embodiments of this application.
[0093] The pressure sensor is configured to sense a pressure signal, and may convert the pressure signal into an electrical signal. When a touch operation is performed on the display 194, the electronic device 100 detects a strength of the touch operation based on the pressure sensor. The electronic device 100 may also calculate a touch position based on a detection signal of the pressure sensor.
[0094] The gyroscope sensor may be configured to determine a motion posture of the electronic device 100. In embodiments of this application, a gyroscope sensor on each screen may also determine the included angle between the first area and the second area after the electronic device 100 is folded, to determine a physical form of the electronic device 100.
[0095] The fingerprint sensor is configured to collect a fingerprint. The electronic device 100 may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. In embodiments of this application, the electronic device 100 may collect fingerprint information of users in the first area and the second area by using fingerprint sensors, to determine a user that currently uses a screen on this side.
[0096] The structured light sensor may be configured to collect face information of the user. The electronic device 100 may use the collected face information to implement face-based unlocking, application lock access, photo beautification, and the like. In embodiments of this application, the electronic device 100 may collect face information of users in the first area and the second area by using structured light sensors, to determine a user that currently uses a screen on this side.
[0097] The iris sensor may be configured to collect iris information of the user. The electronic device 100 may use the collected iris information to implement iris-based unlocking, application lock access, iris-based photographing, and the like. In embodiments of this application, the electronic device 100 may collect iris information of users in the first area and the second area by using iris sensors, to determine a user that currently uses a screen on this side.
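The per-area user identification described in the preceding paragraphs can be sketched as follows. The BiometricSensor interface and its detectUser method are hypothetical names used only for illustration: each area's sensor (fingerprint, structured light, or iris) reports which enrolled user, if any, it currently recognizes on that side of the screen.

```java
import java.util.Optional;

// Hypothetical abstraction over a fingerprint, structured light, or iris
// sensor serving one display area; returns the matched user's identifier,
// or empty if no enrolled user is currently detected.
interface BiometricSensor {
    Optional<String> detectUser();
}

// Pairs each display area with its own sensor, so the device can determine
// which user currently faces each side of the folded screen.
final class AreaUserDetector {
    private final BiometricSensor firstAreaSensor;
    private final BiometricSensor secondAreaSensor;

    AreaUserDetector(BiometricSensor first, BiometricSensor second) {
        this.firstAreaSensor = first;
        this.secondAreaSensor = second;
    }

    Optional<String> userFacingFirstArea() { return firstAreaSensor.detectUser(); }
    Optional<String> userFacingSecondArea() { return secondAreaSensor.detectUser(); }
}
```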
[0098] The touch sensor is also referred to as a "touch panel". The touch sensor may be disposed on the display 194, and the touch sensor and the display 194 form a touchscreen. The touch sensor is configured to detect a touch operation performed on or near the touch sensor. The touch sensor may transfer the detected touch operation to the application processor, to determine a type of a touch event. A visual output related to the touch operation may be provided on the display 194. In some other embodiments, the touch sensor may alternatively be disposed on a surface of the electronic device 100, at a position different from that of the display 194.
[0099] It should be understood that the foregoing merely shows some sensors in the electronic device 100 and functions of the sensors. The electronic device may include more or fewer sensors. For example, the electronic device 100 may further include an acceleration sensor, a gravity sensor, and the like. In embodiments of this application, a foldable electronic device may include a first area and a second area that form a particular angle in a foldable form. The electronic device may determine a folding direction of the electronic device and an included angle between the first area and the second area by using an acceleration sensor and a gravity sensor after folding.
[0100] The electronic device 100 can implement a photographing function by using the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
[0101] The ISP is configured to process data fed back by the camera 193. In some embodiments, the ISP may be disposed in the camera 193. The camera 193 is configured to capture a static image or a video. In some embodiments, the electronic device 100 may include one or N cameras 193, where N is a positive integer greater than 1. The digital signal processor is configured to process a digital signal, and may process another digital signal in addition to a digital image signal.
[0102] The video codec is configured to compress or decompress a digital video. The electronic device 100 may support one or more video codecs.
[0103] The external memory interface 120 may be configured to connect to an external storage card such as a micro SD card, to extend a storage capability of the electronic device 100. The internal memory 121 may be configured to store computer-executable program code. The executable program code includes instructions.
[0104] The processor 110 performs various function applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound playing function and an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) and the like created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, or may include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory, or a universal flash storage (UFS).
[0105] The electronic device 100 may implement an audio function such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
[0106] The button 190 includes a power button, a volume button, and the like. The button 190 may be a mechanical button, or may be a touch button. The electronic device 100 may receive a button input, and generate a button signal input related to user setting and function control of the electronic device 100.
[0107] The motor 191 may generate a vibration prompt. The motor 191 may be configured to produce an incoming call vibration prompt and a touch vibration feedback.
[0108] The indicator 192 may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like.
[0109] The SIM card interface 195 is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195, to implement contact with or separation from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. In some embodiments, the electronic device 100 uses an eSIM, namely, an embedded SIM card. The eSIM card may be embedded in the electronic device 100, and cannot be separated from the electronic device 100.
[0110] A layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture may be used for a software system of the electronic device 100. It should be understood that the method for establishing a connection between devices provided in embodiments of this application is applicable to systems such as Android and iOS, and the method has no dependency on a system platform of a device. In embodiments of this application, an Android system with a layered architecture is used as an example to describe a software structure of the electronic device 100.
[0112] In the layered architecture, software is divided into several layers, and each layer has a clear role and task. The layers communicate with each other through a software interface. In some embodiments, an Android system is divided into four layers from top to bottom: an application layer, an application framework layer, an Android runtime and system library, and a kernel layer.
[0113] The application layer may include a series of application packages. As shown in
[0114] The application framework layer provides an application programming interface (API) and a programming framework for an application at the application layer. The application framework layer includes some predefined functions. As shown in
[0115] The keyguard service may be used to obtain, from an underlying display system, user identification information detected on the first area side and user identification information detected on the second area side. Further, the keyguard service may generate or update, based on the obtained user identification information, a binding relationship stored in a directory of the keyguard service, and determine specific content displayed in the first area and the second area. Further, the keyguard service may display, in the first area and the second area by using the window manager, content corresponding to the user identification information detected on the sides.
[0116] The binding relationship may be a correspondence between user identification information, screen content, and a display area. The user identification information is information that can uniquely determine a user identity. For example, the user identification information may be face information of a user collected by the structured light sensor, fingerprint information of a user collected by the fingerprint sensor, or iris information of a user collected by the iris sensor.
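A minimal sketch of such a binding relationship, assuming it is held as an in-memory table keyed by user identification information; all class, method, and field names below are hypothetical, not names from this application.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the binding relationship: a correspondence
// between user identification information, screen content (an
// application's interface), and a display area.
final class BindingTable {
    record Binding(String application, String displayArea) {}

    private final Map<String, Binding> bindings = new HashMap<>();

    // Store a correspondence between a user's identification information
    // (e.g., a matched face, fingerprint, or iris template identifier)
    // and the application currently displayed for that user.
    void bind(String userId, String application, String displayArea) {
        bindings.put(userId, new Binding(application, displayArea));
    }

    // Look up the content bound to user identification information
    // detected on either side; empty if no binding was stored.
    Optional<Binding> lookup(String userId) {
        return Optional.ofNullable(bindings.get(userId));
    }
}
```

In the architecture described above, the keyguard service would update such a table from the user identification information reported by the underlying display system, then instruct the window manager what to draw in each area.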
[0117] The system library, the kernel layer, and the like below the application framework layer may be referred to as an underlying system. The underlying system includes the underlying display system configured to provide a display service. For example, the underlying display system includes a display driver at the kernel layer and a surface manager in the system library.
[0118] The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
[0119] The core library includes two parts: functions that need to be called by the Java language, and a core library of Android.
[0120] The application layer and the application framework layer run on the virtual machine. The virtual machine executes Java files of the application layer and the application framework layer as binary files. The virtual machine is configured to implement functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
[0121] The system library may include a plurality of function modules, for example, a surface manager, a media library, a three-dimensional graphics processing library (for example, Open Graphics Library Embedded Systems (OpenGL ES)), and a two dimensional (2D) graphics engine (for example, Scalable Graphics Library (SGL)).
[0122] The surface manager is configured to manage a display subsystem and provide fusion of 2D and three-dimensional (3D) layers for a plurality of applications.
[0123] The media library supports playing and recording of a plurality of commonly used audio and video formats, static image files, and the like. The media library may support a plurality of audio and video coding formats such as Moving Picture Experts Group (MPEG)-4, H.264, MP3, AAC, AMR, JPG, and PNG.
[0124] The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
[0125] The 2D graphics engine is a drawing engine for 2D drawing.
[0126] The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, and the like. This is not limited in embodiments of this application.
[0127]
[0128] For ease of understanding, in the following embodiments of this application, a mobile phone having the structures shown in
[0129] As described in the background, because a foldable electronic device has usable areas on two sides, a user may change a used display area. Currently, when the user changes from facing one display area to facing the other display area for viewing, content currently viewed by the user is still displayed in the original display area. This is inconvenient for the user to view and operate.
[0130] In embodiments of this application, when the electronic device is in a support form or a folded form, a display includes at least two areas, and the two areas may display content of different applications. The electronic device may bind applications corresponding to the two areas to user identification information collected by using the sensor module, to display, in an area currently used by the user, content that matches the user. This implements “a screen change following a user”, and is convenient for the user to view and operate.
[0131] For example, when the electronic device is in the folded form, the display is divided into a first area and a second area shown in
[0132] For another example, when the electronic device is in the support form, the display is divided into a first area and a second area shown in
[0133] The technical solutions in embodiments of this application may be used in a scenario in which the electronic device is used in split-screen mode, for example, a scenario in which the electronic device is in the folded form or the support form. The following describes the technical solutions in embodiments of this application in detail with reference to accompanying drawings.
[0134] In some embodiments, if a user wants to use a screen switching function, the user needs to set an electronic device in advance.
[0135]
[0136]
[0137] It should be understood that the foregoing interfaces may include more or fewer setting icons. This is not specifically limited in embodiments of this application.
[0138]
[0139]
[0140] It should be understood that the foregoing interfaces may include more or fewer setting icons or options. This is not specifically limited in embodiments of this application.
[0141] In some embodiments, the user may simultaneously enable a plurality of the screen switching manners shown in
[0142] It may be understood that the interfaces shown in
[0143] It may be understood that the user may perform the setting operation before using the electronic device in split-screen mode, or may perform the setting operation on a screen on one side when using the electronic device in split-screen mode. This is not specifically limited in embodiments of this application.
[0144] When the electronic device is in the support form or the folded form, the electronic device may automatically start a screen switching process. A manner of determining a form of the electronic device by the electronic device is not specifically limited in embodiments of this application. For example, the electronic device may determine the form of the electronic device based on an included angle between the first area and the second area.
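The form determination based on the included angle between the first area and the second area can be sketched as a simple threshold classification. The angle thresholds below are assumptions chosen for illustration; the source does not specify particular values.

```python
def determine_form(included_angle_deg):
    """Classify the device form from the included angle between the
    first area and the second area. Thresholds are illustrative
    assumptions, not values specified in the source."""
    if included_angle_deg <= 10:
        return "folded"    # the two areas are folded back to back
    if included_angle_deg < 170:
        return "support"   # partially folded, standing like a tent
    return "unfolded"      # effectively flat: one large screen
```

In the folded or support form, the electronic device would then automatically start the screen switching process.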
[0145] After starting the screen switching process, the electronic device determines whether its screen switching function is enabled.
[0146] When determining that the screen switching function of the electronic device is enabled, the electronic device may pop up a selection interface in the first area and/or the second area of the display, to prompt the user to perform screen binding.
[0147] In some embodiments, the electronic device may automatically pop up a selection interface, to prompt the user to perform screen binding.
[0148] In an example, when detecting that only one application is displayed in the first area, the electronic device automatically pops up a selection interface in the first area; and/or when detecting that only another application is displayed in the second area, the electronic device automatically pops up a selection interface in the second area, to prompt the user to perform screen binding. For example, when the electronic device detects that an application 1 is displayed in full screen in the first area, the electronic device automatically pops up a selection interface in the first area; and/or when the electronic device detects that an application 2 is displayed in full screen in the second area, the electronic device automatically pops up a selection interface in the second area. For example, when the application 1 is displayed in full screen in the first area, the electronic device may pop up a selection interface shown in
[0149] Similar to the first area, when the application 2 is displayed in full screen in the second area, a user 2 facing the second area may also be prompted to perform screen binding. The electronic device may generate a binding relationship shown in the second row in Table 1. The second row in Table 1 indicates that second user identification information is detected in the second area, the second user identification information is bound to the application 2, and the application 2 is an application currently displayed in full screen in the second area.
[0150] After the foregoing binding process is completed, the first area and/or the second area may display a prompt window shown in
TABLE 1

User identification information | Area | Display content
First user identification information | First area | Interface of the application 1
Second user identification information | Second area | Interface of the application 2
[0151] It should be noted that the electronic device may directly display the interface shown in
[0152] In another example, when determining that the screen switching function of the electronic device is enabled, the electronic device may automatically pop up the selection interface shown in
[0153] In some other embodiments, after receiving a binding instruction of the user, the electronic device may pop up a selection interface, to prompt the user to perform screen binding.
[0154] In an example, when determining that the electronic device starts the screen switching process and enables the screen switching function, the electronic device may display a binding button in the first area and/or the second area, and the user may indicate, by using the button, the electronic device to start binding. For example, the electronic device may display a binding start button shown in
[0155] For example, after starting, in the first area, an application 1 to be bound, the user 1 may tap the binding start button. After receiving a binding start instruction from the user 1, the electronic device may pop up a selection interface shown in
[0156] For another example, when the electronic device has a plurality of bindable applications in the first area, after receiving a binding start instruction of the user, the electronic device may pop up a selection interface shown in
[0157] For
[0158] It should be noted that the foregoing screen binding process may alternatively be in another sequence. This is not specifically limited in embodiments of this application. For example, the electronic device may further prompt the user to enter user identification information, and then prompt the user to select a to-be-bound application.
[0159] It should be understood that forms of interfaces, windows, prompts, and binding relationships shown in
[0160] The following describes a screen switching method in embodiments of this application by using the electronic device in the support form as an example.
[0161] For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
[0162] After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
[0163] As shown in
TABLE 2

User identification information | Area | Display content
First face information | First area | Interface of the application 1
[0164] As shown in
TABLE 3

User identification information | Area | Display content
First face information | Second area | Interface of the application 1
[0165] The electronic device may control, based on the updated binding relationship, the second area to display the interface of the application 1. In this case, the first area may enter a screen-off state, or may continue to display another interface, for example, a desktop interface. This is not specifically limited in embodiments of this application.
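The switching behavior in this example (the update from Table 2 to Table 3) can be sketched as a handler invoked when a sensor detects a bound user's face on either side: if the user now appears on the other side, the binding's area field is updated and the display follows. All names are hypothetical; this is a sketch of the described behavior, not an actual device implementation.

```python
def on_face_detected(bindings, face_info, detected_area):
    """Move a bound interface to the area where the user now appears.

    `bindings` maps face_info -> {"area": ..., "content": ...}.
    Returns (area_to_display, content, area_to_turn_off), or None
    when no switch is needed.
    """
    entry = bindings.get(face_info)
    if entry is None:
        return None                    # unknown user: nothing to switch
    if entry["area"] == detected_area:
        return None                    # user is still on the same side
    old_area = entry["area"]
    entry["area"] = detected_area      # update the binding (Table 2 -> Table 3)
    # The old area may enter a screen-off state or show another interface,
    # such as a desktop interface; that policy is left to the caller.
    return (detected_area, entry["content"], old_area)
```

For example, when the first face information is detected by the second area's sensor, the handler returns the second area as the display target for the interface of the application 1.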
[0166] For example, the electronic device controls, based on collected fingerprint information, switching of display interfaces of areas on two sides of the display.
[0167] After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
[0168] As shown in
[0169] After collecting fingerprint information of the user 1, the electronic device may generate a binding relationship similar to that in Table 2, except that the user identification information is the fingerprint information.
[0170] As shown in
[0171] Similar to controlling, based on the collected face information, switching of display interfaces of display areas on two sides of the display, the electronic device may further control, based on collected iris information, switching of display interfaces of areas on two sides of the display. Specifically, as shown in
[0172] In this way, when a location of the user relative to the electronic device changes, the electronic device may display, on a screen currently used by the user, content associated with the user, and the user does not need to perform an additional operation to perform screen switching. This is convenient for the user to view and operate.
[0173] In addition, it should be understood that when the same user identification information is detected in both the first area and the second area, the electronic device may switch display of the first area and the second area, or may not switch display of the first area and the second area. This is not specifically limited in embodiments of this application. For example, the user switches display of the first area and display of the second area of the electronic device by using two fingers whose fingerprint information has been entered in advance.
[0174] In this application, when the user performs an operation in the first area or the second area to close an application bound to the user, after detecting the user's operation of closing the application, the electronic device may delete the binding relationship shown in Table 2 or Table 3. In this case, the electronic device may control the first area or the second area to display an interface displayed before the user opens the application 1, or the electronic device may control the first area or the second area to display a specific interface, for example, display a desktop interface.
[0175] Further, the user 1 opens the application 2.
[0176] In some embodiments, after the electronic device detects the opening operation, a binding prompt may pop up in the first area or the second area, to prompt the user 1 to perform screen binding again. For example, a prompt window shown in
TABLE 4

User identification information | Area | Display content
First user identification information | First area | Interface of the application 2
TABLE 5

User identification information | Area | Display content
First user identification information | Second area | Interface of the application 2
[0177] In some other embodiments, the electronic device may automatically perform screen binding again without prompting the user, to generate the binding relationship shown in Table 4 or Table 5.
[0178] In this way, even if the user changes an application, “a screen change following a user” can still be implemented, to improve viewing and operation experience of the user.
[0179]
[0180] For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
[0181] As shown in
[0182] After the electronic device enters the support form, the screen switching process is started, and a user 1 and a user 2 may be prompted, in a manner shown in
[0183] As shown in
TABLE 6

User identification information | Area | Display content
First face information | First area | Interface of the application 1
Second face information | Second area | Interface of the application 2
[0184] The electronic device updates the binding relationships based on a status of the collected face information.
[0185] In a possible case shown in
TABLE 7

User identification information | Area | Display content
First face information | Second area | Interface of the application 1
Second face information | First area | Interface of the application 2
[0186] Based on the updated binding relationships, the electronic device may control the first area to display the interface of the application 2, and the second area to display the interface of the application 1.
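When the two bound users are detected on opposite sides simultaneously (the update from Table 6 to Table 7), the two binding entries can simply exchange their area fields. A minimal sketch with hypothetical names:

```python
def swap_bindings(bindings, user_a, user_b):
    """Exchange the display areas of two bound users, e.g. when the
    first face information is detected by the second area's sensor and
    the second face information by the first area's sensor (Table 6 -> Table 7)."""
    entry_a, entry_b = bindings[user_a], bindings[user_b]
    entry_a["area"], entry_b["area"] = entry_b["area"], entry_a["area"]
```

After the swap, each area displays the interface bound to the user it now faces.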
[0187] In another possible case shown in
[0188] In some embodiments, a selection interface may pop up in the second area, to prompt the user to perform screen binding again. For example, a selection interface shown in
TABLE 8

User identification information | Area | Display content
Second face information | Second area | Interface of the application 2
First face information | Second area | Interface of the application 2
[0189] The electronic device may control, based on the updated binding relationships, the second area to display the interface of the application 2. In this case, the first area may enter a screen-off state, or continue to display the interface of the application 1. This is not limited in embodiments of this application.
[0190] In some other embodiments, because the first face information has been bound to the interface of the application 1, when the user 1 faces the second area, the electronic device may determine that the first face information has a binding relationship, and does not bind the first face information again. That is, the electronic device does not update the binding relationships, and the binding relationships are still those shown in Table 6. In this way, the electronic device still controls, based on the binding relationships, the second area to display the interface of the application 2. Optionally, because the electronic device does not detect the first face information in the first area, the electronic device may pause and exit a process of an application corresponding to the first area, and control the first area to enter the screen-off state. Optionally, the electronic device may also control the first area to continue to display the interface of the application 1.
[0191] That is, if a current display area is bound to a user, when a new user appears on the side of that display area, even if the new user is bound to an interface of an application, the display content of the display area is not switched to the interface bound to the new user. In other words, the content displayed in the display area does not change.
[0192] In still another possible case shown in
TABLE 9

User identification information | Area | Display content
First face information | First area | Interface of the application 1
Second face information | Second area | Interface of the application 2
Third face information | First area | Interface of the application 1
[0193] Based on the updated binding relationships, the electronic device may control the first area to display the interface of the application 1, and the second area to display the interface of the application 2.
[0194] When a new user (for example, the user 1 in
[0195] Similarly, as shown in
[0196] Similarly, as shown in
[0197] It should be understood that forms of interfaces, windows, and prompts shown in
[0198] In this way, when a plurality of users use the foldable electronic device in split-screen mode, when a location of a user relative to the electronic device changes, the electronic device may display, on a screen currently used by the user, content associated with the user, and the user does not need to perform an additional operation to switch screens, to improve viewing and operation experience of the user.
[0199] It may be understood that, that the location of the user changes in this embodiment of this application means that a location of the user relative to the electronic device changes. In other words, the location of the user may change, or a location or a direction of the electronic device may change. For example, that the user 1 moves from the side of the first area to the side of the second area may be that the user 1 changes a location, or may be that the user 1 rotates the electronic device, so that the second area faces the user 1. For another example, that the user 1 and the user 2 exchange locations may be that the user 1 moves to a location of the user 2 and the user 2 moves to a location of the user 1, or may be that the user rotates the electronic device, so that the first area faces the user 2 and the second area faces the user 1.
[0200] In this embodiment of this application, the electronic device may further determine a status of the user, such as “present” or “absent”, based on whether a sensor collects user identification information, to control screen display.
[0201] For example, the electronic device controls, based on collected face information, switching of display interfaces of areas on two sides of the display.
[0202] After the electronic device enters the support form, the screen switching process is started, and the user 1 may be prompted, in a manner shown in
[0203] As shown in
TABLE 10

User identification information | Area | Display content | Status
First face information | First area | Interface of the application 1 | Present
[0204] As shown in
[0205] There are many manners in which the electronic device determines that the user 1 is absent. For example, when the first face information is not detected in the first area, it is determined that the user 1 is absent. For another example, when the first face information is not detected in the first area within a preset period of time, it is determined that the user 1 is absent. For another example, when the first face information is not detected in the first area or the second area, it is determined that the user 1 is absent. For another example, when the first face information is not detected in the first area or the second area within a preset period of time, it is determined that the user 1 is absent. For another example, when a quantity of detection periods in which the first face information is not detected in the first area is greater than or equal to a preset value, it is determined that the user 1 is absent. For another example, when a quantity of detection periods in which the first face information is not detected in the first area or the second area is greater than or equal to a preset value, it is determined that the user 1 is absent. For another example, when no face information is detected in the first area, or when detected face information does not correspond to any application in the electronic device, it is determined that the user 1 is absent.
[0206] When the electronic device determines that the user 1 leaves, the electronic device updates the user status to “absent”, as shown in Table 11.
TABLE 11

User identification information | Area | Display content | Status
First face information | First area | Interface of the application 1 | Absent
[0207] When the user status is “absent”, the electronic device may control the first area to turn off the screen, and pause a process of the application corresponding to the first area. For example, when the user 1 plays a video by using the electronic device, the electronic device pauses video playing, and controls the first area to turn off the screen.
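One of the absence criteria described above, a quantity of detection periods without the user's face reaching a preset value, can be sketched as a per-user counter. The threshold of three periods is an illustrative assumption, and the class name is hypothetical.

```python
class PresenceTracker:
    """Track user presence from periodic face-detection results.
    The miss threshold of 3 periods is an illustrative assumption."""

    def __init__(self, miss_threshold=3):
        self.miss_threshold = miss_threshold
        self.misses = 0
        self.status = "present"

    def on_period(self, face_detected):
        if face_detected:
            # User returned: reset the counter; the device may turn on
            # and unlock the area and resume the paused application.
            self.misses = 0
            self.status = "present"
        else:
            self.misses += 1
            if self.misses >= self.miss_threshold:
                # User left: the device may turn off the area and pause
                # the process of the corresponding application.
                self.status = "absent"
        return self.status
```

A single missed detection period therefore does not mark the user absent, which avoids turning off the screen when the user merely looks away briefly.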
[0208] The electronic device continues to detect the face information of the user 1.
[0209] In a possible case shown in
TABLE 12

User identification information | Area | Display content | Status
First face information | First area | Interface of the application 1 | Present
[0210] The electronic device may turn on and unlock the first area, and resume the processes of the application corresponding to the first area. For example, when the user plays a video by using the electronic device, the electronic device turns on the first area and continues to play the video.
[0211] In another possible case shown in
TABLE 13

User identification information | Area | Display content | Status
First face information | Second area | Interface of the application 1 | Present
[0212] The electronic device may turn on and unlock the second area, and resume the processes of the application corresponding to the second area. For example, when the user plays a video by using the electronic device, the electronic device turns on the second area and continues to play the video.
[0213] When the electronic device is set to a fingerprint-based screen switching manner or an iris-based screen switching manner, a manner of determining whether a user is present or absent and a screen display manner of the electronic device are similar to those shown in
[0214] In this way, when the user is absent, the electronic device may turn off the screen. This helps reduce power consumption of the electronic device. When the user is present again, content previously viewed by the user is automatically displayed, and the user does not need to perform an additional operation. This helps improve viewing and operation experience of the user.
[0215] It should be further understood that a disposing position of the sensor in
[0216] The foregoing describes, by using
[0217] With reference to
[0218] It should be understood that the first sensor and the second sensor may be any sensor that can detect user identification information, for example, may be a fingerprint sensor, an iris sensor, or a structured light sensor.
[0219] Disposing positions of the first sensor and the second sensor are not specifically limited in this application, provided that the first sensor can detect user identification information entered by a user in the first area and the second sensor can detect user identification information entered by a user in the second area.
[0220] For example, the first sensor may be disposed in the first area, and the second sensor may be disposed in the second area.
[0221] For another example, the first sensor and the second sensor may also be disposed on a same side, but are respectively configured to detect the user identification information entered by the user in the first area and the user identification information entered by the user in the second area.
[0222] The user identification information is information that can uniquely determine a user identity. For example, the user identification information may be face information of a user collected by the structured light sensor, fingerprint information of a user collected by the fingerprint sensor, or iris information of a user collected by the iris sensor.
[0223] The method 2000 includes the following steps.
[0224] 2010: Display an interface of a first application in the first area.
[0225] For example, as shown in
[0226] For example, as shown in
[0227] For example, as shown in
[0228] For example, the first application is an application displayed in the first area before first user identification information is detected by using the first sensor.
[0229] For example, the first application is an application selected by the user from at least two applications currently displayed in the first area.
[0230] 2020: Detect the first user identification information by using the first sensor.
[0231] For example, as shown in
[0232] For example, as shown in
[0233] For example, as shown in
[0234] Optionally, before the first user identification information is detected by using the first sensor, it is determined that the electronic device is in a folded form or a support form, and a screen switching process is started.
[0235] Optionally, before the first user identification information is detected by using the first sensor, the electronic device is set, to enable a screen switching function.
[0236] For example, as shown in
[0237] Optionally, when the screen switching process is started and it is determined that the screen switching function of the electronic device is enabled, the electronic device may pop up a selection interface in the first area of the display, to prompt the user to perform screen binding.
[0238] For example, as shown in
[0239] 2030: Store the correspondence between the first application and the first user identification information.
[0240] In some scenarios, the second area is also used by a user. For the second area, a correspondence between a second application and second user identification information may also be generated and stored by using steps similar to the foregoing steps, and details are not described herein again.
[0241] Because a correspondence between an application and user identification information has been stored, when the screen facing a user changes, an interface of an application corresponding to the user can be displayed, based on the user identification information detected by the first sensor and the second sensor, on the screen currently used by the user.

2040: Control display of the first area and the second area based on the user identification information detected by the first sensor and the second sensor.
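Step 2040 can be sketched as a dispatch that, in each detection cycle, compares what each sensor detects against the stored correspondences and decides what each area should display. Names and the return convention are assumptions made for illustration.

```python
def control_display(correspondences, first_seen, second_seen):
    """Decide what each area displays for one detection cycle.

    `correspondences` maps user identification information to the
    bound application interface. `first_seen` / `second_seen` are the
    user ids detected by the first and second sensors, or None.
    Returns (first_area_content, second_area_content); None means the
    area may turn off or show a default interface such as the desktop.
    """
    first_display = correspondences.get(first_seen) if first_seen else None
    second_display = correspondences.get(second_seen) if second_seen else None
    return (first_display, second_display)
```

For example, if the users bound to the two areas exchange locations, each sensor now detects the other user, and the dispatch returns the swapped interfaces for the two areas.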
[0242] For example, as shown in
[0243] For example, as shown in
[0244] For example, as shown in
[0245] For example, as shown in
[0246] For example, as shown in
[0247] For example, as shown in
[0248] For example, as shown in
[0249] For example, as shown in
[0250] For example, as shown in
[0251] For example, as shown in
[0252] For example, as shown in
[0253] For example, as shown in
[0254] For example, as shown in
[0255] For example, as shown in
[0256] For example, if the first user identification information is detected by using both the first sensor and the second sensor, an interface of the second application is displayed in the first area, and the interface of the first application is displayed in the second area; or the interface of the first application is displayed in the first area, and the interface of the second application is displayed in the second area.
[0257] The method 2000 further includes: detecting a second operation in the first area; in response to the second operation, closing the second application, and displaying, in the first area, a desktop interface or an interface displayed before the second application was started; after closing the second application, detecting a third operation in the first area; in response to the third operation, starting a third application and displaying an interface of the third application in the first area; and storing a correspondence between the third application and the second user identification information.
[0258] It may be understood that, to implement the foregoing functions, the electronic device includes corresponding hardware and/or software modules for performing the functions. Algorithm steps in examples described with reference to embodiments disclosed in this specification can be implemented in a form of hardware or a combination of hardware and computer software in this application. Whether a function is performed by hardware or hardware driven by computer software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application with reference to embodiments, but it should not be considered that the implementation goes beyond the scope of this application.
[0259] In embodiments, the electronic device may be divided into functional modules based on the foregoing method examples. For example, each functional module corresponding to each function may be obtained through division, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware. It should be noted that, in embodiments, division into modules is an example and is merely logical function division. During actual implementation, there may be another division manner.
[0260] When each function module is obtained through division based on each corresponding function,
[0261] The display unit 2110 may be configured to support the electronic device 2100 in performing step 2010, step 2040, and/or another process of the technology described in this specification.
[0262] The detection unit 2120 may be configured to support the electronic device 2100 in performing step 2020 and/or another process of the technology described in this specification.
[0263] The storage unit 2130 may be configured to support the electronic device 2100 in performing step 2030 and/or another process of the technology described in this specification.
[0264] It should be noted that the related content of the steps in the foregoing method embodiments may be cited in function descriptions of corresponding functional modules. Details are not described herein again.
[0265] A person of ordinary skill in the art may be aware that, in combination with the examples described in embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and the electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.
[0266] A person skilled in the art may clearly understand that, for the purpose of convenient and brief description, for detailed working processes of the foregoing system, apparatus, and unit, refer to corresponding processes in the foregoing method embodiments, and details are not described herein again.
[0267] In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the foregoing apparatus embodiment is merely an example. For example, division into the units is merely logical function division and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be implemented by using some interfaces. The indirect coupling or communication connection between the apparatuses or units may be implemented in electrical, mechanical, or another form.
[0268] The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one location, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
[0269] In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.
[0270] When the functions are implemented in a form of a software function unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the technology, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be, for example, a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in embodiments of this application. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or a compact disc.
[0271] The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.