APPARATUS AND METHOD FOR ENTERING LOGOGRAMS INTO AN ELECTRONIC DEVICE

20230315217 · 2023-10-05

    Inventors

    CPC classification

    International classification

    Abstract

    A text entry apparatus for entering characters of a logographic character set on an electronic device such as a smartphone comprises a user interface having a plurality of discrete virtual or actual strings made of contact regions which can be arranged in rows, the interface being configured to generate an intermediate signal each time a user contacts a contact region with at least one finger or thumb, the value of the intermediate signal being dependent on the type of contact made by the finger or thumb on the contact region, and a processing circuit which is configured to receive a temporal sequence of intermediate signals and, from the values of the intermediate signals in the sequence, generate a corresponding temporal sequence of fundamental strokes in which each value of each intermediate signal is mapped to a fundamental stroke, each fundamental stroke being a part of the characters of the logographic character set. The user can interact with the apparatus in a very natural and easy-to-learn way, similar to playing a traditional stringed instrument.

    Claims

    1. A text entry apparatus for entering characters of a logographic character set on an electronic device, the apparatus comprising: a user interface having a plurality of discrete contact regions, the interface being configured to generate an intermediate signal each time a user contacts a contact region with at least one finger or thumb or pointing device, the value of the intermediate signal being dependent on the type of contact made on the contact region; and a processing circuit which is configured to receive a temporal sequence of intermediate signals and, from the values of the intermediate signals in the sequence, generate a corresponding temporal sequence of fundamental strokes in which each value of each intermediate signal is mapped to a fundamental stroke, each stroke defining a visual part of a character of the logographic character set.

    2. A text entry apparatus according to claim 1 in which the interface includes multiple contact regions grouped into two or more elongate rows of contact regions with each row providing an elongate “virtual string” that the user can interact with.

    3. A text entry apparatus according to claim 1 in which the user interface comprises a touch sensitive surface.

    4. A text entry apparatus according to claim 3 in which the touch sensitive surface comprises a display of a smartphone or tablet or other device and in which the rows are offset from each other and extend from one side of the user interface of the apparatus to the other.

    5. A text entry apparatus according to claim 2 in which there are multiple contact regions in each row of sensing regions that abut adjacent regions in the row so that together they form a continuous elongate element that is responsive to a finger being dragged across the element at any position along its length.

    5. (canceled)

    6. A text entry apparatus according to claim 2 in which each row comprises two contact regions of different sizes, a first contact region comprising a first sub-set of touch sensitive regions that define a contact region that is touch sensitive over a wider region each side of a central axis of the row, and a second contact region comprising a second sub-set of touch sensitive regions that define a contact region that is touch sensitive over a narrower region each side of a central axis, the central axes of both first and second contact regions being the same or substantially the same so that the narrower rows fit within the wider rows.

    7. A text entry apparatus according to claim 2 in which a user can interact anywhere along a row when entering a fundamental stroke.

    8. A text entry apparatus according to claim 1 in which the location of the rows is visible to a user by an elongate indicia that is displayed or physically incorporated into the user interface, the indicia aligned with a row of contact regions.

    9. A text entry apparatus according to claim 1 in which the processing circuit is configured such that a user contacting a contact region will generate an intermediate signal that identifies the contact region, preferably uniquely, and one or more properties of the gesture that is used.

    10. A text entry apparatus according to claim 1 in which the interface assigns a different value to an intermediate signal for one or more of the following types of gesture: an up-down gesture across a contact region; a side-to-side gesture along a contact region orthogonal to the up-down direction; a diagonal gesture that crosses a contact region; the pressure applied to the contact region; the duration of the contact; and the speed of movement of the contact across or along the contact region.

    11. A text entry apparatus according to claim 1 in which the interface is configured to generate an intermediate signal that encodes at least one property of the movement across the contact region made by a finger where the intermediate signal uniquely identifies the contact region and the type of the movement, or where the intermediate signal uniquely identifies a row of contact regions and the type of movement.

    12. A text entry apparatus according to claim 1 in which the processing circuit is configured to map a group of fundamental strokes to a combination of simultaneous intermediate signals in a sequence.

    13. A text entry apparatus according to claim 1 in which the processing circuit is configured to map a single fundamental stroke to the property or combination of properties of each gesture represented by an intermediate signal in the temporal sequence.

    14. A text entry apparatus according to claim 1 in which the processing circuit is configured to map a character composition to a row or column of contact regions, or to specific contact regions of the user input device.

    15. A text entry apparatus according to claim 1 which includes a display which is configured to render a visual representation of each fundamental stroke in a temporal sequence to build up a logogram.

    16. A method of entering a character of a logographic character set into an electronic device, which is characterized by the steps of: using a finger or thumb or pointing device to make a gesture on a contact region of a user interface, the interface having a plurality of discrete contact regions, generating an intermediate signal each time a user contacts a contact region with at least one finger or thumb or pointing device, the value of the intermediate signal being dependent on the type of contact made by the finger or thumb on the contact region, receiving a temporal sequence of intermediate signals and from the values of the intermediate signals in the sequence generating a corresponding temporal sequence of fundamental strokes or groups of fundamental strokes in which each value of each intermediate signal is mapped to a fundamental stroke, each fundamental stroke being a part of the characters of the logographic character set.

    17. The method of claim 16 further comprising, on a user entering an end command, mapping the inputted set of fundamental strokes to a logographic character and outputting a corresponding code of a character encoding system.

    18. A computer program which comprises a set of instructions which when executed on a computer device causes the device to carry out the method of claim 16.

    19. A computer program which comprises a set of instructions which when executed on a computer device provide the apparatus of claim 1.

    20. A text entry apparatus according to claim 2 in which each row comprises a single elongate contact region that reaches from one end of the row to the other.

    Description

    [0122] There will now be described by way of example only several embodiments of the present invention with reference to and as illustrated in the accompanying drawings of which:

    [0123] FIG. 1 is a schematic of a complete text entry apparatus in accordance with an aspect of the invention;

    [0124] FIG. 2 is a plan view of a first arrangement of a user interface designed to give the overall look and feel of a string instrument;

    [0125] FIG. 3 is a plan view corresponding to FIG. 2 showing how each of the elongate strings of the user interface defines a single contact region;

    [0126] FIG. 4a is a plan view of an alternative implementation of a user interface where virtual strings are formed using rows of discrete contact regions;

    [0127] FIG. 4b is a plan view of another alternative implementation of a user interface where the virtual strings are indicia and are separate from the contact regions so that each row consists of two contact regions that wholly overlap;

    [0128] FIG. 5 is a view in cross section of a user interface showing how contact regions are raised above a base;

    [0129] FIG. 6 is a view corresponding to FIG. 5 showing how contact regions are formed as grooves in the upper surface of the base;

    [0130] FIG. 7a is an illustration of a set of fundamental strokes a-j that can be used to write out logographic characters of the Chinese language and other similar logographic languages;

    [0131] FIG. 7b is an illustration of an alternative set of fundamental strokes a-k that can be used to write out logographic characters of the Chinese language and other similar logographic languages;

    [0132] FIG. 8 shows how a logogram can be classified by different character composition types which guide where strokes are written in a character;

    [0133] FIG. 9a is an exemplary notation that is used in this document to identify different ways that a user may contact a contact region, i.e. different gestures;

    [0134] FIG. 9b is an illustration showing how each of multiple contact regions can be represented along with the gestures that can be used on each region;

    [0135] FIG. 10a is a second exemplary notation that may be used to identify different ways that a user may contact a region that includes recognition of diagonal gestures;

    [0136] FIG. 10b is an illustration showing how each of multiple contact regions can be represented along with the gestures that can be used on each region for the second exemplary notation in FIG. 10a;

    [0137] FIG. 11 is an illustration of the grouping of the fundamental strokes in FIG. 7a used to form Chinese characters into four groups with each group assigned to one of four rows or strings of input devices, the grouping forming the basis of a first exemplary mapping method of strokes to gestures;

    [0138] FIG. 12 is an illustration using the notation of FIG. 9a of the total set of gestures a user can make to form the character “Han” in Chinese using the mapping of FIG. 11;

    [0139] FIG. 13 is a representation of the first three of the fundamental strokes that a user will enter consecutively or simultaneously to form the left side of the character “Han” using the first mapping of FIG. 11;

    [0140] FIG. 14 is a representation of the second three of the fundamental strokes that a user will enter consecutively or simultaneously to form the right side of the character “Han” using the first mapping of FIG. 11;

    [0141] FIG. 15 is an illustration of a second alternative mapping of fundamental strokes to gestures and their locations on an interface, where each stroke is assigned to a unique gesture regardless of which contact region is contacted, and in which the contact regions are then mapped to the character compositions of FIG. 8;

    [0142] FIG. 16 is an illustration using the notation of FIG. 9a of the total set of gestures a user can make to form the character “Han” in Chinese using the mapping of FIG. 15;

    [0143] FIG. 17 is a representation of the first three of the fundamental strokes that a user will enter consecutively or simultaneously to form the left side of the character “Han” using the mapping of FIG. 15;

    [0144] FIG. 18 is a representation of the second three of the fundamental strokes that a user will enter consecutively or simultaneously to form the right side of the character “Han” using the mapping of FIG. 15;

    [0145] FIG. 19 is a plan view of a smartphone device configured as an apparatus that falls within the scope of the first aspect of the invention;

    [0146] FIG. 20 is a schematic representation of an embodiment of an apparatus in accordance with a first aspect of the invention in which the interface has relatively little processing circuitry and some of the processing circuitry is provided by a connected electronic device;

    [0147] FIG. 21 is a schematic of a user interface which has enhanced processing circuitry compared with the interface of FIG. 20 and that can output an encoded text such as Unicode;

    [0148] FIG. 22 is a schematic showing the key functional components of an all-in-one device such as a smartphone of the kind shown in FIG. 19 configured to carry out a method of the second aspect of the invention;

    [0149] FIG. 23 is a flowchart showing the method used by the exemplary apparatus described in the preceding figures to generate logograms from the user gestures entered via the interface;

    [0150] FIGS. 24(i) and (ii) show two still further alternative examples of a user interface;

    [0151] FIG. 25a shows an implementation of the invention on a mobile phone device which displays two rows of contact regions;

    [0152] FIG. 25b is an illustration of another exemplary mapping method of strokes to gestures and contact regions displayed as 2 rows of strings;

    [0153] FIG. 26a shows the 2 sets of contact regions, which define narrow and wider rows and are triggered depending on the direction of the gesture;

    [0154] FIG. 26b shows an exemplary gesture interacting with the 2 sets of contact regions shown in FIG. 26a;

    [0155] FIGS. 27a, 27b and 27c show a table of exemplary ‘gesture shortcuts’ that can be used to enter radicals, common stroke combinations or common use logograms, using the mapping in FIG. 25b;

    [0156] FIG. 28 is a flowchart showing the method used by the exemplary apparatus described in FIGS. 25 to 29 to generate logograms from the user gestures entered via the interface;

    [0157] FIGS. 29a to 29d illustrate an example of the mapping in FIGS. 7a and 25b, where two individual strokes are used to enter the character “custom-character” (Ni),

    [0158] FIGS. 30a to 30f illustrate an example of the mapping in FIGS. 7b and 25b, where the user deletes one of the strokes in the input history, midway through entering a character and

    [0159] FIGS. 31a to 31f illustrate an example of the mapping in FIGS. 7b and 25b, and the use of gesture shortcut 1b in FIG. 27a.

    [0160] A complete exemplary system 100 for entering logographic characters in an electronic device is shown in FIG. 1. In this example, the written characters form a Chinese character set, but this should not be seen to limit the invention. The system is based on the Gu Qin, an ancient Chinese stringed instrument, with strings that sit perpendicular to the player and are plucked by hand.

    [0161] The system 100 comprises a user interface 110 which a user can physically interact with by making gestures, a processing circuit 120, and an optional display 130. The processing circuit 120 executes executable code 140 stored in a memory 150. The memory also stores two sets of mappings: one mapping gestures and contact region identities to fundamental strokes and/or groups of fundamental strokes, and the other mapping a sequence of fundamental strokes and/or groups of fundamental strokes to a set of logographic characters.
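    For illustration only, the two stored mappings can be sketched as a pair of lookup tables. All entries, names and the sample character below are hypothetical examples chosen to show the two-stage structure, not the mapping actually disclosed:

```python
# Sketch of the two mapping tables held in memory 150 (entries hypothetical).
# Stage 1: (contact region ID, gesture descriptor) -> fundamental stroke.
GESTURE_TO_STROKE = {
    (0, "short,hard,up,fast"): "heng",   # horizontal stroke
    (1, "short,hard,down,fast"): "shu",  # vertical stroke
    (2, "long,light,right,slow"): "na",  # right sweep
}

# Stage 2: a temporal sequence of fundamental strokes -> an encoded character.
STROKES_TO_CHARACTER = {
    ("heng", "shu"): "\u5341",  # 十 (shi, "ten"): horizontal then vertical
}

def lookup(sequence):
    """Return the encoded character for a completed stroke sequence, if any."""
    return STROKES_TO_CHARACTER.get(tuple(sequence))

print(lookup(["heng", "shu"]))  # -> 十
```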

    [0162] The user interface 110 may replace a standard keyboard that would be used for alphabetical character entry to a personal computer. As will be explained later, the processing circuit 120 may use the processor and memory of a personal computer, or a dedicated processing circuit and memory may be provided for the purpose of doing the mapping and generating sequences.

    [0163] The system 100 also includes a display 130, which in this example is a standard monitor that is connected to the personal computer. Other displays could be used and the display could be a dedicated display of a user interface.

    [0164] FIGS. 20 to 22 show different arrangements of these components which can be implemented. FIG. 20 uses a simple interface 200 which does not perform any mapping and outputs a sequence of intermediate signals to the device into which the user wants to enter the text, that device including the processing circuit. In the system of FIG. 21, the interface 210 has more processing circuitry and performs both the mapping of intermediate signals to fundamental strokes and/or groups of strokes and their mapping to logographic characters. The result is output as a sequence of encoded characters to another device into which the user wants to enter the text. Finally, FIG. 22 shows how the whole system 220 can be implemented on a single device such as a smartphone.

    [0165] As shown in FIGS. 2 to 6, one arrangement of a generic user interface 110 comprises a relatively shallow base portion 20 having a rectangular and generally planar upper surface 21. It does not need to be rectangular of course but that is a convenient form factor for placing on a desk in front of a user. A plurality of contact regions 22 are defined on the upper surface that can be thought of as strings. These may take many different forms and there may be many different combinations of contact regions. Some exemplary formats are shown in FIGS. 2 to 6. In FIG. 5 contact regions 50 are in the form of raised ridges, and in FIG. 6 contact regions 60 comprise grooves.

    [0166] The contact regions may be arranged to form rows so that each row extends generally continuously transversely across the upper surface of the base portion. FIG. 3 shows how five parallel rows can be provided. FIG. 3 shows how each row is formed from one single elongate contact region, and FIG. 4a shows an alternative in which each row is formed from a plurality of contact regions. In the example of FIG. 4a each row comprises four contact regions, to give 16 in total. FIG. 4b shows another alternative where each row is formed from multiple overlapping contact regions, for example a wide region 23 and a narrow region 24, to give two sets of contact regions in total.

    [0167] Each contact region is sensitive to contact and motion of a user’s finger, a thumb, or a pointing device, or has an associated sensor that is sensitive to contact or can otherwise detect a contact (for instance a camera-based system), and produces an intermediate output signal. The output signal from each contact region comprises a string of binary digits, optionally encoding a unique ID for the contact region and/or a set of four properties of the gesture made by the user in the contact region. In this example these properties are the pressure (hard or light), the direction (up, down, left or right), the duration (short or long) and the speed (fast or slow). These can be encoded in five binary digits of a string and provide 32 possible output values for each region as shown in FIG. 9a:

    [0168] a. short, hard pressure, up, fast gesture
    [0169] short, hard pressure, up, slow gesture
    [0170] short, hard pressure, down, fast gesture
    [0171] short, hard pressure, down, slow gesture
    [0172] b. short, light pressure, up, fast gesture
    [0173] short, light pressure, up, slow gesture
    [0174] short, light pressure, down, fast gesture
    [0175] short, light pressure, down, slow gesture
    [0176] c. long, hard pressure, up, fast gesture
    [0177] long, hard pressure, up, slow gesture
    [0178] long, hard pressure, down, fast gesture
    [0179] long, hard pressure, down, slow gesture
    [0180] d. long, light pressure, up, fast gesture
    [0181] long, light pressure, up, slow gesture
    [0182] long, light pressure, down, fast gesture
    [0183] long, light pressure, down, slow gesture
    [0184] e. short, hard pressure, left, fast gesture
    [0185] short, hard pressure, left, slow gesture
    [0186] short, hard pressure, right, fast gesture
    [0187] short, hard pressure, right, slow gesture
    [0188] f. short, light pressure, left, fast gesture
    [0189] short, light pressure, left, slow gesture
    [0190] short, light pressure, right, fast gesture
    [0191] short, light pressure, right, slow gesture
    [0192] g. long, hard pressure, left, fast gesture
    [0193] long, hard pressure, left, slow gesture
    [0194] long, hard pressure, right, fast gesture
    [0195] long, hard pressure, right, slow gesture
    [0196] h. long, light pressure, left, fast gesture
    [0197] long, light pressure, left, slow gesture
    [0198] long, light pressure, right, fast gesture
    [0199] long, light pressure, right, slow gesture
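    Since the duration, pressure, direction (four options) and speed multiply to 2 × 2 × 4 × 2 = 32 combinations, one possible packing uses five bits. The bit layout and dictionary names below are assumptions made purely for illustration; the disclosure does not fix a particular encoding:

```python
# Hypothetical packing of the 32 gesture values of FIG. 9a into a 5-bit
# intermediate signal value: duration (1 bit), pressure (1 bit),
# direction (2 bits), speed (1 bit).
DURATION = {"short": 0, "long": 1}
PRESSURE = {"hard": 0, "light": 1}
DIRECTION = {"up": 0, "down": 1, "left": 2, "right": 3}
SPEED = {"fast": 0, "slow": 1}

def encode(duration, pressure, direction, speed):
    """Pack the four gesture properties into one 5-bit value (0..31)."""
    return (DURATION[duration] << 4
            | PRESSURE[pressure] << 3
            | DIRECTION[direction] << 1
            | SPEED[speed])

# Gesture type a, first entry: short, hard pressure, up, fast
print(encode("short", "hard", "up", "fast"))  # -> 0
```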

    [0200] FIG. 9a sets out a nomenclature for representing the different gestures of gesture types a to h, based on the type of gesture that is made.

    [0201] In an alternative, the system may also recognise diagonal gestures as inputs and may also recognise a tap of a string as an input. FIG. 10a sets out a nomenclature for a system including these additional diagonal gestures. In this arrangement there are a total of 66 possible outputs:

    [0202] a. Tap, light pressure
    [0203] b. Tap, hard pressure
    [0204] c. Long, light pressure, up, fast gesture
    [0205] Long, light pressure, up, slow gesture
    [0206] Long, light pressure, down, fast gesture
    [0207] Long, light pressure, down, slow gesture
    [0208] Long, light pressure, left, fast gesture
    [0209] Long, light pressure, left, slow gesture
    [0210] Long, light pressure, right, fast gesture
    [0211] Long, light pressure, right, slow gesture
    [0212] Long, light pressure, diagonal up right, fast gesture
    [0213] Long, light pressure, diagonal up right, slow gesture
    [0214] Long, light pressure, diagonal up left, fast gesture
    [0215] Long, light pressure, diagonal up left, slow gesture
    [0216] Long, light pressure, diagonal down right, fast gesture
    [0217] Long, light pressure, diagonal down right, slow gesture
    [0218] Long, light pressure, diagonal down left, fast gesture
    [0219] Long, light pressure, diagonal down left, slow gesture
    [0220] d. Long, hard pressure, up, fast gesture
    [0221] Long, hard pressure, up, slow gesture
    [0222] Long, hard pressure, down, fast gesture
    [0223] Long, hard pressure, down, slow gesture
    [0224] Long, hard pressure, left, fast gesture
    [0225] Long, hard pressure, left, slow gesture
    [0226] Long, hard pressure, right, fast gesture
    [0227] Long, hard pressure, right, slow gesture
    [0228] Long, hard pressure, diagonal up right, fast gesture
    [0229] Long, hard pressure, diagonal up right, slow gesture
    [0230] Long, hard pressure, diagonal up left, fast gesture
    [0231] Long, hard pressure, diagonal up left, slow gesture
    [0232] Long, hard pressure, diagonal down right, fast gesture
    [0233] Long, hard pressure, diagonal down right, slow gesture
    [0234] Long, hard pressure, diagonal down left, fast gesture
    [0235] Long, hard pressure, diagonal down left, slow gesture
    [0236] e. Short, light pressure, up, fast gesture
    [0237] Short, light pressure, up, slow gesture
    [0238] Short, light pressure, down, fast gesture
    [0239] Short, light pressure, down, slow gesture
    [0240] Short, light pressure, left, fast gesture
    [0241] Short, light pressure, left, slow gesture
    [0242] Short, light pressure, right, fast gesture
    [0243] Short, light pressure, right, slow gesture
    [0244] Short, light pressure, diagonal up right, fast gesture
    [0245] Short, light pressure, diagonal up right, slow gesture
    [0246] Short, light pressure, diagonal up left, fast gesture
    [0247] Short, light pressure, diagonal up left, slow gesture
    [0248] Short, light pressure, diagonal down right, fast gesture
    [0249] Short, light pressure, diagonal down right, slow gesture
    [0250] Short, light pressure, diagonal down left, fast gesture
    [0251] Short, light pressure, diagonal down left, slow gesture
    [0252] f. Short, hard pressure, up, fast gesture
    [0253] Short, hard pressure, up, slow gesture
    [0254] Short, hard pressure, down, fast gesture
    [0255] Short, hard pressure, down, slow gesture
    [0256] Short, hard pressure, left, fast gesture
    [0257] Short, hard pressure, left, slow gesture
    [0258] Short, hard pressure, right, fast gesture
    [0259] Short, hard pressure, right, slow gesture
    [0260] Short, hard pressure, diagonal up right, fast gesture
    [0261] Short, hard pressure, diagonal up right, slow gesture
    [0262] Short, hard pressure, diagonal up left, fast gesture
    [0263] Short, hard pressure, diagonal up left, slow gesture
    [0264] Short, hard pressure, diagonal down right, fast gesture
    [0265] Short, hard pressure, diagonal down right, slow gesture
    [0266] Short, hard pressure, diagonal down left, fast gesture
    [0267] Short, hard pressure, diagonal down left, slow gesture

    [0268] In use, as shown in FIG. 23, a user can contact the regions in a sequence to generate a temporal sequence of intermediate signals from the regions. The processing circuit maps the string values of the intermediate signals to fundamental strokes and/or groups of strokes. It first determines whether a single intermediate signal has been input at a given time, and if so generates one associated fundamental stroke using a mapping. This is then displayed on a display and the system returns to wait for the next intermediate signal. If a chord of gestures is input, meaning two or more simultaneously, a mapping to a group of strokes or a whole character can be made. The mapped group is then displayed and the system returns to wait for the next intermediate signal. In each case, before returning, the system performs an autocomplete check to determine whether all the strokes entered so far correspond to a logographic character and, if they do, the character is output as an associated encoding from an encoding set such as Unicode or CJK.
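    This decision flow might be sketched as follows. The mapping tables and the sample character 十 are hypothetical placeholders, not the mappings of FIG. 11 or FIG. 15:

```python
# Sketch of the FIG. 23 processing loop, with hypothetical helper tables:
# SINGLE_MAP (one signal -> one stroke), CHORD_MAP (simultaneous signals
# -> a group of strokes) and CHARACTERS (stroke sequence -> character).
SINGLE_MAP = {0b00000: "heng", 0b00010: "shu"}
CHORD_MAP = {frozenset({0b00000, 0b00010}): ["heng", "shu"]}
CHARACTERS = {("heng", "shu"): "\u5341"}  # 十

def process(signals, history):
    """Map one time-step of intermediate signals onto the stroke history."""
    if len(signals) == 1:                   # single gesture -> one stroke
        history.append(SINGLE_MAP[signals[0]])
    else:                                   # chord -> group of strokes
        history.extend(CHORD_MAP[frozenset(signals)])
    # Autocomplete check: do the strokes entered so far form a character?
    return CHARACTERS.get(tuple(history))

history = []
print(process([0b00000], history))  # None: "heng" alone is not in this table
print(process([0b00010], history))  # 十: "heng" + "shu" completes a character
```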

    [0269] The mapping can be done in several ways but in this first example, where the character set is a Chinese character set, the mapping can be performed quite efficiently by observing the following rules about the strokes used to generate a Chinese character. As strokes are entered, the apparatus displays them on a screen so a user can check they are correct, and once a sequence corresponding to the whole target logogram is displayed the user can enter an end command through the interface and the logogram is then identified from a database of logograms.

    [0270] Firstly, the strokes can be grouped according to type as shown in FIG. 7a:

    [0271] a. “Dian” - dot
    [0272] b. “Pie” - left throw
    [0273] c. “Ti” - rise, jump
    [0274] d. “Shu” - vertical stroke
    [0275] e. “Shu gou” - vertical hook
    [0276] f. “Heng” - horizontal stroke
    [0277] g. “Heng gou” - horizontal hook
    [0278] h. “Xie” - right curve
    [0279] i. “Na” - right sweep
    [0280] j. “Wan” - left curve

    [0281] Other groupings of strokes are possible, such as the alternative shown in FIG. 7b which has 11 strokes rather than the 10 of the first grouping set out above:

    [0282] a. “Ti” - rise, jump
    [0283] b. “Heng” - horizontal stroke
    [0284] c. “Heng gou” - horizontal hook
    [0285] d. “Dian” - dot
    [0286] e. “Shu” - vertical stroke
    [0287] f. “Shu gou” - vertical hook
    [0288] g. “Pie” - left throw
    [0289] h. “Na” - right sweep
    [0290] i. “Wan gou” - left curve hook
    [0291] j. “Xie gou” - right curve hook
    [0292] k. “Shu wan gou” - vertical stroke, bend, hook

    [0293] Next, the form of each character can be allocated to the following groups shown in FIG. 8:

    [0294] a. Upper/lower separation
    [0295] b. Upper/middle/lower separation
    [0296] c. Left/right separation
    [0297] d. Left/middle/right separation
    [0298] e. Full enclosure
    [0299] f. Upper three-sided enclosure
    [0300] g. Left three-sided enclosure
    [0301] h. Lower three-sided enclosure
    [0302] i. Upper left corner enclosure
    [0303] j. Upper right corner enclosure
    [0304] k. Lower left corner enclosure
    [0305] l. Whole

    [0306] FIG. 11 illustrates a first exemplary method of mapping that may be used by the processing circuit. FIGS. 12 to 14 illustrate a method of use of the apparatus to enter a Chinese character using that mapping.

    [0307] FIG. 12 shows the total sequence of gestures on the contact regions that can be made to write the character ‘Han’ as in Han Chinese using the mapping of FIG. 11 and an interface of the type shown in FIG. 3 with four strings.

    [0308] FIG. 13 shows how a first ‘chord’ combination input for 3 strokes on the left side of the ‘Han’ character can be made using the user input device of FIG. 3, and FIG. 14 shows how a second ‘chord’ combination input for 3 strokes on the right side of the ‘Han’ character can be made using the same mapping.

    [0309] FIG. 15 illustrates a second exemplary method of mapping that may be used separately from or in conjunction with the first exemplary mapping method, in which fundamental strokes are assigned to a unique property or combination of properties of a gesture independent of string or location. In addition, the identity of the specific contact region on which a gesture is made is used to identify a character composition.
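    A minimal sketch of this second mapping method follows; the table entries are hypothetical illustrations, not the assignments of FIG. 15:

```python
# Second mapping method (sketch): the gesture alone identifies the stroke,
# while the contact region identity selects a FIG. 8 character composition.
GESTURE_TO_STROKE = {      # independent of which string is touched
    "up,fast": "ti",
    "right,slow": "heng",
    "down,fast": "shu",
}
REGION_TO_COMPOSITION = {  # region identity picks the composition
    0: "left/right separation",
    1: "upper/lower separation",
}

def interpret(region_id, gesture):
    """Return the (composition, stroke) pair for one gesture on one region."""
    return REGION_TO_COMPOSITION[region_id], GESTURE_TO_STROKE[gesture]

print(interpret(0, "down,fast"))  # -> ('left/right separation', 'shu')
```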

    [0310] FIG. 16 shows the total sequence of strokes to write the character ‘Han’ using this second mapping method. The character composition is left/right separation (see FIG. 8).

    [0311] FIG. 17 shows the first ‘chord’ combination input for 3 strokes on the left side of the ‘Han’ character using the second mapping method and FIG. 18 shows a second ‘chord’ combination input for 3 strokes on the right side of ‘Han’ character using the second mapping method.

    [0312] The skilled person will understand that many alternative user input devices, user interface configurations and processing circuits can be provided within the scope of this invention.

    [0313] The first mapping of FIGS. 11 to 14 and the second mapping of FIGS. 15 to 18 can be combined within the same instance of input via the interface. The system can be made intuitive because the input property or properties are linked to the physical act of writing a stroke. For example, an upward gesture on a contact region corresponds to an upward physical pen stroke, three short plucks to the three-dot radical (known as three ‘dian’ or ‘drops’ of water in Chinese), and so on.

    [0314] FIGS. 24(i) and (ii) show two still further alternative examples of user interfaces 240 and 241, in which the contact regions comprise continuous strings 242 that are held in tension over a base portion 250 in the manner of the strings of a guitar or other musical instrument, the user contacting the string generating a unique vibration of the string. In FIG. 24(i) the interface includes a pickup 260 which detects the vibration and outputs an intermediate signal having a value that encodes the position of the contact and how the contact is made. In the interface of FIG. 24(ii) a microphone 270 picks up the sounds made by the strings as they vibrate, the sounds being converted into intermediate values. The microphone removes the need for a pickup.

    [0315] The strings 242 may be made of nylon or steel, and could be standard guitar strings which are widely available and easily fixed in tension.

    [0316] With the provision of strings 242 a two-handed input method can be easily implemented, the user using one hand (perhaps their non-dominant hand) to hold the string to the upper surface of the base portion and the other hand to pluck the string. By holding the string down, the length that is plucked will vary, altering the frequency. This frequency and the change in frequency over time may be mapped to multiple stroke types.

    [0317] The note may be used to identify a character composition, and the way in which the note is played (hard or soft, bending up or down) may denote the stroke type, or this may be reversed. For example, a middle C tone could be assigned to one stroke type when played as a short note, and assigned to a different stroke if played as a long note. If the note is bent by pushing the string, or the finger is slid along the string, this will create other sounds that can be assigned to other intermediate signal values for middle C; the same can be done for other notes that may be played. In this way each note will provide the functionality of a contact region.
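    The note-based variant might be sketched as follows; the note names, articulations and stroke assignments are hypothetical examples, not assignments taken from the disclosure:

```python
# Sketch of the stringed-instrument mapping: the detected note identifies a
# character composition, and the articulation (duration, bent or not)
# selects the stroke type. All table entries are illustrative assumptions.
NOTE_TO_COMPOSITION = {
    "C4": "left/right separation",
    "D4": "upper/lower separation",
}
ARTICULATION_TO_STROKE = {
    ("short", False): "dian",  # short, unbent note -> dot
    ("long", False): "heng",   # long, unbent note -> horizontal stroke
    ("long", True): "wan",     # long, bent note -> left curve
}

def map_note(note, duration, bent):
    """Return the (composition, stroke) pair for one played note."""
    return NOTE_TO_COMPOSITION[note], ARTICULATION_TO_STROKE[(duration, bent)]

print(map_note("C4", "long", True))  # -> ('left/right separation', 'wan')
```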

    [0318] Instead of identifying the note, the interface may simply determine which string is being played, and the way the note is played, to generate the intermediate signals and the values of those signals.

    [0319] This can enable any stringed instrument with which a user is familiar, and which can produce a predefined set of notes, to be used together with a microphone as the interface.
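    By way of illustration only, the mapping described in paragraphs [0317] to [0319] might be sketched as follows. The note names, articulation labels, and numeric signal values below are illustrative assumptions, not values defined by this specification.

```python
# Hypothetical sketch: encode a detected note and its articulation as an
# intermediate signal value, so that each note provides the functionality
# of one contact region and the articulation selects among that region's
# available values. All names and values are illustrative assumptions.

ARTICULATIONS = ("short", "long", "bent", "slide")
NOTES = ("C4", "D4", "E4", "F4", "G4", "A4", "B4")

def intermediate_signal(note: str, articulation: str) -> int:
    """Return a unique intermediate signal value for (note, articulation)."""
    if note not in NOTES or articulation not in ARTICULATIONS:
        raise ValueError("unrecognised note or articulation")
    return NOTES.index(note) * len(ARTICULATIONS) + ARTICULATIONS.index(articulation)
```

Each (note, articulation) pair thus yields a distinct value, which the processing circuit can then map to a fundamental stroke in the usual way.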

    Mobile Phone Embodiment of System

    [0320] FIG. 25a illustrates a system in accordance with the present invention that leverages the functionality of a smart phone for the hardware components, the phone being provided with a customised app stored within the phone’s memory that causes the processing circuit of the smartphone to implement the required functionality by which a user can enter characters and by which they are displayed on the screen of the device.

    [0321] The screen of the phone is touch sensitive, whereby a user can touch any part of the screen and the processing circuit receives a data signal that indicates the location, direction, time, and pressure of a user’s touch. Touch screen technology is well known in the smart phone industry, and it is also well known to analyse the interactions of a user with a screen to detect when a contact is made.

    [0322] The screen is divided into several regions, as described in Table 1 and as can be seen in FIG. 25a.

    TABLE-US-00001

    Character display 300
    - Located in upper half of screen
    - Displays retrieved logographic characters, such as in an instant messaging conversation log

    Gesture input region 310
    - Located in lower half of screen
    - Receives gestures
    - Divided into 2 sets of horizontal contact regions (not visible to user)
    - Each contact region has 1 horizontal line running across the middle (the ‘string’)
    - Each line is labelled with the gesture direction and corresponding stroke output
    - The lines serve as visual guides for both individual swipes and gesture shortcuts made in relation to the lines
    - ‘Space bar’ at the base of the interface selects the first entry in the character suggestion list

    Character suggestion list 320
    - Displayed as a bar above the gesture input region
    - Provides list of logographic character query results
    - Updates after items are added or removed from the input history
    - Users can tap on a logographic character to select it; it will then display in the ‘character display’ region
    - ‘Backspace’ button allows the user to delete the last logographic character selected for display

    Input history 330
    - Displayed as a bar above the character suggestion list
    - Displays fundamental strokes, stroke combinations, and radicals in the order they are entered by the user
    - ‘Backspace’ button allows the user to delete the last entry in the input history

    Radical pop-up suggestion list 340
    - Triggered by prolonged contact with the gesture input region in the final position of a gesture shortcut
    - Appears over the final position of the finger
    - Contains a list of radical suggestions associated with the gesture made
    - An item in the list is selected for entry by dragging and releasing the finger over the desired item

    [0323] The user can input logographic characters into the mobile phone through the interface shown in FIG. 25a using the following methods in any combination:

    [0324] a. Enter a single fundamental stroke by tapping or moving their finger across the respective contact region in the respective set and direction (up, down, left, right, diagonal up left, diagonal up right, diagonal down left, diagonal down right).

    [0325] b. Enter a stroke combination of 2 or more fundamental strokes that are adjacent in a logographic character’s stroke order, by moving one or more fingers across 2 or more respective contact regions, in the respective set and directions.

    [0326] c. Enter a stroke combination of 2 or more instances of the same fundamental stroke that are adjacent in a logographic character’s stroke order, by moving one or more fingers across the same respective contact region in the same respective set and direction.

    [0327] d. Enter a radical, common stroke combination or common use logographic character using a ‘gesture shortcut’. The detection of gesture shortcuts is triggered when there is a change in direction in finger movement, the finger moves across 2 or more contact regions within any set of contact regions, or the contact time exceeds a certain threshold. A ‘hold’ function produces a list of further radical suggestions for entry. Here, prolonged contact with the gesture input region in the final position of a gesture triggers a pop-up list (340) of suggestions over the position of the finger. An item in the list is selected for entry by dragging and releasing the finger over the desired entry.
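    The trigger conditions for gesture shortcuts described in point d above can be sketched as follows. This is a minimal illustration; the function name, parameters, and hold threshold value are assumptions, not part of the specification.

```python
# Sketch of the gesture-shortcut trigger conditions: a shortcut is detected
# when the finger changes direction, crosses 2 or more contact regions, or
# the contact time exceeds a threshold. Otherwise the contact is a plain
# swipe (if the finger moved) or a tap. Threshold value is assumed.

HOLD_THRESHOLD_S = 0.5  # assumed threshold for a prolonged contact

def classify_gesture(direction_changes: int, regions_crossed: int,
                     contact_time_s: float, moved: bool) -> str:
    """Classify a contact as 'tap', 'swipe', or 'gesture_shortcut'."""
    if (direction_changes > 0
            or regions_crossed >= 2
            or contact_time_s > HOLD_THRESHOLD_S):
        return "gesture_shortcut"
    return "swipe" if moved else "tap"
```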

    [0328] The processing circuit of the mobile phone, in addition to mapping user swipes or taps on the screen to fundamental strokes includes a program which analyses the sequence of strokes and uses the information to query a database of logographic characters that are stored in a memory of the device.

    [0329] The procedure used to interrogate the database may consist of the following:

    [0330] 1. Receive a sequence of fundamental strokes, stroke combinations, and/or radicals.

    [0331] 2. Convert any stroke combinations or radicals within the inputted sequence to a list of fundamental strokes, so that the input history consists only of fundamental strokes.

    [0332] 3. Query the database using the input history and generate a list of logographic characters (character suggestion list 320) whose stroke order most closely matches the input history thus far.

    [0333] 4. The database is queried each time a new entry is added to or removed from the input history, and the character suggestion list 320 is continuously updated and displayed.

    [0334] 5. The last entry displayed in the input history 330 can be manually deleted by the user, using a ‘backspace’ button.

    [0335] 6. The input history 330 is cleared when the user selects a character from the suggestion list, or when the user has deleted all entries in the input history.
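    Steps 1 to 3 of the procedure above can be sketched in a few lines against a toy in-memory database. The stroke names, the radical expansion table, and the sample character entries are illustrative assumptions; a real implementation would query the device's stored character database.

```python
# Minimal sketch of the query procedure: step 2 flattens radicals and stroke
# combinations into fundamental strokes; step 3 returns the characters whose
# stroke order begins with the flattened input history. All data below is
# hypothetical sample data for illustration.

RADICALS = {"radical_A": ["horizontal", "vertical"]}  # hypothetical expansions

DATABASE = {  # character -> stroke order (illustrative)
    "char_1": ["horizontal", "vertical", "dot"],
    "char_2": ["horizontal", "horizontal"],
}

def flatten(inputs):
    """Step 2: expand any radicals or combinations into fundamental strokes."""
    strokes = []
    for item in inputs:
        strokes.extend(RADICALS.get(item, [item]))
    return strokes

def suggest(input_history):
    """Step 3: characters whose stroke order starts with the input history."""
    strokes = flatten(input_history)
    return [char for char, order in DATABASE.items()
            if order[:len(strokes)] == strokes]
```

Re-running `suggest` whenever an entry is added or removed gives the continuously updated character suggestion list of step 4.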

    [0336] The applicant has appreciated that providing very narrow contact regions may present difficulties when a user is writing quickly and makes a horizontal stroke. Such a stroke may be offset slightly from the touch sensitive sub-regions and so not be recorded.

    [0337] As exemplified in FIG. 26a, to ameliorate this each contact region may be made up of two sets of sub-regions 23 and 24, each sub-region being a touch sensitive pixel on a screen.

    [0338] As shown in FIG. 26b, when a gesture is not horizontal, it is beneficial to use a narrower set of contact regions to identify more clearly which ‘string’ was hit. If the same set of sensors was used for all gestures, then any non-horizontal gestures that are too close to the other string would be identified as hitting both strings (indicated in the scenario on the left). A narrower set of sensors allows the user more vertical space to ‘swipe’ across a string without touching other strings (indicated in the scenario on the right).

    [0339] Once the correct sub-set of regions is determined, the processing then proceeds in the same way it would if there were only one set of sub-regions defined in the system.

    [0340] An implementation of this arrangement of narrow and wide touch sensitive regions for each row in the mobile phone device shown in FIG. 25a will now be described.

    [0341] FIG. 26a shows the 2 sets of contact regions, which are triggered depending on the direction of the gesture. Each comprises a row which is overlaid with a line on the screen to appear as a string. The two strings are offset vertically. Each contact region in each row of contact regions comprises two different subsets of touch sensitive sub-regions, where a first set comprises a sub-set of touch sensitive regions that define a row of contact regions which are touch sensitive over a wider region 23 each side of a central axis, and a second set that comprises a sub-set of touch sensitive regions that define a row which is touch sensitive over a narrower region 24 each side of a central axis, the central axes of both first and second sets being the same or substantially the same so that the narrower rows fit within the wider rows.

    [0342] If the gesture is in a horizontal direction, the wider set of contact regions 23 is used to map the strokes. For all other gesture directions, the narrower set of contact regions 24 is used to map the strokes. Within either set of contact regions, when the gesture changes direction, includes a hold at the end, or touches another contact region, gesture shortcut mode is enabled.
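    The direction-dependent choice between the wide and narrow region sets described in paragraph [0342] amounts to classifying the gesture vector. A brief sketch follows; the angle tolerance used to judge a gesture "horizontal" is an assumption for illustration.

```python
# Sketch of direction-dependent region selection: horizontal gestures use
# the wider contact regions 23, all other directions the narrower regions
# 24. The tolerance for treating a gesture as horizontal is assumed.

import math

HORIZONTAL_TOLERANCE_DEG = 15  # assumed tolerance either side of horizontal

def select_region_set(dx: float, dy: float) -> str:
    """Return 'wide' or 'narrow' for a gesture with displacement (dx, dy)."""
    angle = abs(math.degrees(math.atan2(dy, dx)))  # 0..180 degrees
    horizontal = (angle <= HORIZONTAL_TOLERANCE_DEG
                  or angle >= 180 - HORIZONTAL_TOLERANCE_DEG)
    return "wide" if horizontal else "narrow"
```

Because both sets share the same central axis, the chosen set can then be processed exactly as a single row of contact regions would be.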

    [0343] FIGS. 27a, 27b and 27c together form a table (Table 2) of ‘gesture shortcuts’ that can be used to enter radicals, common stroke combinations or common use logograms. The horizontal lines act as a visual reference for the user and they provide guides for the position and shape of the different gestures.

    [0344] FIG. 28 is a flow chart that illustrates the use in the system of an ‘input history’ list, where the user can manually delete the last entry.

    [0345] FIGS. 29a to 29d illustrate an example of the mapping in FIGS. 7b and 25b, where two individual strokes are used to enter the character custom-character (Ni). In FIGS. 29a and 29b, the user enters the two strokes, before selecting from a list of suggestions in FIG. 29c. The character chosen is then displayed in FIG. 29d.

    [0346] FIGS. 30a to 30f illustrate an example of the mapping in FIGS. 7b and 25b, where the user deletes one of the strokes in the input history, midway through entering a character. The user makes an error in stroke entry in FIG. 30b. The stroke is then deleted from the input history in FIG. 30c, and replaced with a different stroke entered in FIG. 30d. The list of character suggestions updates accordingly in FIG. 30e, and the user selects the correct character. The selected character, custom-character (Hou), is displayed in FIG. 30f.

    [0347] FIGS. 31a to 31f illustrate an example of the mapping in FIGS. 7b and 25b, and the use of gesture shortcut 1b in FIG. 27a. The user makes a gesture shortcut in FIG. 31a and holds their finger down in FIG. 31b to trigger a pop-up menu of radical suggestions. The user then slides and releases their finger over the chosen radical in FIG. 31c to select the radical for entry. The selected radical is included in the input history in FIG. 31d, and the character suggestion list updates accordingly. The user then selects the correct character, custom-character (Hen), in FIG. 31e. Finally the character is displayed as shown in FIG. 31f.

    [0348] In summary, the smartphone-based system shown in FIGS. 25a to 31f can be considered to define the following core set of features and functionality:

    [0349] a. Multiple contact regions arranged in rows. Each row of contact regions is associated with two sets of touch sensitive sub-regions so the row can be interpreted as a wide row or a narrow row, the two sharing a common axis. The invention may also apply to rows where the contact regions are associated with only one set of narrow or wide regions, but the provision of two sets offers the benefits described above.

    [0350] b. Virtual ‘strings’, visually indicated by lines, that run across the centre of each of the rows of contact regions.

    [0351] c. Vector analysis of the outputs from the sub-sets defining the wide and narrow rows is performed and, dependent primarily on the direction of the gesture that has been made, only one of the two contact regions is retained for further analysis and generation of an intermediate signal, the other being discarded.

    [0352] d. An intermediate signal is generated from the vector analysis of the non-discarded contact region that encodes a set of properties, including the type of contact (tap versus swipe), the direction associated with movement across the contact region made by the finger or pointing device, the duration of contact, and the identity of the contact region or entire row that the finger interacted with.

    [0353] e. A database is used to map a fundamental stroke, stroke combination, radical or complete logogram to each intermediate signal made in a temporal sequence.

    [0354] f. A database is used to map the fundamental stroke, stroke combination or radical to a set of characters, or a character encoding system such as Unicode.

    [0355] g. A display (input history) is provided and used to render a visual representation of each fundamental stroke, stroke combination or radical the user enters to build up a logogram.

    [0356] h. A display is provided that is used to render a visual representation of a complete character or a list of suggested characters after each input instance.
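    The set of properties listed in feature d above lends itself to a simple record type. The following sketch is illustrative only; the field names and sample values are assumptions, not terms defined by the specification.

```python
# Sketch of the properties an intermediate signal encodes per feature d:
# contact type (tap versus swipe), movement direction, contact duration,
# and the identity of the contact region or row. Field names are assumed.

from dataclasses import dataclass

@dataclass(frozen=True)
class IntermediateSignal:
    contact_type: str   # "tap" or "swipe"
    direction: str      # e.g. "left", "right", "diag_up_left"
    duration_s: float   # duration of the contact in seconds
    region_id: int      # identity of the contact region (or row)

# Example: a short rightward swipe across contact region 3.
sig = IntermediateSignal("swipe", "right", 0.12, 3)
```

Features e and f then reduce to dictionary lookups keyed on such records (or a subset of their fields), first to a fundamental stroke, stroke combination or radical, and then to a set of characters or Unicode code points.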

    GLOSSARY OF TERMS

    [0357] Logogram- a written or pictorial symbol that represents a word or morpheme.

    [0358] Logographic character- typically logograms used in writing systems, including but not limited to Chinese. In computing, they could also be parts of characters (such as radicals or CJK strokes) that can be displayed on a computer. These are assigned a unique code in an encoding system such as Unicode (the most common encoding system).

    [0359] Fundamental stroke- the smallest component of a logogram. A unidirectional motion of continuous contact with a writing surface that produces a given part of a logogram. While there is no consensus on a single list of fundamental strokes, for the purposes and optimization of this invention, a set of 10 strokes and an alternative set of 11 strokes were identified.

    [0360] Stroke combination- any sequence of strokes used to write a logographic character.

    [0361] Stroke order- the total sequence of strokes needed to write a logographic character.

    [0362] Radical- a stroke combination that forms a graphical component of a logogram, often an indicator of meaning or pronunciation. While some logograms can be visually broken down into more than one radical, they are officially listed under one radical in the Chinese dictionary. For example, the logogram custom-character can be broken down into the two radicals custom-character and custom-character, but is listed under the radical custom-character in the dictionary. Regardless of this, the algorithm breaks down radicals and groups of strokes in the same way, so it is possible to simply enter a logogram according to its graphical components.

    [0363] Calligraphic knowledge- knowledge of stroke type and stroke order needed to write a logogram by hand.

    [0364] Pinyin- phonetic notation of Chinese logograms that uses the Roman alphabet, commonly used in China and amongst those who use simplified Chinese logograms.

    [0365] Zhuyin- phonetic notation of Chinese logograms, commonly used in Taiwan amongst those who use traditional Chinese logograms.

    [0366] Romanization- the notation of non-Roman writing systems using the Roman alphabet.