G06F40/191

Arranging and/or clearing speech-to-text content without a user providing express instructions

Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations without a user having to expressly identify the arrangement operations. In some instances, a user who is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements. In this way, the application can infer content arrangement operations from a spoken utterance that only specifies the textual content.
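A minimal sketch of the inference step described above, assuming invented feature names and thresholds (the abstract does not specify any): arrangement operations are chosen from vocalization features (here, pause duration), the wording of the utterance, and the application type, without any explicit formatting command.

```python
# Hypothetical sketch: inferring text-arrangement operations from a
# dictated utterance's features rather than from explicit commands.
# All feature names, thresholds, and operation labels are invented
# for illustration only.

def infer_arrangement(text, pause_ms, app_type):
    """Return arrangement operations inferred for a transcribed snippet.

    text     -- the speech-to-text output for the utterance
    pause_ms -- length of the pause preceding the utterance (vocalization feature)
    app_type -- kind of document being dictated, e.g. "email" or "sms"
    """
    ops = []
    # A long pause in dictation often signals a paragraph break.
    if pause_ms > 1200:
        ops.append("new_paragraph")
    elif pause_ms > 500:
        ops.append("new_sentence")
    # Enumerative phrasing suggests the content belongs in a list.
    if text.lower().startswith(("first", "second", "third", "next", "finally")):
        ops.append("list_item")
    # Context: emails conventionally place a salutation on its own line.
    if app_type == "email" and text.lower().startswith(("hi", "dear", "hello")):
        ops.append("own_line")
    return ops

print(infer_arrangement("First, review the draft", 1500, "sms"))
# -> ['new_paragraph', 'list_item']
```

A production system would replace these hand-written rules with the intent and feature models the abstract alludes to; the sketch only illustrates the input/output shape of such an inference step.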

Computerized systems and methods for hierarchical structure parsing and building

Disclosed are systems and methods for a computerized framework that provides a document structure parsing system for requirement engineering documents in which the logical structure of the text is not available and must be rebuilt from the raw textual content. The framework builds the logical structure in two phases. The first phase creates a list of lists of text snippets from the raw text, where sequence labeling is adopted to re-segment and merge the initially segmented text snippets. The second phase involves the framework executing computerized techniques, including an embedding adaptation approach, a hierarchy structure rebuilding algorithm, and a requirement text selection strategy, to rebuild the hierarchy structure.
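The two phases above can be sketched as follows, under invented simplifications (the abstract names the components but not their internals): phase 1 merges sequence-labeled snippets, with a hypothetical "CONT" label marking a continuation of the previous snippet, and phase 2 rebuilds a hierarchy using a numbering-depth heuristic in place of the disclosed rebuilding algorithm.

```python
# Illustrative two-phase sketch; labels ("SEG"/"CONT") and the
# numbering-depth heuristic are assumptions, not the disclosed method.

def merge_snippets(snippets):
    """Phase 1: re-segment/merge initially segmented snippets.

    A snippet labeled "CONT" (continuation) is joined to its predecessor;
    a snippet labeled "SEG" starts a new unit.
    """
    merged = []
    for label, text in snippets:
        if label == "CONT" and merged:
            merged[-1] += " " + text
        else:
            merged.append(text)
    return merged

def build_hierarchy(lines):
    """Phase 2: rebuild a hierarchy by nesting lines under the nearest
    shallower line, with depth taken from the numbering prefix
    (e.g. "1.2" -> depth 2). Returns nested (text, children) tuples.
    """
    root = ("ROOT", [])
    stack = [(0, root)]  # (depth, node) path from root to current parent
    for line in lines:
        prefix = line.split()[0]
        depth = prefix.count(".") + 1 if prefix[0].isdigit() else stack[-1][0] + 1
        while stack[-1][0] >= depth:  # climb back to a shallower parent
            stack.pop()
        node = (line, [])
        stack[-1][1][1].append(node)
        stack.append((depth, node))
    return root

lines = merge_snippets([
    ("SEG", "1 System overview"),
    ("SEG", "1.1 The system shall parse"),
    ("CONT", "requirement documents."),
    ("SEG", "2 Interfaces"),
])
tree = build_hierarchy(lines)
print([text for text, _ in tree[1]])
# -> ['1 System overview', '2 Interfaces']
```

The disclosed framework would additionally use adapted embeddings and a requirement text selection strategy where this sketch relies on numbering prefixes alone; the sketch shows only the overall phase-1/phase-2 data flow.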