A three-dimensional human head reconstruction method, an electronic device and a non-transient computer-readable storage medium are provided. The method includes: acquiring a target portrait image; inputting the target portrait image into a target model to obtain an output result of the target model; wherein the target model is obtained by pre-training with a plurality of training samples which are generated according to a sample portrait image and a sample three-dimensional human head model, the sample three-dimensional human head model is obtained by iteratively fitting a standard three-dimensional human face statistical model according to two-dimensional feature information related to a portrait in the sample portrait image, and the two-dimensional feature information includes human face feature points and a human head projection contour line; and generating, according to the output result, a target three-dimensional human head model corresponding to the target portrait image.
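As a rough illustration of the fitting stage described in the abstract above, the sketch below fits the coefficients of a generic linear 3D shape model to detected 2D face feature points by minimizing reprojection error; the random shape basis, landmark indices, orthographic projection and the omission of the head-contour term are simplifications for illustration, not the statistical model or fitting procedure claimed in the patent.

```python
# Minimal sketch of iteratively fitting a linear 3D face/head model to 2D landmarks.
# Shape basis, landmark indices and the orthographic projection are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_coeffs, n_landmarks = 500, 10, 68

mean_shape = rng.normal(size=(n_vertices, 3))              # mean head shape
shape_basis = rng.normal(size=(n_coeffs, n_vertices, 3))   # PCA-like shape basis
landmark_idx = rng.choice(n_vertices, n_landmarks, replace=False)
target_2d = rng.normal(size=(n_landmarks, 2))              # detected face feature points

coeffs = np.zeros(n_coeffs)
lr = 1e-3
for step in range(200):
    shape = mean_shape + np.tensordot(coeffs, shape_basis, axes=1)
    pred_2d = shape[landmark_idx, :2]                      # orthographic projection (drop z)
    residual = pred_2d - target_2d                         # landmark reprojection error
    # gradient of 0.5 * ||residual||^2 with respect to the shape coefficients
    grad = np.tensordot(shape_basis[:, landmark_idx, :2], residual, axes=([1, 2], [0, 1]))
    coeffs -= lr * grad
# A contour term over the projected head silhouette would be added analogously.
```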
The embodiments of the present disclosure provide an image scene segmentation method and apparatus, and a device and a storage medium. The method includes: obtaining an intermediate scene segmentation image by performing scene initial segmentation and scene initial fusion on an obtained target image; detecting, from the intermediate scene segmentation image, segmentation blocks to be processed; and obtaining a target scene segmentation image of the target image by performing segmentation correction on the segmentation blocks to be processed.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
3.
METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR IMAGE PROCESSING
The application discloses a method, electronic device, and storage medium for image processing. The method includes: in response to detecting that an effect display function is triggered, adding a virtual model to a collected image to be processed to obtain a presentation video frame; magnifying and displaying an image area of the virtual model in the presentation video frame, and processing the virtual model into a target three-dimensional effect model in response to determining that a magnification stop condition is reached; and fusing the target three-dimensional effect model with a target object in the image to be processed to present a target video frame.
An effect prop display method, an apparatus, a device, and a storage medium. The method comprises: receiving a trigger operation on an effect prop in an execution device, wherein the execution device currently has first position and orientation information (S101); displaying an enhancement effect of the effect prop in a set display state (S102); receiving a position and orientation adjustment operation on the execution device, wherein the first position and orientation information of the execution device is changed into second position and orientation information (S103); and continuing to display the enhancement effect of the effect prop while maintaining the set display state (S104).
A task participant adding method and apparatus, an electronic device, and a storage medium. The method comprises: in response to an operation of creating a task in a session, displaying a task creation interface in a session interface, the task creation interface comprising a task participant adding option; in response to a selection operation performed with respect to the task participant adding option, displaying a session member select-all option; and in response to a selection operation performed with respect to the session member select-all option, adding all members of the session as participants of the task, and displaying the session member select-all option as selected.
The present disclosure relates to a method, apparatus, electronic device, and storage medium for task processing. The method includes: in response to receiving an input request for task information in a document page, displaying a task panel in the document page, the task panel comprising at least one piece of first task information, the task information comprising the first task information; and in response to a triggering operation on the at least one piece of first task information, displaying a task area in the document page and displaying the first task information in the task area. The method can migrate the task information to different carriers, which is convenient for a user to view.
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
7.
METHOD, APPARATUS, STORAGE MEDIUM, DEVICE AND PROGRAM PRODUCT FOR IMAGE PROCESSING
The disclosure discloses a method, an apparatus, a storage medium, a device and a program product for image processing. The method includes: obtaining a prompt word and a scene mesh, wherein the prompt word is text information provided by a user and represents a scene style, and the scene mesh is a three-dimensional mesh with real textures generated from a reconstruction of a real scene; generating a stylized panoramic texture map at the center position of the scene mesh based on the prompt word; projecting the texture of the stylized panoramic texture map onto the visible area of the scene mesh to obtain a first stylized mesh texture map; and performing spatial texture propagation processing on the first stylized mesh texture map to fill the non-visible area of the scene mesh and obtain a second stylized mesh texture map.
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
G06T 5/30 - Erosion or dilatation, e.g. thinning
Embodiments of the application provide a method, apparatus, device, medium and program for displaying a virtual character. A limb image of a human body is obtained, and a skeleton length of the corresponding limb of the human body is determined based on the limb image. A virtual character is then displayed, wherein a skeleton length of a limb of the virtual character is determined based on the skeleton length of the limb of the human body. The method can automatically detect the skeleton length of a limb of the human body and adjust the skeleton length of the corresponding limb of the virtual character accordingly, achieving flexible adjustment of the skeleton lengths of the virtual character's limbs and improving the user experience.
The embodiments of the present disclosure relate to an interaction method and apparatus, and a device and a storage medium. The method provided herein comprises: presenting an information component in a dialogue interface of a dialogue conducted between a target user and a virtual entity, wherein the information component presents title content of a plurality of information items; receiving the target user's selection of a target information item from among the plurality of information items; and presenting a viewing window concerning the target information item, wherein the viewing window presents an information message and a citation message of the target information item, the information message comprises text content generated on the basis of a set of reference information content, and the citation message indicates a source of the set of reference information content. In this way, the embodiments of the present disclosure can improve the efficiency of providing information content for a user.
A text checking method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring first text and first audio data obtained after performing audio conversion on the first text (S102); converting the first audio data into second text (S104); acquiring a first pinyin sequence, with tones removed, of the first text, and a second pinyin sequence, with tones removed, of the second text (S106); and on the basis of the first text, the second text, the first pinyin sequence and the second pinyin sequence, identifying first text which has an audio conversion error, wherein, compared with that first text, the corresponding first audio data has at least one of the following errors: redundancy in reading, omission in reading, and incorrect reading (S108).
G10L 15/01 - Assessment or evaluation of speech recognition systems
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/26 - Speech to text systems
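As an illustration of the pinyin-level comparison in the abstract above, the sketch below aligns the tone-free pinyin of the source text with that of the text recognized from the synthesized audio and labels unmatched spans as omissions, redundancies or incorrect readings. The pypinyin package, the difflib alignment and the one-pinyin-per-character assumption are illustrative choices, not the claimed method.

```python
# Sketch: classify audio-conversion errors by aligning tone-free pinyin sequences.
# Assumes the third-party `pypinyin` package; alignment and labels are illustrative.
from difflib import SequenceMatcher
from pypinyin import lazy_pinyin  # pinyin without tones

def check_tts(first_text: str, second_text: str):
    src = lazy_pinyin(first_text)    # first pinyin sequence (tones removed)
    rec = lazy_pinyin(second_text)   # second pinyin sequence (tones removed)
    errors = []
    # assumes one pinyin token per character (true for pure Chinese text)
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, src, rec).get_opcodes():
        if tag == "delete":
            errors.append(("omission in reading", first_text[i1:i2]))
        elif tag == "insert":
            errors.append(("redundancy in reading", second_text[j1:j2]))
        elif tag == "replace":
            errors.append(("incorrect reading", first_text[i1:i2]))
    return errors

# Example: source text vs. text recognized from the synthesized audio.
print(check_tts("今天天气很好", "今天气很好"))   # -> one omission ("天")
```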
11.
PAGE INTERACTION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Provided in the embodiments of the present disclosure are a page interaction method and apparatus, and a device and a storage medium. The method comprises: displaying a content presentation page, which is at least used for displaying a media content item; in response to a preset trigger operation having been detected, providing an operation panel in an active layer of the content presentation page, wherein the media content item is displayed in a background layer of the content presentation page, and the operation panel at least comprises a plurality of options which respectively correspond to a plurality of view modes; and in response to a selection operation performed on a first option among the plurality of options having been detected, displaying, in the background layer of the content presentation page, the media content item in a first view mode which corresponds to the first option, and continuing to display the operation panel in the active layer of the content presentation page. Thus, a user can quickly and directly preview content presentation effects in different view modes, and can easily switch to other view modes before selecting a satisfactory one.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 9/451 - Execution arrangements for user interfaces
12.
AUDIO INTERFACE RENDERING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Embodiments of the present disclosure provide an audio interface rendering method and apparatus, a device, and a storage medium. The method comprises: acquiring an audio playback progress; acquiring an interface object texture on the basis of the audio playback progress; and rendering the interface object texture into an audio interface for display. In the audio interface rendering method provided by the embodiments of the present disclosure, the interface object texture acquired on the basis of the audio playback progress is rendered into the audio interface for display, so that the performance cost of rendering the audio interface is reduced and the rendering speed is improved.
An interrupt controller, a chip and an electronic device. The interrupt controller comprises at least one interrupt domain module and an output module, wherein each interrupt domain module is configured to receive a line interrupt request, obtain, from the received line interrupt request, a target line interrupt request that matches the current interrupt domain module, and generate a corresponding message interrupt request on the basis of the target line interrupt request; and the output module is configured to receive the message interrupt request sent by the at least one interrupt domain module, and control output of the message interrupt request. The interrupt controller processes a line interrupt request in an interrupt system which is based on a message interrupt.
The present disclosure provides a font creation method and apparatus, a computer device, and a storage medium. The method comprises: in response to selecting a target font template from among a plurality of font templates, displaying a character to be processed under the target font template, wherein said character is selected from a plurality of initial characters under the target font template; in response to an editing operation of said character, determining target editing content of said character; and on the basis of the target editing content, editing the plurality of initial characters other than said character under the target font template to obtain a plurality of target characters after the plurality of initial characters are edited.
Provided in the embodiments of the present disclosure are an image photographing method and device. The method comprises: acquiring a preview image photographed for a target scenario; determining, from a preset template library, at least one reference template image matching the preview image, wherein the reference template image comprises a character object, a background object and a reference line, the similarity between the background object and an object in the preview image is greater than a similarity threshold value, and the reference line is used for identifying a reference position of the character object in the preview image and a reference posture of the character object; and displaying the at least one reference template image, wherein the reference template image is used for guiding a user to perform image photographing with reference to the reference position and the reference posture.
Disclosed in the present application are an image generation method, an apparatus, an electronic device and a computer-readable medium. The method comprises: first, acquiring a target object description image, an object description image to be replaced, key point characterization data of said object description image, and key point characterization data of an image to be processed; then, performing information fusion processing on the above data to obtain image generation condition characterization data; and then, according to the image generation condition characterization data, performing image generation processing to obtain a target image corresponding to the image to be processed, wherein the target image and the target object description image describe the same object, and the target image and the image to be processed remain consistent with respect to at least one type of object state characterization information. The present application enables a target image to represent an object replacement result for an image to be processed, thus achieving object replacement processing on an existing image or an existing video.
Provided in the present disclosure are an interaction method and apparatus, and an electronic device and a storage medium. The method comprises: during the playing of a first video, in response to a preset trigger condition having been met, acquiring a target question associated with the first video, and displaying same; receiving answer information from a current user for the target question; and in response to the answer information from the current user, displaying a feedback result for the answer information. In this way, the forms of content are enriched, and the interactivity is improved.
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application; Communicating with other users, e.g. chatting
18.
INTERACTION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
The present invention relates to the technical field of computers, and provides an interaction method and apparatus, a device, and a storage medium. The method comprises: providing a preset widget associated with a preset creation interface of a preset application; in the preset widget, displaying object information of at least one preset object, which is used for creating a media work, in the preset application; and in response to a trigger operation for the preset widget, displaying the preset creation interface corresponding to the preset object in the preset application.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 9/451 - Execution arrangements for user interfaces
19.
METHOD AND APPARATUS FOR VIDEO PUBLISHING, METHOD AND APPARATUS FOR VIDEO PLAYING, AND DEVICE AND STORAGE MEDIUM
On the basis of the embodiments of the present disclosure, provided are a method and apparatus for video publishing, a method and apparatus for video playing, and a device and a storage medium. The method for video publishing comprises: in response to a trigger by a first user, presenting a publishing page for a video, wherein the publishing page is provided with a consumption configuration entrance for the video; via the consumption configuration entrance, acquiring a resource consumption configuration inputted by the first user for the video, wherein the resource consumption configuration at least indicates the number of resources to be consumed to obtain a viewing permission for the video; and on the basis of a publishing indication for the video, publishing the video in association with the resource consumption configuration. Therefore, a work publisher can be supported in autonomously and flexibly configuring the viewing permission for a work as needed. This extends the types of work published and provides diversified work publishing and viewing functions.
Provided in the embodiments of the present disclosure are a page interaction method and apparatus, and a device and a storage medium. The method comprises: displaying a content presentation page, wherein the content presentation page is at least used for displaying a media content item, and the content presentation page can be displayed at least by means of selecting a pre-determined navigation tab; in response to a trigger operation performed on the navigation tab having been detected, providing an operation panel in the content presentation page, wherein the operation panel comprises a plurality of operation options for triggering a plurality of types of service functions for the content presentation page, and in the operation panel, there is at least one operation option for each type of service function; and in response to a selection operation performed on an operation option among the plurality of operation options having been detected, triggering in the content presentation page a service function corresponding to the operation option. Thus, by means of a navigation tab in an operation page, an option set containing a plurality of types of functions can be provided to a user for selection, thus making it easier for the user to quickly locate a function to be triggered, making user operations simpler, and providing a better user experience.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A video processing method and a related device. The method comprises: displaying a video to be processed; determining a target display line of a target object in said video; acquiring target text; and displaying the target text in said video along the target display line.
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
22.
SPEECH RECOGNITION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
The present disclosure relates to the technical field of computers, and provides a speech recognition method and apparatus, a device, and a storage medium. The method comprises: extracting a first keyword from context information of speech to be recognized; searching for a second keyword associated with the first keyword in a first preset word library; constructing a target word list on the basis of the first keyword and the second keyword; and on the basis of the target word list and said speech, using a preset speech recognition model supporting word enhancement to determine a text corresponding to said speech.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
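A schematic of the target-word-list construction described in the abstract above; the toy keyword extractor, the preset association lexicon and the commented-out recognizer call are placeholders, since the abstract does not name specific components.

```python
# Sketch: build a hotword list from context keywords plus associated terms,
# then hand it to a word-enhancement-capable recognizer (placeholder call).
PRESET_LEXICON = {                      # first preset word library (illustrative)
    "meeting": ["agenda", "minutes", "quarterly review"],
    "invoice": ["reimbursement", "tax number"],
}

def extract_keywords(context: str) -> list[str]:
    # toy first-keyword extraction: any lexicon key mentioned in the context
    return [w for w in PRESET_LEXICON if w in context.lower()]

def build_target_word_list(context: str) -> list[str]:
    first = extract_keywords(context)                        # first keywords
    second = [w for k in first for w in PRESET_LEXICON[k]]   # associated second keywords
    return sorted(set(first + second))

hotwords = build_target_word_list("Notes from the quarterly Meeting")
# text = recognize(audio, hotwords=hotwords)   # placeholder: model supporting word enhancement
print(hotwords)
```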
23.
CONVERSION METHOD AND APPARATUS FOR MULTIMEDIA EDITING FILE, DEVICE AND MEDIUM
The present disclosure provides a conversion method and apparatus for a multimedia editing file, a device, and a storage medium. In the method, the multimedia material and the editing operations of the multimedia editing file are indicated by different pieces of indication information, and the indication information is encapsulated into a data format that each application can support, so that the multimedia material and the editing operations, which involve a large amount of data, do not need to be converted themselves. File format conversion between different applications is implemented by converting only the indication information. The conversion method is therefore easy to implement, has high conversion efficiency, and also supports converting multimedia editing files that occupy a large amount of memory between different applications, which improves the reliability and universality of multimedia file conversion.
The present disclosure relates to a text display method, device, apparatus, and storage medium. The method comprises: determining a length of entered text within a first container and a current width of the first container; if it is determined that the current width of the first container is less than the length of the entered text, determining a target truncated character from the entered text; deleting the characters after the target truncated character in the entered text, and adding a preset symbol at the position immediately following the target truncated character, to obtain a truncated text corresponding to the entered text; and displaying the truncated text in the first container.
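The truncation logic above maps naturally onto a small width-measuring loop; the per-character width function below is a stand-in for whatever text-measurement API the rendering layer actually provides.

```python
# Sketch: truncate entered text to fit a container width and append an ellipsis.
# `char_width` is a placeholder for the real text-measurement API of the UI toolkit.
ELLIPSIS = "…"

def char_width(ch: str) -> float:
    return 2.0 if ord(ch) > 0x2E80 else 1.0   # crude rule: CJK chars count double width

def truncate_to_fit(text: str, container_width: float) -> str:
    if sum(char_width(c) for c in text) <= container_width:
        return text                              # fits: no truncation needed
    budget = container_width - char_width(ELLIPSIS)
    used, cut = 0.0, 0
    for i, ch in enumerate(text):                # find the target truncated character
        if used + char_width(ch) > budget:
            break
        used += char_width(ch)
        cut = i + 1
    return text[:cut] + ELLIPSIS                 # delete trailing chars, add preset symbol

print(truncate_to_fit("a fairly long text entry", 10))
```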
An image editing method and apparatus, an electronic device, and a storage medium are provided. The method includes: obtaining an original image including a first editing object and a second editing object; rendering the first and second editing objects to different layers respectively, to generate a first original layer and a second original layer; in response to a first editing operation for the first editing object, rendering the editing result of the first editing object based on the first original layer to generate a first editing layer, and in response to a second editing operation for the second editing object, rendering the editing result of the second editing object based on the second original layer to generate a second editing layer; and generating, based on the first and second editing layers, a target image as the editing result of the original image.
Embodiments of the present disclosure provide a sticker effect generation method and apparatus, an electronic device, and a storage medium. The method includes: displaying an obtained target face model; displaying a selected sticker on the target face model in response to a selection operation for a sticker; determining, in response to an edit operation for the sticker on the target face model, display parameter information about the sticker after the sticker is edited on the target face model, and rendering and displaying the sticker on the target face model based on the display parameter information and a type of the sticker; and generating a target sticker effect object based on the display parameter information about the sticker and the target face model in response to a release operation.
Embodiments of the present disclosure provide an interaction method and apparatus, an electronic device, a storage medium and a computer program product. The method includes: receiving a first trigger operation acting on a personal homepage of a target user; and in response to the first trigger operation, switching a current page from the personal homepage to an object display page, and displaying an object received by the target user on the object display page.
The present disclosure provides an information processing method and apparatus, an electronic device, and a storage medium. The information processing method includes: creating a source synchronization block in a first document in response to a first operation event; and in response to a second operation event, generating, in a second document, a reference synchronization block that has a synchronization relationship with the source synchronization block, where the synchronization relationship includes: content in the source synchronization block being kept the same as content in the reference synchronization block, and an update to the content in the source synchronization block being synchronized with the content in the reference synchronization block; and an update to the content in the reference synchronization block being synchronized with the content in the source synchronization block.
Provided in the embodiments of the present disclosure are a text information generation method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring a target video; extracting video features of the target video, wherein the video features are used for representing content of the target video in at least one content dimension; on the basis of the video features, generating a target prompt, wherein the target prompt is used for representing a generation rule for text information concerning the target video; and on the basis of the target prompt, generating the text information concerning the target video. A target prompt is generated by extracting video features of a target video, and text information concerning the target video is generated by using the target prompt in combination with the capability of a language model, thereby realizing the rapid and efficient generation of the text information, and also ensuring that content of the generated text information matches video content of the target video, and thus improving the generation efficiency and the content quality of the text information corresponding to the target video.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
The embodiments of the present disclosure relate to the technical field of retrieval. Disclosed are a method and apparatus for constructing a retrieval library, and a retrieval method and apparatus and an electronic device. The method for constructing a retrieval library comprises: identifying, from a full retrieval library, first music information that has ever been used in media content; identifying, from the full retrieval library, second music information that is newly added within a first time period; and constructing a target retrieval library on the basis of the first music information and the second music information.
Provided in the present disclosure are a video generation method and apparatus, and a device and a storage medium. The method comprises: in response to a video generation task submission operation acting on a video shooting page, acquiring a video generation task identifier, and displaying a video generation page, wherein the video generation task identifier is used for identifying a video generation task corresponding to the video generation task submission operation, and execution progress information of the video generation task is presented on the video generation page; in response to a page exit operation acting on the video generation page, exiting the video generation page; and displaying a task notification message, wherein the task notification message is used for triggering the acquisition of a resultant video generated by means of the video generation task on the basis of the video generation task identifier.
According to the embodiments of the present disclosure, provided are a request processing method and apparatus, and a device and a storage medium. The method comprises: acquiring an input message, which is received in a dialogue between a user and a virtual object, wherein the virtual object is associated with a target service; providing the input message and context information to a target model, wherein the context information comprises local context information and global context information, the local context information indicates at least one historical message in the dialogue, and the global context information indicates at least one interaction operation performed between the user and the target service; and using the target model to generate a response concerning the input message. In this way, the accuracy of a response concerning an input message can be improved.
Embodiments of the present disclosure relate to the technical field of electronic devices. Disclosed are an identifier display method and apparatus, an electronic device, and a readable storage medium. The identifier display method comprises: displaying a first identifier on a session interface; and on the basis of a session operation between at least two session objects in the session interface, updating and displaying the first identifier as a target identifier, wherein the target identifier matches the type of the session operation, the at least two session objects are objects participating in a target session in the session interface, and the target identifier is used for representing an interaction operation relationship associated with the target session.
Embodiments of the present disclosure relate to an interface interaction method and apparatus, a device, and a storage medium. The method provided herein comprises: presenting a preview interface of a live streaming room, wherein the preview interface of the live streaming room is used for displaying live streaming content of the live streaming room in the state of having not entered the live streaming room, and the live streaming content is displayed in the preview interface in a first layout mode; when a first operation associated with the preview interface is received, presenting a first live streaming interface of the live streaming room, such that the live streaming content of the live streaming room is displayed in the first live streaming interface in the first layout mode; and when a second operation associated with the preview interface is received, presenting a second live streaming interface of the live streaming room, such that the live streaming content of the live streaming room is displayed in the second live streaming interface in a second layout mode, wherein the second layout mode is different from the first layout mode. In this way, the embodiments of the present disclosure can support users to quickly switch from a preview interface to a desired layout mode of a live streaming interface.
The present disclosure provides a response delay test method and apparatus, a computer device, and a storage medium. The method comprises: acquiring a test video for a program to be tested; performing detection on a first display region of two adjacent video frames of the test video to determine an operation frame of the test video, wherein the first display region is used for representing trigger information of a current screen interface; performing detection on a second display region of the two adjacent video frames of the test video to determine a response frame of the test video; and calculating a response delay of said program on the basis of the operation frame and the response frame.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, or the storage space available on the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
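One plausible reading of the measurement described in the abstract above: flag the first frame whose trigger region changes relative to the previous frame as the operation frame, flag the first later frame whose response region changes as the response frame, and divide the frame gap by the frame rate. The region coordinates and change threshold below are illustrative assumptions.

```python
# Sketch: estimate response delay from a screen recording as
# (response_frame_index - operation_frame_index) / fps.
# Frames are HxWx3 uint8 numpy arrays; regions and threshold are illustrative.
import numpy as np

def first_change(frames, region, start=0, threshold=8.0):
    y0, y1, x0, x1 = region
    for i in range(start + 1, len(frames)):
        prev = frames[i - 1][y0:y1, x0:x1].astype(np.float32)
        curr = frames[i][y0:y1, x0:x1].astype(np.float32)
        if np.abs(curr - prev).mean() > threshold:   # region changed between adjacent frames
            return i
    return None

def response_delay(frames, fps, trigger_region, response_region):
    op = first_change(frames, trigger_region)                # operation frame index
    if op is None:
        return None
    resp = first_change(frames, response_region, start=op)   # first response after it
    return (resp - op) / fps if resp is not None else None

# delay_s = response_delay(frames, fps=60,
#                          trigger_region=(0, 80, 0, 400), response_region=(80, 720, 0, 400))
```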
A method for measuring the characteristics of a material between two wires. The method comprises: determining a first structure and a first parameter of two wires (L1/L2), wherein in the first structure the two wires (L1/L2) are provided on a first insulating layer (11) with a material to be measured (X) between them, and the first parameter comprises a wire width W1 of the two wires (L1/L2); determining a second structure and a second parameter of a single wire (L3), wherein in the second structure the single wire (L3) is provided on a second insulating layer (21) made of the same material as the first insulating layer (11), and the second parameter comprises a wire width W2 of the single wire (L3), the wire width W2 being equal to the wire width W1; obtaining a dielectric constant and a loss factor of the material of the second insulating layer (21) when signals of different frequencies are applied to the single wire (L3); obtaining a dielectric constant of the material to be measured (X) when signals of different frequencies are applied to the two wires (L1/L2); and obtaining a loss factor of the material to be measured (X) by combining the measurements on the two wires (L1/L2) at different frequencies with the dielectric constant and the loss factor of the material of the second insulating layer (21) obtained on the single wire (L3).
A text-to-speech method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring first text, and extracting a semantic feature of the first text by means of a text-to-speech model (S102); acquiring first speech data having a first timbre, and by means of the text-to-speech model, generating a first acoustic feature on the basis of the semantic feature of the first text and the first speech data, wherein the first acoustic feature is at least used for representing semantic information of the first text and timbre information of the first timbre (S104); by means of the text-to-speech model and on the basis of the first acoustic feature, generating second speech data having the first timbre, wherein text corresponding to the second speech data is the first text (S106).
G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
G10L 13/10 - Prosody rules derived from text; Stress or intonation
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
The embodiments of the present disclosure relate to a video processing method and apparatus, and a device and a storage medium. The method comprises: acquiring a requirement description text for making a video template, and determining first background information of a video background, which comprises an object label and the number of video clips; on the basis of the requirement description text, the first background information and candidate material information, generating constituent element information of the video template by using a generative model, wherein the constituent element information comprises at least one piece of music information, effect resource information and subtitle information; and generating a target video template on the basis of the constituent element information.
Provided in the embodiments of the present disclosure are a special effect generation method and apparatus, an electronic device, and a storage medium. The special effect generation method comprises: receiving requirement information input by a user, the requirement information describing, in natural language, a requirement for generating a target special effect; according to the requirement information, generating a special effect material matched with the image content of the currently displayed target image; and generating a target special effect in the target image on the basis of the special effect material. By converting, in real time, the user's requirement for generating a target special effect into a special effect material matched with the image content of the target image, and then generating the target special effect on the basis of the special effect material, the present disclosure generates individualized special effects in real time, thereby expanding the content forms of the special effects and matching the special effects with the image content, which improves their application effect in images.
Embodiments of the present disclosure provide a video generation method and device, and a storage medium. The method comprises: acquiring a video script, and acquiring text features of the video script; acquiring video features of each video material among a plurality of video materials; acquiring a similarity matrix between the text features and the video features; matching the video material with the video script on the basis of the similarity matrix to obtain a video material sequence; and generating a target video file on the basis of the video material sequence. In the embodiments of the present disclosure, retrieval and sorting of the video materials are implemented by means of the matching between the video script and the video materials, and then video packaging is carried out on the basis of the video material sequence, thereby better achieving automated editing and production of videos.
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
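The matching step in the abstract above can be pictured as a cosine-similarity matrix between script-segment features and material features followed by a per-segment selection; the greedy, no-repeat policy below is only one plausible choice, not the claimed algorithm.

```python
# Sketch: match video materials to script segments via a cosine-similarity matrix.
# Random embeddings stand in for the real text and video features.
import numpy as np

rng = np.random.default_rng(1)
text_feats = rng.normal(size=(4, 128))    # one feature per script segment
video_feats = rng.normal(size=(10, 128))  # one feature per candidate material

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

sim = l2norm(text_feats) @ l2norm(video_feats).T   # similarity matrix, shape (4, 10)

sequence, used = [], set()
for row in sim:                                    # greedy: best unused material per segment
    order = np.argsort(row)[::-1]
    pick = next(int(i) for i in order if i not in used)
    used.add(pick)
    sequence.append(pick)
print(sequence)                                    # material indices forming the video sequence
```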
41.
INTERACTION CONTROL METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Provided in the embodiments of the present disclosure are an interaction control method and apparatus, a device, and a storage medium. The interaction control method comprises: on the basis of a first operation associated with a target object in a virtual scene, presenting an interface element associated with the target object, the interface element being used for indicating a quantity of a virtual resource for the target object; on the basis of a second operation associated with the target object, controlling the target object to perform in the virtual scene a first action corresponding to the second operation; in a first phase of the first action, presenting in a target style a first part of the interface element corresponding to the first action, so as to indicate that a first quantity of the virtual resource corresponding to the first part has not been consumed yet; and controlling the first quantity of the virtual resource to be consumed in a second phase of the first action. In this way, the embodiments of the present disclosure can visually present the process of consumption of virtual resources of objects in virtual scenes, thus facilitating users to more accurately control objects.
A63F 13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
42.
THREE-DIMENSIONAL RECONSTRUCTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Disclosed in embodiments of the present disclosure are a three-dimensional reconstruction method and apparatus, an electronic device, and a storage medium. The method comprises: constructing a signed distance field model on the basis of a sample image comprising a target object and a camera pose corresponding to the sample image; and determining a three-dimensional model of the target object on the basis of the constructed signed distance field model. The loss functions used in constructing the signed distance field model include a first loss function used for determining the curvature loss between the three-dimensional model and the target object, and the weight of the first loss function is decreased progressively from the start to the end of the construction process. This effectively reduces the probability of holes appearing on the surface of the reconstructed three-dimensional model.
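The descending curvature-loss weight described above amounts to a schedule such as the one sketched below; the linear decay and the placeholder loss terms are illustrative, not the actual losses used in the reconstruction.

```python
# Sketch: total SDF training loss with a curvature term whose weight decays
# from the start to the end of training. Loss terms and decay shape are placeholders.
def curvature_weight(step: int, total_steps: int, w_start=1.0, w_end=0.01) -> float:
    t = step / max(total_steps - 1, 1)
    return w_start + (w_end - w_start) * t        # linear descent; could also be exponential

def total_loss(recon_loss: float, curvature_loss: float, step: int, total_steps: int) -> float:
    return recon_loss + curvature_weight(step, total_steps) * curvature_loss

for step in (0, 5000, 9999):
    print(step, round(curvature_weight(step, 10000), 4))
```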
Provided are a method and apparatus for processing media data, and a device and a medium. The method comprises: in response to receiving a request for converting first media data into second media data (910), acquiring a group of data frames from the first media data, wherein the first media data has a first style, and the second media data has a second style different from the first style (920); respectively using a group of processing nodes among a plurality of processing nodes in a distributed processing system to convert the group of data frames into a group of target data frames having the second style (930); and generating the second media data on the basis of the group of target data frames (940). By using the exemplary implementation mode in the present disclosure, it is possible to concurrently process data frames in first media data by fully using the processing capability of a distributed processing system, thereby improving the processing effect of the media data and shortening the waiting time.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
44.
Display screen or portion thereof with a graphical user interface
Embodiments of the present disclosure provide a method and apparatus for content presentation, a device and a storage medium, and relate to the computer technology field. The method comprises: displaying a floating control in a target interface and presenting a media stream in the floating control; in response to the floating control being triggered, determining a trigger position and a current gesture operation starting from the trigger position; determining a current operation type of the current gesture operation in accordance with a target region where the trigger position is located in the floating control; and correspondingly controlling the floating control based on the current operation type and the current gesture operation, wherein different operation types correspond to different regions in the floating control.
The disclosure relates to a method, apparatus, electronic device, storage medium and product for processing a component. The method includes: sending a block information entity request to a target platform, the block information entity request including a block identifier corresponding to a target component; receiving the block information entity corresponding to the target component sent by the target platform; and invoking a component software development kit to render the target component in a document based on the block information entity. By rendering and running the target component sent by the target platform through the component software development kit, a corresponding function is implemented in the document in the form of a component, which improves the diversity of document functions and allows different user requirements to be met.
Embodiments of the disclosure disclose a data display method and apparatus, an electronic device, and a storage medium. The method includes: determining, in response to a display instruction for target display data, candidate positions in a display region according to a preset display mode, and determining reference positions in the display region according to the preset display mode and positions of current display data in the display region; determining priority values of the candidate positions based on distances between the candidate positions and the reference positions; and determining a target position from the candidate positions based on the priority values, and displaying the target display data according to the target position.
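The priority computation in the abstract above could look like the snippet below, under the illustrative assumption that a candidate position farther from the reference positions (derived from the data already on screen) receives a higher priority value.

```python
# Sketch: rank candidate display positions by distance to reference positions.
# The "farther is better" rule is an assumption for illustration only.
import math

def priority(candidate, references):
    # priority value = distance to the nearest reference position
    return min(math.dist(candidate, r) for r in references)

def pick_target_position(candidates, references):
    return max(candidates, key=lambda c: priority(c, references))

candidates = [(0, 0), (100, 40), (200, 120)]
references = [(90, 50)]                       # positions of data already on screen
print(pick_target_position(candidates, references))
```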
A vibration unit and a head-mounted display device are provided. The vibration unit includes a vibration motor, a vibration motor mount, a pressing guide, a main frame and an elastic member, the pressing guide including a pressing rod and a pressing plate, and a through hole being provided in either end of the main frame. The vibration motor is arranged in the vibration motor mount, which is fixedly connected to a first surface of the pressing plate; a second surface of the pressing plate abuts against a first end of the pressing rod, and a second end of the pressing rod passes through the through hole and is movably connected to the main frame; the elastic member is arranged around the pressing rod between the pressing plate and the main frame; and the main frame includes a connecting base configured to be connected to the head-mounted display device.
File transmission methods, devices and a storage medium are provided. One method is applicable to a transmitter, and includes: obtaining a target file to be transmitted; in response to the target file being a predetermined file corresponding to a file transmission channel, slicing the target file; and transmitting the resulting plurality of target file slices through the file transmission channel, so that a receiver obtains the target file based on the received target file slices. The file transmission channel is a transmission channel established between a client and a cloud, the cloud comprises a cloud rendering process and a cloud application process, the client and the cloud rendering process are communicatively connected through a predetermined network communication protocol, and the cloud rendering process and the cloud application process are communicatively connected through a predetermined inter-process communication mode.
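The slicing step in the abstract above is essentially fixed-size chunking with enough metadata for the receiver to reassemble the file; the chunk size and the in-memory stand-in for the client-cloud channel below are illustrative only.

```python
# Sketch: slice a file into chunks for transmission and reassemble them on the receiver.
# An in-memory list stands in for the client <-> cloud file transmission channel.
CHUNK_SIZE = 64 * 1024   # illustrative slice size

def slice_file(path: str):
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            yield {"index": index, "data": chunk}   # one target file slice
            index += 1

def reassemble(slices) -> bytes:
    ordered = sorted(slices, key=lambda s: s["index"])
    return b"".join(s["data"] for s in ordered)

# channel = list(slice_file("payload.bin"))   # transmitter side
# original = reassemble(channel)              # receiver side
```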
On the basis of the embodiments of the present disclosure, provided are a request processing method and apparatus, and a device and a storage medium. The request processing method comprises: acquiring an input message received in a session performed between a user and a virtual object, wherein the virtual object is associated with a music service; on the basis of the input message, determining at least one query parameter; on the basis of the at least one query parameter, determining at least one piece of audio-video content, wherein the matching degree between audio content of the at least one piece of audio-video content and target music content is greater than a threshold, and the target music content is determined on the basis of the at least one query parameter; and at least on the basis of the at least one piece of audio-video content, generating a first response concerning the input message. The embodiments of the present disclosure can not only provide a music service for a user, but can also provide for the user audio-video content (e.g., a video work, an image-text work, etc.) which matches related music content, thereby enriching the provided content, and improving the quality of service.
The present disclosure relates to the technical field of computers. Disclosed are a sound source separation method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring audio to be processed and prompt information of a target sound source, wherein the prompt information of the target sound source is obtained after initial prompt information of the target sound source is updated, the initial prompt information is obtained on the basis of audio of the target sound source, and the update is performed by obtaining a separation result of the target sound source from first sample audio on the basis of the initial prompt information; and on the basis of the audio to be processed and the prompt information of the target sound source, determining the audio of the target sound source from the audio to be processed.
The present disclosure relates to the technical field of computers. Disclosed are a song playback method and apparatus, an electronic device, and a storage medium. The method comprises: in a song playback process of a current project, acquiring song list processing information of the current project, wherein the song list processing information comprises a newly added song and a playback sequence of the newly added song in a song list, the song list processing information is generated on the basis of an interaction operation with a song list page, and the song list page is displayed on a terminal of a participant of the current project; updating the song list on the basis of the song list processing information to obtain an updated song list; and carrying out song playback on the basis of the updated song list.
G06F 16/638 - Presentation of query results
G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
57.
MEDIA RESOURCE PRELOADING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND MEDIUM
Embodiments of the present disclosure provide a media resource preloading method and apparatus, an electronic device, and a medium. The media resource preloading method comprises: acquiring a media content stream; when the media content stream comprises target media content and a pre-loading condition of the target media content is satisfied, acquiring a configuration file of the target media content, wherein the target media content comprises a universal media resource and a media resource to be loaded; on the basis of the configuration file, determining loading information of the media resource to be loaded; and on the basis of the loading information, pre-loading the media resource to be loaded. The pre-loading problem of an information source having non-fixed content is solved; when the media content stream comprises the target media content and the pre-loading condition of the target media content is satisfied, the pre-loading of the media resource to be loaded is implemented by means of the configuration file, thereby implementing the pre-loading of the information source having the non-fixed content; and the pre-loading operation before the display of the target media content improves the page rendering performance.
The present disclosure relates to the technical field of glasses, and in particular to a hinge structure of a pair of glasses, and a pair of glasses. The hinge structure comprises: a frame stator, which is configured to be connected to the frame; and a temple rotor, which is configured to be connected to the temple, wherein the temple rotor is rotatably connected to the frame stator, and when a force is applied to a temple, the temple rotor is driven to rotate, so as to enable the temple to turn outward or fold relative to the frame. When the temple rotor is rotatably connected to the frame stator, the temple rotor can rotate relative to the frame stator; and when the temple rotor rotates, the frame stator remains relatively stable. Since the temple rotor is connected to the temple, when the temple is moved, the temple rotor can be driven to rotate relative to the frame stator to enable the temple to turn outward or fold relative to the frame, so that the temple can be folded for the storage of a pair of glasses, thereby improving the user experience.
The embodiments of the present disclosure relate to the technical field of electronic devices. Provided are a media content generation method and apparatus, and an electronic device and a readable storage medium. The media content generation method comprises: receiving a first trigger operation on first media content displayed on a media playing interface; in response to the first trigger operation, displaying a media generation interface; acquiring a first media resource, and displaying the first media resource on the media generation interface, wherein the first media resource is an image resource of at least one preset target object that is extracted from the first media content; in response to receiving a second trigger operation on the media generation interface, acquiring a second media resource; and displaying second media content on the media generation interface, wherein the second media content is obtained by synthesizing the first media resource, as the foreground, with the second media resource.
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
60.
METHOD AND APPARATUS FOR ACQUIRING VIDEO-RELATED CONTENT, ELECTRONIC DEVICE, AND MEDIUM
Embodiments of the present disclosure relate to a method and apparatus for acquiring video-related content, a device, and a medium. The method comprises: on the basis of a video, generating a natural language text associated with the video. The method further comprises: on the basis of the natural language text, acquiring video-related content associated with the video, wherein the video-related content comprises at least one of a generated video, a retrieved video, and a retrieved image-text. The method further comprises: displaying the natural language text and a thumbnail corresponding to the video-related content.
Disclosed in the present application are a combined clip generation method, an apparatus, an electronic device and a readable medium. The method comprises: displaying a plurality of media clips on a page of multimedia editing; in response to an operation for combining the plurality of media clips, generating a second draft and editing attribute information in a first draft of the multimedia editing, the editing attribute information being used for indicating an editing operation for the plurality of media clips in the second draft; and, on the page of the multimedia editing, presenting the second draft as a first combined clip according to the editing attribute information, and replacing the plurality of media clips.
Embodiments of the present disclosure provide a scoring model training method and apparatus, an image-text pair scoring method and apparatus, a device, and a medium. The method comprises: obtaining a sample image-text pair and label data of the sample image-text pair, the sample image-text pair comprising a first image description in a first language, a sample image corresponding to the first image description, and a second image description in a second language; obtaining a first text representation from the first image description by means of a first language encoder; obtaining a second text representation from the second image description by means of a second language encoder; obtaining a sample image representation from the sample image by means of an image encoder; obtaining a first multi-modal representation from the first text representation, the second text representation and the sample image representation by means of a multi-modal fusion network; obtaining a prediction score from the first multi-modal representation by means of a scoring network; and adjusting a scoring model comprising the foregoing encoders and networks by using a difference between the prediction score and the label data, so as to train a scoring model capable of evaluating the quality of an image-text pair.
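As an illustrative sketch of the scoring architecture summarized in the abstract above, the following Python code assembles two language encoders, an image encoder, a fusion network and a scoring head, trained with a mean-squared-error loss against the label data. All module choices, dimensions and names are assumptions made for illustration (the encoders are reduced to linear layers over pre-extracted features); this is not the disclosure's actual implementation.

import torch
import torch.nn as nn

# Minimal sketch, assuming pre-extracted text/image features and illustrative dimensions.
class ImageTextScorer(nn.Module):
    def __init__(self, text_dim: int = 128, image_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.first_language_encoder = nn.Linear(text_dim, hidden)
        self.second_language_encoder = nn.Linear(text_dim, hidden)
        self.image_encoder = nn.Linear(image_dim, hidden)
        self.fusion = nn.Sequential(nn.Linear(hidden * 3, hidden), nn.ReLU())  # multi-modal fusion network
        self.scoring_head = nn.Linear(hidden, 1)                               # scoring network

    def forward(self, first_text, second_text, image):
        fused = self.fusion(torch.cat([
            self.first_language_encoder(first_text),
            self.second_language_encoder(second_text),
            self.image_encoder(image),
        ], dim=-1))
        return self.scoring_head(fused).squeeze(-1)

model = ImageTextScorer()
prediction = model(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 256))
loss = nn.functional.mse_loss(prediction, torch.rand(4))   # difference between prediction and label data
loss.backward()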
Disclosed in embodiments of the present disclosure are a photographing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a first triggering operation for a preset control on a first photographing interface, displaying a first sub-interface, wherein the first sub-interface comprises a plurality of first template identifiers and a first photographing control; in response to a second triggering operation acting on a target template identifier, applying a photographing template associated with the target template identifier to a photographing object; in response to an operation acting on the first photographing control, hiding the first sub-interface, displaying a second photographing interface, and displaying a timing identifier on the second photographing interface; and if the timing identifier is updated to a preset identifier, photographing the photographing object on the basis of the photographing template.
A display method of a work, an apparatus, an electronic device, a storage medium, and a program product are provided. The method includes: presenting a target work on a work presentation page; in response to a page switch operation triggered on the work presentation page, displaying a personal homepage of a target poster and displaying a position control on the personal homepage, wherein the target poster is the poster of the target work, and the personal homepage is configured to display work items of works posted by the target poster; and displaying a work item of the target work on the personal homepage in response to a first trigger operation triggered on the position control, wherein the position control remains displayed on the personal homepage until the work item of the target work is displayed.
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect
65.
INTERACTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
An interaction method, an interaction apparatus, an electronic device, and a storage medium are provided. The interaction method includes: displaying a target object in a first display mode on a target object display interface, where in the first display mode, the target object display interface includes a first interactive control; receiving a display mode switching operation on the target object display interface; and in response to the display mode switching operation, displaying the target object in a second display mode on the target object display interface, where in the second display mode, the target object display interface includes a second interactive control different from the first interactive control.
G06F 3/04847 - Techniques d’interaction pour la commande des valeurs des paramètres, p. ex. interaction avec des règles ou des cadrans
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p. ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p. ex. des gestes en fonction de la pression exercée enregistrée par une tablette numérique utilisant un écran tactile ou une tablette numérique, p. ex. entrée de commandes par des tracés gestuels
H04N 21/431 - Génération d'interfaces visuellesRendu de contenu ou données additionnelles
H04N 21/6587 - Paramètres de contrôle, p. ex. commande de lecture à vitesse variable ("trick play") ou sélection d’un point de vue
66.
EXPRESSION DRIVING METHOD AND APPARATUS, DEVICE, AND MEDIUM
Embodiments of the present disclosure relate to an expression driving method and apparatus, a device, and a medium. The method includes: obtaining a target image, and recognizing an expression of a target object in the target image to obtain at least one expression coefficient; extracting an expression coefficient to be processed from the at least one expression coefficient, and determining an initial left expression coefficient and an initial right expression coefficient in the expression coefficient to be processed; generating a target left expression coefficient and a target right expression coefficient based on a similarity between the initial left expression coefficient and the initial right expression coefficient; and driving a virtual character to show a corresponding expression based on the target left expression coefficient and the target right expression coefficient.
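As an illustrative sketch of deriving target left/right expression coefficients from their similarity, as described in the abstract above, the following Python function assumes each coefficient is a scalar in [0, 1] and takes similarity as one minus their absolute difference; the threshold rule and all names are assumptions for illustration only.

# Minimal sketch, assuming scalar expression coefficients in [0, 1].
def symmetrize_expression(initial_left: float, initial_right: float,
                          similarity_threshold: float = 0.8) -> tuple[float, float]:
    """Derive target left/right coefficients from the initial pair."""
    similarity = 1.0 - abs(initial_left - initial_right)
    if similarity >= similarity_threshold:
        # Nearly symmetric expression: drive both sides with the averaged value.
        mean = (initial_left + initial_right) / 2.0
        return mean, mean
    # Clearly asymmetric expression (e.g. a wink): keep the sides independent.
    return initial_left, initial_right

# Example: blink coefficients for the left/right eyes of the virtual character.
target_left, target_right = symmetrize_expression(0.92, 0.88)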
The disclosure provides a method, apparatus, electronic device, and storage medium for video recording. The method for video recording includes: extracting outline information of a first object in a template material, wherein the template material comprises the first object and a background; acquiring a recording material imported by a user based on the outline information, wherein the recording material comprises a second object corresponding to the first object; and adding the second object into a region corresponding to the first object in the template material to acquire a target video, wherein the target video comprises the second object and the background.
The disclosure provides a method, an apparatus, an electronic device and a readable storage medium for displaying an identifier, and relates to the technical field of electronic devices. The method for displaying an identifier includes: displaying a first identifier on a session interface; and updating the first identifier to a target identifier based on a session operation of at least two session objects in the session interface, and displaying the target identifier, wherein a type of the target identifier matches a type of the session operation, the at least two session objects are objects participating in a target session in the session interface, and the target identifier represents an interactive operation relationship associated with the target session.
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect
69.
AUDIO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Embodiments of the present disclosure provide an audio processing method and apparatus, an electronic device, and a storage medium. The method includes: obtaining first audio and first text corresponding to the first audio; predicting a first pronunciation sequence for the first text by a first pronunciation prediction system based on the first audio and the first text, where tones of pronunciations of characters in the first text that are labeled in the first pronunciation sequence include neutral tones and/or third tones after tone sandhi; and the first third tone in two consecutive third tones in the first text is labeled as a third tone after tone sandhi in the first pronunciation sequence; and correcting a neutral tone in the first pronunciation sequence by a second pronunciation prediction system, and/or correcting a third tone after tone sandhi in the first pronunciation sequence by a third pronunciation prediction system.
G10L 15/187 - Contexte phonémique, p. ex. règles de prononciation, contraintes phonotactiques ou n-grammes de phonèmes
G10L 15/06 - Création de gabarits de référenceEntraînement des systèmes de reconnaissance de la parole, p. ex. adaptation aux caractéristiques de la voix du locuteur
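As an illustration of the third-tone sandhi labelling summarized in the audio processing abstract of entry 69 above, the following Python sketch marks the first of two consecutive third tones as a sandhi-affected third tone. It assumes a pronunciation sequence is a list of (syllable, tone) pairs and uses the marker "3*" purely for illustration; this is not the disclosure's actual labelling scheme or prediction system.

# Minimal sketch, assuming tones are given as strings "0"-"4" ("0" = neutral tone).
def mark_third_tone_sandhi(pronunciations: list[tuple[str, str]]) -> list[tuple[str, str]]:
    result = list(pronunciations)
    for i in range(len(result) - 1):
        syllable, tone = result[i]
        _, next_tone = result[i + 1]
        if tone == "3" and next_tone == "3":
            # The first of two consecutive third tones undergoes tone sandhi.
            result[i] = (syllable, "3*")
    return result

print(mark_third_tone_sandhi([("ni", "3"), ("hao", "3")]))  # [('ni', '3*'), ('hao', '3')]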
70.
METHOD, DEVICE, APPARATUS AND STORAGE MEDIUM FOR VIDEO PRODUCTION
Embodiments of the disclosure provide a method, a device, an apparatus, and a storage medium for video production. The method includes: presenting a text setup interface in a displayed video production window; receiving a first trigger operation; displaying object attribute information corresponding to the video production object in the first information display area; receiving a second trigger operation; and displaying, in the second information display area, video-related text corresponding to the video production object. The method provides a simple, easy-to-operate video production platform for a video producer: a text setup interface of a video to be produced is presented on the video production platform, and a function item for generating video text is provided for the video producer.
The present disclosure relates to the technical field of computers. Disclosed are a video processing method and apparatus, an electronic device, and a storage medium. The method provided by the present disclosure comprises: obtaining a current video frame and a current bandwidth corresponding to the current video frame; performing bit rate prediction on the basis of video frame information and a target quantization parameter to obtain a predicted bit rate of the current video frame, wherein the video frame information comprises a content analysis result of the current video frame and/or frame information of a previous output frame, and the previous output frame is an output frame corresponding to a previous video frame; determining a target bit rate of the current video frame on the basis of the predicted bit rate and the current bandwidth; and encoding the current video frame on the basis of the target bit rate, and determining a current output frame. The method realizes dynamic adjustment of the bit rate of an encoder and avoids wasting bandwidth resources while meeting a definition objective.
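As an illustrative sketch of the rate decision step described above, the following Python function simply caps the predicted bit rate by the currently available bandwidth. The content-based predictor is abstracted away, and the safety margin and all names are assumptions for illustration; the disclosure does not specify this exact rule.

# Minimal sketch, assuming the predicted bit rate and bandwidth are given in kbps.
def decide_target_bitrate(predicted_bitrate_kbps: float,
                          current_bandwidth_kbps: float,
                          safety_margin: float = 0.95) -> float:
    """Clamp the predicted bit rate so the encoder never exceeds the current bandwidth."""
    return min(predicted_bitrate_kbps, current_bandwidth_kbps * safety_margin)

# Example: the content analysis asks for 4500 kbps but only 4000 kbps of bandwidth is available.
target = decide_target_bitrate(4500.0, 4000.0)   # 3800.0 kbps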
The present disclosure relates to an audio recognition method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a target audio to be recognized; acquiring target multimedia matching the target audio, wherein the target multimedia at least comprises multimedia obtained by recognizing the target audio and multimedia in a target multimedia library; and on the basis of the target multimedia, obtaining a recognition result indicating whether the target audio is spliced audio data.
G10L 25/51 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes spécialement adaptées pour un usage particulier pour comparaison ou différentiation
73.
METHOD AND APPARATUS FOR DISPLAYING MUSIC CONTENT, ELECTRONIC DEVICE, AND MEDIUM
Embodiments of the present invention relate to a method and apparatus for displaying music content, an electronic device, and a medium. The method comprises: acquiring query content of a user from a first page of an audio application. The method also comprises: on the basis of parsing the query content, determining music content for replying to the query content, and displaying the music content in the first page. According to the embodiments of the present invention, the user intent can be better understood by parsing the query content, so that, during the chat, the match between the music content and the user and the accuracy of the provided music content are improved, reducing the cost for the user to acquire the music content and improving the interaction experience of the user.
Embodiments of the present disclosure provide a method and apparatus for page interaction, a device, and a storage medium. The method comprises: presenting an information display page of a user, wherein the information display page at least comprises a work display area, and the work display area comprises works associated with the user; in response to detecting a first triggering operation for the work display area, presenting a preset panel, wherein the preset panel at least comprises a plurality of selection conditions; and in response to detecting a selection operation for at least one selection condition of the plurality of selection conditions, selecting, from among the works displayed in the work display area, the works meeting the at least one selection condition, and presenting the selected works in the work display area.
G06F 9/451 - Dispositions d’exécution pour interfaces utilisateur
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
75.
METHOD AND APPARATUS FOR GENERATING IMAGES AND TEXT, ELECTRONIC DEVICE, AND MEDIUM
One or more embodiments of the present disclosure relate to a method and apparatus for converting videos into images and text, an electronic device, and a medium. The method comprises: in response to a touch operation by a user on a first control which is on a first interface, obtaining a video, the video comprising a first audio and a first image. The method further comprises: on the basis of the video, generating a target image and a target text, the target image being associated with the first image and/or the first audio, and the target text being associated with the first audio and the first image. In addition, the method further comprises: displaying the target image and the target text on a second interface.
The present application discloses a combined clip saving method and apparatus, an electronic device, and a storage medium. The method comprises: in response to a saving operation on a first combined clip in a first multimedia editing task, saving a first draft and editing attribute information of the first combined clip, wherein the editing attribute information is used for indicating an editing operation on a plurality of media clips in the first draft; and in response to a calling operation on the first combined clip in a second multimedia editing task, on the basis of the editing attribute information and the first draft, displaying the first combined clip on a page corresponding to the second multimedia editing task.
Disclosed in the embodiments of the present disclosure are a video editing method and apparatus, and an electronic device and a storage medium. The method comprises: performing attribute analysis on a scene in a sample video frame, so as to obtain scene attribute information, wherein the scene attribute information at least comprises spatial information and semantic information which correspond to the scene in the sample video frame; determining a target editing effect, wherein the target editing effect comprises at least one of the following: editing a scene transition and editing a scene shape; and generating a target video frame on the basis of the scene attribute information and the target editing effect and by means of a neural radiance field model, wherein the neural radiance field model is constructed on the basis of the sample video frame. Diversified video editing can thus be implemented.
H04N 21/472 - Interface pour utilisateurs finaux pour la requête de contenu, de données additionnelles ou de servicesInterface pour utilisateurs finaux pour l'interaction avec le contenu, p. ex. pour la réservation de contenu ou la mise en place de rappels, pour la requête de notification d'événement ou pour la transformation de contenus affichés
H04N 21/4402 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé impliquant des opérations de reformatage de signaux vidéo pour la redistribution domestique, le stockage ou l'affichage en temps réel
Embodiments of the present disclosure relate to a live-streaming interaction method and apparatus, a device, and a storage medium. The method provided herein comprises: during a target interaction event associated with a plurality of live-streaming rooms, presenting one group of interaction resources for the target interaction event in a live-streaming interface of a first live-streaming room among the plurality of live-streaming rooms, wherein the live-streaming interface corresponds to a target user of the first live-streaming room, and the one group of interaction resources is obtained on the basis of at least one historical interaction operation of the target user during the target interaction event and/or at least one historical interaction operation of the target user during a historical interaction event associated with the first live-streaming room; and on the basis of a selection of a target interaction resource in the one group of interaction resources, applying a target strategy corresponding to the target interaction resource to the target interaction event. In this way, the embodiments of the present disclosure can improve the degree of participation of a user in a live-streaming interaction event.
The disclosure relates to a method, apparatus, device and medium for producing video. The method includes: creating a video producing task for a user group comprising at least a first user and a second user, the video producing task being used for collecting a storyboard video material and/or editing the storyboard video material into a target video; in response to a first editing operation of the first user for the video producing task, displaying an editing result of the first editing operation and recording the first editing operation in the video producing task; in response to a second editing operation of the second user for the video producing task, displaying an editing result of the second editing operation and recording the second editing operation in the video producing task; and generating a production result of the video producing task based on editing operations recorded in the video producing task.
H04N 21/431 - Génération d'interfaces visuellesRendu de contenu ou données additionnelles
H04N 21/44 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé
The present disclosure provides an interaction method and apparatus, a computer device, and a storage medium. The interaction method includes: in response to a dialog request for a target character associated with a target text, displaying a dialog page between a user and the target character, wherein the target character is associated with a target character model, and the target character model is determined based on the target text and text comprehension information in a plurality of text comprehension dimensions associated with the target character; and in response to receiving a first question input on the dialog page, obtaining and displaying a first answer result of the target character, wherein the first answer result is generated based on the target character model, and the first answer result matches the text comprehension information in the plurality of text comprehension dimensions.
G10L 13/033 - Édition de voix, p. ex. transformation de la voix du synthétiseur
G10L 13/08 - Analyse de texte ou génération de paramètres pour la synthèse de la parole à partir de texte, p. ex. conversion graphème-phonème, génération de prosodie ou détermination de l'intonation ou de l'accent tonique
81.
GAME INTERACTION CONTROL METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE
The disclosure provides a method and apparatus for processing an image, an electronic device, and a storage medium. The image processing method includes: obtaining an image to be processed, where the image to be processed includes a face region, and a skin state of the face region in the image to be processed is at a first skin age; and inputting the image to be processed into a pre-trained target skin image processing model, and obtaining a target effect image corresponding to the image to be processed, where the target skin image processing model is obtained by training an initial skin image processing model according to a sample original image and a sample effect image corresponding to the sample original image, a skin state of a face region in the target effect image is at a second skin age, and the second skin age is less than or equal to the first skin age.
Provided in the embodiments of the present disclosure are a live streaming interaction method and apparatus, and a device and a storage medium. The method comprises: presenting respective identifiers of a plurality of users in a live streaming room interface according to a preset layout, wherein the plurality of users participate in an interaction event in a live streaming room; and on the basis of interaction information of the plurality of users in the interaction event, updating display modes of the respective identifiers of the plurality of users in the layout, wherein the identifier of at least one user among the plurality of users is displayed in a more highlighted manner than the identifiers of the other users among the plurality of users. In this way, a display style of an identifier of a user can be dynamically changed on the basis of the interaction of the user in an interaction event in a live streaming room, so as to indicate the interaction situation of the user in the interaction event, thereby improving the interaction experience.
Embodiments of the present application relate to an information display method and apparatus, a device and a storage medium. The method provided herein comprises: presenting a first area in a livestreaming interface of a livestreaming room, wherein the first area is used for displaying interaction information on the basis of a preset information display condition, and the interaction information corresponds to interaction events associated with the livestreaming room; and in response to a target interaction event associated with the livestreaming room: if the target interaction event matches the information display condition, displaying, at least in the first area, target interaction information corresponding to the target interaction event; and if the target interaction event does not match the information display condition, displaying the target interaction information in a second area of the livestreaming interface, wherein the first area is different from the second area. On the basis of said method, the embodiments of the present application can improve the efficiency of obtaining information in the livestreaming room.
H04N 21/431 - Génération d'interfaces visuellesRendu de contenu ou données additionnelles
H04N 21/4788 - Services additionnels, p. ex. affichage de l'identification d'un appelant téléphonique ou application d'achat communication avec d'autres utilisateurs, p. ex. discussion en ligne
84.
METHOD, APPARATUS, TERMINAL AND STORAGE MEDIUM FOR INFORMATION PROCESSING
The disclosure provides a method, apparatus, terminal and storage medium for information processing. The method of information processing includes: in response to a first operation event on a first content in a content interface of a first document, creating first comment information of the first content in the content interface, and publishing the first comment information to a discussion interface, wherein the content interface and the discussion interface are different interfaces, and the discussion interface is configured to display information published by a current user and an associated user of the current user.
Embodiments of the application provide a method, apparatus, device, and storage medium for environment calibration. The method includes: at an electronic device configured to communicate with a display generation component and one or more input devices: displaying, via the display generation component, a three-dimensional computer-generated environment; determining, in the three-dimensional computer-generated environment, calibration composition data of a target physical object in a computation coordinate system; and determining a calibration model of the target physical object in a rendering coordinate system used for calibration, based on the calibration composition data and a coordinate offset between the computation coordinate system and the rendering coordinate system.
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersectionsAnalyse de connectivité, p. ex. de composantes connectées
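As an illustration of the coordinate-system conversion described in the environment calibration abstract above, the following Python sketch maps calibration points from the computation coordinate system into the rendering coordinate system. It assumes the coordinate offset reduces to a constant translation; real systems may also require a rotation or scale, which is left unspecified here, and all names are illustrative.

import numpy as np

# Minimal sketch, assuming the offset between the two coordinate systems is a pure translation.
def to_rendering_coords(calibration_points: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Map calibration composition data (N x 3 points) into the rendering coordinate system."""
    return calibration_points + offset

points_computation = np.array([[0.1, 0.0, 1.2], [0.3, 0.1, 1.1]])
offset = np.array([0.0, 1.5, 0.0])          # e.g. rendering origin sits 1.5 m above the computation origin
points_rendering = to_rendering_coords(points_computation, offset)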
86.
COMMUNICATION METHOD, CLIENT, SERVER CONTROL METHOD, AND STORAGE MEDIUM
The present invention provides a communication method, a client, a server control method, and a storage medium. In some embodiments, the present invention provides a communication method, comprising: determining a first symbol inputted on the basis of a first client; sending the first symbol to a second client to cause the second client to display a second symbol having the same meaning as the first symbol; or, determining a second symbol according to the first symbol and regional information of the second client, and sending the second symbol to the second client, wherein the first symbol and the second symbol are different symbols having the same meaning. The method provided by the present invention can solve communication problems caused by differences in users' symbol usage habits, without requiring users to change those habits.
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. The method includes: receiving a first image sequence, which includes a first image and is transmitted at a first transmission frame rate, and a second image sequence, which includes second images and is transmitted at a second transmission frame rate; generating a third image sequence based on the first image sequence and the second image sequence, where the third image sequence includes third images, the third images are in a one-to-one correspondence with the second images, image content of a first area in each of the third images comes from the second image, and image content of areas other than the first area in the third image comes from the first image; and generating a video stream based on the third image sequence and transmitting the video stream to a user.
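As an illustrative sketch of the compositing step in the abstract above, the following Python function takes the pixels of the first area from a second image and everything else from a first image. It assumes both frames are already aligned, share the same H x W x 3 shape, and that the first area is an axis-aligned rectangle; the area representation and names are assumptions for illustration.

import numpy as np

# Minimal sketch, assuming aligned frames of identical shape and a rectangular first area.
def compose_third_image(first_image: np.ndarray, second_image: np.ndarray,
                        first_area: tuple[int, int, int, int]) -> np.ndarray:
    """Fill the first area from the second image and the remaining areas from the first image."""
    top, left, height, width = first_area
    third = first_image.copy()
    third[top:top + height, left:left + width] = second_image[top:top + height, left:left + width]
    return third

background = np.zeros((480, 640, 3), dtype=np.uint8)        # frame from the first (lower-rate) sequence
detail = np.full((480, 640, 3), 255, dtype=np.uint8)        # frame from the second (higher-rate) sequence
third_frame = compose_third_image(background, detail, (100, 200, 120, 160))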
The disclosure provides a controlling method and apparatus based on extended reality, an electronic device, and a storage medium. The controlling method based on extended reality includes: determining a first audio in a real space in response to a first operation event of a current user; determining a first position in an extended reality space in response to a second operation event of the current user; and playing the first audio by taking the first position as a sound source position of the first audio in the extended reality space.
The present disclosure provides a display control method and apparatus, an electronic device, and a storage medium. In the display control method, a display unit is communicatively connected to a processing unit, so that a user is able to view a real environment through the display unit. The method includes: displaying a user interface by the display unit, where the user interface includes: an image of the real environment presented through the display unit and target information generated by the processing unit; and controlling, in response to the head of the user being rotated in a first direction of rotation, the target information to rotate, in the user interface, relative to the head in a direction opposite to the first direction of rotation. In some embodiments of the present disclosure, the usage experience of a user is improved.
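As an illustrative sketch of counter-rotating the target information against the head rotation, as described in the abstract above, the following Python function assumes only the yaw angle matters and that the information is drawn at a yaw offset inside the user interface; the anchoring rule and names are assumptions for illustration.

# Minimal sketch, assuming yaw angles in degrees and a world-anchored information panel.
def target_info_yaw(head_yaw_deg: float, anchor_yaw_deg: float = 0.0) -> float:
    """Rotate the target information opposite to the head so it stays anchored in the real scene."""
    return (anchor_yaw_deg - head_yaw_deg) % 360.0

# Example: the user turns the head 30 degrees to the right, so the information is drawn 30 degrees to the left.
print(target_info_yaw(30.0))   # 330.0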
Embodiments of the disclosure provide a method, an apparatus, an electronic device, and a storage medium for image processing. The method includes: obtaining a target image to be processed in response to a first trigger operation, generating and displaying an effect image corresponding to the target image, the effect image including an effect contour line corresponding to a target object in the target image, the effect contour line being associated with an outer contour line of the target object, and a relative distance between the effect contour line and the outer contour line being a first distance; and in response to a second trigger operation input for the effect contour line, adjusting the relative distance between the effect contour line in the effect image and the outer contour line of the target object from the first distance to a second distance for display.
G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06V 10/46 - Descripteurs pour la forme, descripteurs liés au contour ou aux points, p. ex. transformation de caractéristiques visuelles invariante à l’échelle [SIFT] ou sacs de mots [BoW]Caractéristiques régionales saillantes
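As a rough illustration of adjusting the distance between the effect contour line and the object's outer contour, as described in the image processing abstract above, the following Python sketch pushes each contour point away from the contour centroid by the requested distance. This centroid-based approximation, the point format and all names are assumptions for illustration; a production implementation would typically offset along per-point normals.

import numpy as np

# Minimal sketch, assuming the outer contour is an N x 2 array of points.
def offset_contour(outer_contour: np.ndarray, distance: float) -> np.ndarray:
    centroid = outer_contour.mean(axis=0)
    directions = outer_contour - centroid
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return outer_contour + directions / norms * distance

square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], dtype=float)
effect_contour_near = offset_contour(square, 2.0)   # first distance
effect_contour_far = offset_contour(square, 5.0)    # second distance after the user's adjustment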
91.
INFORMATION DISPLAY METHOD AND APPARATUS, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT
The present disclosure relates to an information display method and apparatus, a device, a storage medium and a program product. The method comprises: displaying a plurality of object identifiers in a first area of a content display interface, the first area being configured for switching of the display of the plurality of object identifiers based on a first interaction operation; and in response to a preset operation on the first area, switching in the content display interface to displaying the plurality of object identifiers in a second area, wherein the second area is configured for switching of the display of the plurality of object identifiers based on a second interaction operation.
G06F 3/04845 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs pour la transformation d’images, p. ex. glissement, rotation, agrandissement ou changement de couleur
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect utilisant des icônes
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
92.
COLLABORATIVE TASK PROCESSING METHOD, DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A collaborative task processing method, a device, and a computer-readable storage medium are provided. The method includes: displaying a collaborative task list related to a first user in a collaborative task center interface of the first user; determining, in response to a filter operation of the first user for filtering a collaborative task based on a task creation source type, a target task creation source type specified by the first user; filtering a collaborative task in the collaborative task list based on the target task creation source type; and displaying a filtering result in the collaborative task center interface of the first user.
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
G06F 9/451 - Dispositions d’exécution pour interfaces utilisateur
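As an illustrative sketch of the filtering step described in entry 92 above, the following Python code keeps only the collaborative tasks whose creation source matches the type chosen by the first user. The field names and the example source types are assumptions made for illustration.

from dataclasses import dataclass

# Minimal sketch, assuming each task records the type of the source that created it.
@dataclass
class CollaborativeTask:
    title: str
    creation_source_type: str

def filter_by_source_type(task_list: list[CollaborativeTask],
                          target_source_type: str) -> list[CollaborativeTask]:
    """Keep only the tasks whose creation source matches the type specified by the first user."""
    return [task for task in task_list if task.creation_source_type == target_source_type]

tasks = [CollaborativeTask("Review spec", "document"),
         CollaborativeTask("Follow up", "chat")]
print(filter_by_source_type(tasks, "document"))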
The embodiments of the present disclosure provide a text animation generation method and apparatus, an electronic device, and a storage medium. The text animation generation method includes: in response to a first user operation, acquiring a target text and reference data corresponding to the target text, the reference data being used for indicating a font effect of an effect text generated based on the target text; generating a text image according to the target text and the reference data, the text image including the effect text corresponding to the target text; and generating a text animation corresponding to the effect text according to the text image.
The present disclosure provides a gesture recognition method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a gesture image, and extracting gesture features of the gesture image, the gesture features being used as shared features of a plurality of sub-task branches for implementing gesture recognition, and the plurality of sub-task branches at least comprising a gesture category classification branch and a palm orientation classification branch; processing the gesture features by means of the gesture category classification branch so as to generate first sub-attribute information representing a gesture category; processing the gesture features by means of the palm orientation classification branch so as to generate second sub-attribute information representing a palm orientation; and, on the basis of the first sub-attribute information and the second sub-attribute information, determining whether the gesture image comprises a gesture representing a specified action.
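As an illustrative sketch of the shared-feature, multi-branch design described in the abstract above, the following Python code uses a small convolutional backbone whose features feed a gesture-category head and a palm-orientation head. The backbone, the input size of 3 x 64 x 64, and the numbers of classes are assumptions for illustration, not the disclosure's actual network.

import torch
import torch.nn as nn

# Minimal sketch, assuming illustrative layer sizes and class counts.
class GestureMultiTaskNet(nn.Module):
    def __init__(self, num_gestures: int = 10, num_orientations: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(                        # shared gesture features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gesture_head = nn.Linear(32, num_gestures)           # gesture category branch
        self.orientation_head = nn.Linear(32, num_orientations)   # palm orientation branch

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.gesture_head(features), self.orientation_head(features)

model = GestureMultiTaskNet()
gesture_logits, orientation_logits = model(torch.randn(1, 3, 64, 64))
# A specified action would be recognized only when both branches produce the expected classes.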
Disclosed in one or more embodiments of the present disclosure are a data processing method, a discrete representation determination method, and a signal generation method. The data processing method comprises: acquiring abstract representation data of a first timing signal; searching at least two discrete data spaces for a first discrete data combination corresponding to the abstract representation data; on the basis of the abstract representation data and the first discrete data combination, updating the at least two discrete data spaces; searching the at least two updated discrete data spaces for a second discrete data combination corresponding to the abstract representation data, such that the second discrete data combination can represent a global optimal solution which is determined by using the at least two updated discrete data spaces; and on the basis of the second discrete data combination, determining discrete representation data of the first timing signal.
Provided in the present disclosure are a target object identification method and apparatus, and a device and a storage medium. The method comprises: acquiring a target video, and identifying respective motion trajectories of one or more target objects in the target video; extracting a plurality of attribute features of the target objects from the motion trajectories of the target objects, and constructing a heterogeneous graph between the attribute features of the target objects and pre-stored base library features, wherein the attribute features and the base library features serve as nodes in the heterogeneous graph, and there are initial connection relationships between some of the nodes; and updating the initial connection relationships in the heterogeneous graph, determining, from the base library features, matching results of the attribute features of the target objects on the basis of the updated connection relationships between the nodes, and identifying the target objects on the basis of the matching results.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
97.
AUDIO PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
An audio processing method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring first audio data, which has first text content, a first tone and a first rhythm (S102); processing the first audio data by means of an audio processing model, so as to acquire a content feature of second audio data, a rhythm feature of the second audio data and a tone feature of a second tone, wherein second text content of the second audio data is the same as the first text content, the second tone of the second audio data is different from the first tone, and a second rhythm of the second audio data is the same as the first rhythm or is the same as a third rhythm of the second tone (S104); and by means of the audio processing model, generating the second audio data on the basis of the content feature of the second audio data, the rhythm feature of the second audio data and the tone feature of the second tone (S106).
G10L 13/047 - Architecture des synthétiseurs de parole
G10L 13/08 - Analyse de texte ou génération de paramètres pour la synthèse de la parole à partir de texte, p. ex. conversion graphème-phonème, génération de prosodie ou détermination de l'intonation ou de l'accent tonique
G10L 25/30 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes caractérisées par la technique d’analyse utilisant des réseaux neuronaux
98.
METHOD AND APPARATUS FOR INFORMATION PROCESSING, AND DEVICE AND STORAGE MEDIUM
In the embodiments of the present disclosure, provided are a method and apparatus for information processing, and a device and a storage medium. The method comprises: on the basis of an interaction event between a target object and at least one service component, generating historical interaction information for the interaction event, wherein the historical interaction information at least comprises a knowledge element, and the knowledge element is used for describing a service object associated with the interaction event; and providing the historical interaction information for interaction between the target object and a digital assistant. Thus, by using a knowledge element to describe a service object involved in an interaction event, the embodiments of the present disclosure can support a digital assistant in providing richer interaction capabilities.
Provided in the embodiments of the present disclosure are a method and apparatus for generating an image, and a device and a storage medium. The method comprises: acquiring input text, wherein the input text instructs generation of an image corresponding to at least one character; processing the input text by using a first model, so as to determine material description text corresponding to the at least one character; acquiring a material image generated on the basis of the material description text; and, on the basis of the material image and a glyph image corresponding to the at least one character, generating a target image corresponding to the at least one character. In this way, a target image corresponding to a character can be generated on the basis of input text, and a user is allowed to freely edit the input text so as to generate diversified target images, thereby meeting diversified image generation requirements of the user.
G06T 11/60 - Édition de figures et de texteCombinaison de figures ou de texte
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p. ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersectionsAnalyse de connectivité, p. ex. de composantes connectées
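As an illustrative sketch of combining a material image with a glyph image, as described in the image generation abstract above, the following Python code fills the character strokes with the material texture. It assumes both images share the same size, that the glyph image is a binary stroke mask, and a plain background; these choices and all names are assumptions for illustration only.

import numpy as np

# Minimal sketch, assuming H x W arrays of equal size and a binary glyph mask.
def render_character(material_image: np.ndarray, glyph_mask: np.ndarray,
                     background_value: float = 1.0) -> np.ndarray:
    """Show the material texture inside the character strokes and a flat background elsewhere."""
    target = np.full_like(material_image, background_value)
    target[glyph_mask > 0] = material_image[glyph_mask > 0]
    return target

material = np.random.rand(64, 64)        # stands in for the generated material image
glyph = np.zeros((64, 64))
glyph[20:44, 28:36] = 1                  # stands in for a rendered character stroke
target_image = render_character(material, glyph)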
The present disclosure provides an image processing method and a related device. The method comprises: acquiring a text inputted by a user; on the basis of the text, determining at least one keyword; on the basis of the at least one keyword, determining at least one piece of gradient color information; and on the basis of the at least one piece of gradient color information, determining a placeholder image having a gradient color effect.
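As an illustrative sketch of building a placeholder image with a gradient color effect, as in the abstract above, the following Python code blends two RGB endpoint colors from left to right. The keyword-to-color mapping, the horizontal blend direction and the image size are assumptions made for illustration.

import numpy as np

# Minimal sketch, assuming the gradient color information is a pair of RGB endpoint colors.
def gradient_placeholder(start_rgb: tuple[int, int, int], end_rgb: tuple[int, int, int],
                         height: int = 128, width: int = 256) -> np.ndarray:
    t = np.linspace(0.0, 1.0, width)[None, :, None]            # horizontal blend factor
    start = np.array(start_rgb, dtype=float)[None, None, :]
    end = np.array(end_rgb, dtype=float)[None, None, :]
    row = start * (1.0 - t) + end * t                          # 1 x W x 3 gradient row
    return np.repeat(row, height, axis=0).astype(np.uint8)     # H x W x 3 placeholder image

# Example: keywords such as "sunset" might map to an orange-to-purple gradient.
placeholder = gradient_placeholder((255, 140, 0), (120, 60, 160))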