The present disclosure provides an effect video generation method and apparatus, an electronic device, and a storage medium. The effect video generation method includes: when it is detected that a current video frame to be processed includes point cloud data to be processed, determining distance information between the point cloud data to be processed and at least one historically estimated plane; and in a case that the distance information and a data volume of the point cloud data to be processed satisfy a plane estimation condition, performing plane estimation on the point cloud data to be processed on the basis of a random sample consensus algorithm to generate a target estimated plane, so as to display, based on each estimated plane, a target effect corresponding to the current video frame to be processed and obtain a target effect video frame.
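Below is a minimal, illustrative Python sketch of the plane-estimation step summarized above, assuming the point cloud to be processed is an N×3 NumPy array; the threshold values, iteration count and the helper name `fit_plane_ransac` are illustrative assumptions rather than details of the disclosure.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.02, min_points=50):
    """Estimate a plane (normal, d) with normal . p + d = 0 from an (N, 3) point cloud.

    Returns None when there are too few points to attempt plane estimation
    (a stand-in for the data-volume part of the plane estimation condition).
    """
    if len(points) < min_points:
        return None

    rng = np.random.default_rng(0)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        # Sample three points and derive the candidate plane they span.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)

        # Consensus: points whose distance to the candidate plane is small.
        inlier_count = int((np.abs(points @ normal + d) < dist_thresh).sum())
        if inlier_count > best_inliers:
            best_inliers, best_plane = inlier_count, (normal, d)
    return best_plane
```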
The present disclosure relates to the technical field of video processing, and relates to a video processing method and apparatus, and an electronic device. The method comprises: first acquiring key frames, which correspond to transition images, in a video; next, determining splitting nodes for the video according to the key frames corresponding to the transition images; then, splitting the video according to the splitting nodes for the video, so as to obtain video fragments; and finally performing video parallel processing on the basis of the video fragments.
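As a rough illustration of the split-then-process-in-parallel flow described above, here is a short Python sketch; the frame representation, the `process_fragment` placeholder and the worker count are assumptions, not details taken from the abstract.

```python
from concurrent.futures import ProcessPoolExecutor

def split_at_nodes(frames, split_nodes):
    """Cut the frame sequence at the splitting nodes derived from transition key frames."""
    nodes = sorted({n for n in split_nodes if 0 < n < len(frames)})
    bounds = [0] + nodes + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:])]

def process_fragment(fragment):
    """Placeholder for the per-fragment work (e.g. analysis or transcoding)."""
    return len(fragment)

def process_video_in_parallel(frames, split_nodes, workers=4):
    fragments = split_at_nodes(frames, split_nodes)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_fragment, fragments))
```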
A method and apparatus for controlling a virtual object, and a device and a storage medium are provided. The method includes: obtaining posture information of a user in a current frame; in response to determining that the posture information matches a preset posture, controlling movement of a first virtual object and a second virtual object in a current virtual scene based on the posture information; during the movement, in response to determining that the first virtual object and the second virtual object meet a set contact condition, generating a first result; and in response to determining that the first virtual object and the second virtual object do not meet the set contact condition, generating a second result.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for steering a character
4.
ANIMATION RENDERING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Provided in the present disclosure are an animation rendering method and apparatus, and a device and a storage medium. The method includes: firstly, determining a target graphic object on an image frame to be rendered in a target graphic animation and a graphic drawing mode of the target graphic object; if it is determined that the graphic drawing mode of the target graphic object is a filling drawing mode, for a triangulation state of the target graphic object, determining whether the image frame to be rendered changes compared with an adjacent previous image frame; and if the image frame to be rendered does not change, on the basis of triangulation data of a graphic object, corresponding to the target graphic object, on the adjacent previous image frame, rendering the target graphic object on the image frame to be rendered.
Embodiments of the disclosure provide a video synthesis method, apparatus, device, medium, and product. The method includes: obtaining a region image corresponding to a target object and a background image excluding the region image by performing an image segmentation on an image frame in a video to be processed; determining a mirror image corresponding to the region image; obtaining at least one first extended image by extending the region image in a first extension direction; obtaining at least one second extended image by extending the mirror image in a second extension direction different from the first extension direction; rendering the background image, the at least one second extended image and the at least one first extended image to obtain a target rendered image at the end of the rendering; obtaining a target video corresponding to the video to be processed by performing a video synthesis on the target rendered image.
A method, an apparatus, a device, and a medium for generating a video are provided. In one method, a plurality of video frames in a first video are obtained. A plurality of evaluation indicators of the plurality of video frames in the first video are obtained, respectively. The plurality of video frames is divided into at least one group of consecutive video frames based on the plurality of evaluation indicators, respective evaluation indicators of respective video frames in the at least one group of consecutive video frames meeting a predetermined condition. For a first group of consecutive video frames in the at least one group of consecutive video frames, a first time period associated with the first group of consecutive video frames is determined. A second video is generated by using a first video segment in the first time period in the first video.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/60 - Extraction of image or video features relating to luminescent properties, e.g. using a reflectance or lighting model
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
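A small Python sketch of how the frame-grouping step of the video generation method above could look, assuming each frame has a single scalar evaluation indicator and the predetermined condition is a simple threshold (both assumptions made for illustration):

```python
def group_consecutive_frames(indicators, threshold):
    """Return groups of consecutive frame indices whose indicator meets the condition."""
    groups, current = [], []
    for idx, score in enumerate(indicators):
        if score >= threshold:
            current.append(idx)
        elif current:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

def group_time_period(group, fps):
    """Map a group of frame indices to its time period (in seconds) within the first video."""
    return group[0] / fps, (group[-1] + 1) / fps

# Example: frames 1-3 and 6-7 form two groups at a 0.8 threshold.
groups = group_consecutive_frames([0.2, 0.9, 0.95, 0.85, 0.1, 0.3, 0.9, 0.9], 0.8)
```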
The present disclosure relates to a display apparatus and an extended reality device. The display apparatus comprises: a first display module, a second display module and a first flexible circuit board. The first display module comprises a first substrate, N first signal lines and a multiplexer being arranged on the first substrate, a first connection end of the first flexible circuit board being fixed on the first substrate, and the multiplexer being electrically connected between the first connection end of the first flexible circuit board and the first signal lines. The second display module comprises a second substrate, N second signal lines and a first demultiplexer being arranged on the second substrate, a second connection end of the first flexible circuit board being fixed to the second substrate, and the first demultiplexer being electrically connected between the second connection end of the first flexible circuit board and the second signal lines. The number of electrical connection lines in the first flexible circuit board is M, M being less than N, and M and N being both positive integers.
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
8.
OPTICAL WAVEGUIDE STRUCTURE, OPTICAL WAVEGUIDE MODULE AND HEAD-MOUNTED DISPLAY DEVICE
The application discloses an optical waveguide structure, an optical waveguide module and a head-mounted display device. The optical waveguide structure according to an implementation of the disclosure includes a waveguide substrate, an in-coupling grating and an out-coupling grating. The in-coupling grating is used to couple input light into the waveguide substrate for transmission. The out-coupling grating is used to perform pupil expansion and out-coupling on light transmitted in the waveguide substrate. The out-coupling grating is a two-dimensional grating, and the two-dimensional grating has a two-dimensional structural element(s). A characteristic axis direction of the two-dimensional structural element(s) forms a relative rotation angle with a direction of a lattice period vector sum of the two-dimensional grating to change a diffraction efficiency of the out-coupling grating in a predetermined direction.
The embodiments of the disclosure provide a code cleaning method, device, and medium. A specific embodiment of the method comprises the following steps: running a target test case; obtaining metadata of a target class through reflection on the target class in a target bytecode called in the target test case; determining, based on the metadata, a dependency relationship between members comprised in the target class; determining real values of the members associated with the dependency relationship based on the running of the target test case; determining a dead branch in a decision structure comprised in the target class based on the real values; and deleting the statement corresponding to the dead branch from the source code corresponding to the target class.
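The following Python sketch illustrates the dead-branch idea of the code cleaning method above in a simplified form: branch conditions are modeled as predicates over recorded member values, and a branch is flagged as dead when its condition never varies across the recorded test-case run. The `find_dead_branches` helper and the data shapes are assumptions for illustration; the disclosure itself works on compiled bytecode via reflection.

```python
def find_dead_branches(decision_structures, recorded_values):
    """Flag branches whose condition never varied over the recorded test-case run.

    decision_structures: mapping from a branch name to a predicate over member values.
    recorded_values: dicts of real member values observed while running the test case.
    """
    dead = []
    for name, condition in decision_structures.items():
        outcomes = {bool(condition(values)) for values in recorded_values}
        if outcomes == {True}:
            dead.append((name, "else branch never taken"))
        elif outcomes == {False}:
            dead.append((name, "then branch never taken"))
    return dead

# Example: the member `mode` only ever took the value "fast" during the test run,
# so the branch guarded by mode == "slow" is a candidate for deletion.
dead = find_dead_branches(
    {"slow_path": lambda v: v["mode"] == "slow"},
    recorded_values=[{"mode": "fast"}, {"mode": "fast"}],
)
```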
Embodiments of the present disclosure provide an image processing method, an electronic device, and a storage medium. The method includes: obtaining an image to be processed and a target color card, where the image to be processed includes a target layer set, and the target layer set includes a material layer, where the material layer includes target materials of the image to be processed; grouping the target materials based on material attributes of the target materials, to obtain at least one material group; and determining a mapping relationship between the at least one material group and candidate colors in the target color card, obtaining a target color corresponding to each material group of the at least one material group based on the mapping relationship, and updating colors of target materials in each material group based on the target color.
G06T 7/90 - Determination of colour characteristics
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
11.
INFORMATION DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Embodiments of the present disclosure provide an information display method and apparatus, a device, and a storage medium. The method comprises: in response to search information input into a material search box of a video editing interface, acquiring from a preset database first search result information corresponding to the search information, wherein the preset database comprises a correspondence between video frame text and video frame information, a correspondence between audio text and video frame information, and a correspondence between video file information and video frame information; and displaying the first search result information in the video editing interface, wherein the first search result information comprises: target video frame information corresponding to at least one respective target video frame text comprising the search information, video file information corresponding to the target video frame information, and audio text corresponding to the target video frame information. The present disclosure can reduce the time spent by users on searching for materials, thereby improving user experience.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
12.
INFORMATION DISPLAY METHOD AND APPARATUS, A DEVICE, AND A STORAGE MEDIUM
Embodiments of the present application relate to an information display method and apparatus, a device, and a storage medium. The method presented herein comprises: displaying a viewing interface associated with an application (410); and in the viewing interface, displaying prompt information corresponding to the current life cycle stage of the application, wherein the prompt information comprises at least one of the following: a label used for indicating the current life cycle stage, or an interactive information component corresponding to the current life cycle stage (420).
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04842 - Selection of displayed objects or displayed text elements
13.
MEDIA CACHING METHOD AND APPARATUS, AND DEVICE AND MEDIUM
Provided in the embodiments of the present disclosure are a media caching method and apparatus, and a device and a medium. The method comprises: on the basis of the playback progress of a currently played target media file in a media list, determining prediction information associated with a plurality of media files in the media list, wherein the prediction information indicates the switching probability of each segment in the plurality of media files being switched to playback within a target duration; on the basis of the prediction information and cache information, determining the degree of demand for caching each segment of the plurality of media files, wherein the degree of demand is determined on the basis of a playback latency of the corresponding segment and/or the switching probability thereof, and the cache information indicates the segments of the plurality of media files that have been cached; and on the basis of the degree of demand, determining, from the plurality of media files, at least one segment to be cached. In this way, the embodiments of the present disclosure can ensure the playback smoothness of a media file, and also avoid unnecessary download control to the greatest degree, so as to reduce bandwidth wastage.
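A minimal Python sketch of the demand-ranking idea described above, assuming each segment carries a switching probability and a normalized playback-latency estimate, and that the degree of demand is a simple weighted combination of the two (the weighting and the helper name are illustrative assumptions):

```python
def caching_demand_order(segments, cached_segment_ids, alpha=0.5):
    """Rank not-yet-cached segments by a demand score combining switching
    probability and (normalized) playback latency; highest demand first."""
    scores = {}
    for seg_id, info in segments.items():
        if seg_id in cached_segment_ids:
            continue
        scores[seg_id] = alpha * info["switch_prob"] + (1 - alpha) * info["latency"]
    return sorted(scores, key=scores.get, reverse=True)

# Example: segment "b-0" is requested for caching before "a-3".
order = caching_demand_order(
    {"a-3": {"switch_prob": 0.2, "latency": 0.1},
     "b-0": {"switch_prob": 0.7, "latency": 0.4}},
    cached_segment_ids=set(),
)
```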
The present disclosure relates to a recommendation method and apparatuses, a computer-readable storage medium and a computer program product, and to the technical field of multimedia. The recommendation method comprises: determining an object to be recommended matched with a reference object in a reference book; using a description text of the reference object in the reference book and the object to be recommended to generate multimedia content for introducing the object to be recommended; and, during a reading process of a user, displaying the multimedia content.
The embodiments of the present disclosure relate to a method and apparatus for generating background music for a video, and an electronic device and a program product. The method comprises: determining a scene transition point in a video, wherein the scene transition point represents a transition between two scenes (202). The method further comprises: on the basis of music features of first background music of the video, determining an audio track group collection for second background music (204). The method further comprises: on the basis of the scene transition point and the audio track group collection for the second background music, generating the second background music (206).
Provided in the embodiments of the present disclosure are a video generation method, and a device. The video generation method comprises: presenting at least one video generation control on a preset content presentation page, and presenting at least one piece of historically posted video reference content on the content presentation page; in response to a trigger operation of a user on the at least one video generation control or the video reference content, acquiring first description media content and video parameters, which are determined by the user; and in response to a generation operation triggered by the user, generating a target video on the basis of the first description media content and the video parameters. Therefore, presentation content on a content presentation page can be enriched, and trigger methods for video generation are also enriched. In addition, a generation operation for a target video is performed on the basis of both first description media content and video parameters, thereby improving the content quality of the generated target video, and improving the user experience.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
The present disclosure provides a battery and an electronic device. The battery comprises: at least one negative electrode current collector, wherein the at least one negative electrode current collector has a first surface and a second surface; and a plurality of coating areas, located on the surfaces of the at least one negative electrode current collector, wherein the impedance of at least two coating areas among the plurality of coating areas is different and satisfies that: the impedance of at least one coating area located on the first surface is different from the impedance of at least one coating area located on the second surface. The present disclosure can balance the energy density and the electric conduction speed of the battery, and improve the battery capacity of a large-power-consumption device.
A method and apparatus for generating music, and an electronic device and a program product. The method (200) comprises: determining a music element of input music, wherein the music element of the input music comprises at least one of a structure, a beat, a genre, an instrumentation timbre, harmony and sentiment (202); on the basis of the music element of the input music, determining a set of audio track groups of output music, wherein each audio track group is a combination of audio tracks of music (204); and on the basis of the set of audio track groups of the output music, generating the output music corresponding to the input music (206).
The present disclosure relates to the technical field of video processing, and relates to a video generation method and apparatus and a computer readable storage medium. The video generation method comprises: on the basis of a first user instruction, controlling an action of a first virtual object in a virtual scene, so as to acquire first original action information of the first virtual object; on the basis of a second user instruction and the first original action information, controlling an action of a second virtual object in the virtual scene, so as to acquire second original action information of the second virtual object; and generating a video on the basis of the first original action information and the second original action information.
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. The image processing method comprises: displaying an image editing page, wherein the image editing page comprises a parameter setting area and a result display area, and the parameter setting area comprises product images and candidate category options; in response to a selection event for the candidate category options, acquiring an object category value corresponding to a target category option, and if the object category value satisfies a preset category condition, displaying candidate style options in the parameter setting area; and in response to a selection event for the candidate style options, acquiring a style parameter corresponding to a target style option, generating a target image of a target object on the basis of the style parameter, and displaying the target image in the result display area. According to the method and the related apparatus provided in the embodiments of the present disclosure, the target image can be automatically generated on the basis of the style parameter, thereby reducing the image editing difficulty, and improving the image generation efficiency.
The present disclosure provides a method for generating a video having a border image and a device. The method includes: acquiring a first video; extracting, from the first video, a text for describing content in the first video; generating a border image based on the text, where the text is used to determine content of the border image; and compositing the first video with the border image to obtain a second video.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
22.
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM
Embodiments of the present disclosure relate to a data processing method and apparatus, a device, and a medium. The method comprises: respectively performing pruning processing on candidate network layers in an original neural network according to a plurality of preset pruning rates to obtain a plurality of corresponding sub-neural networks; respectively inputting test data sets into the original neural network and the plurality of sub-neural networks for processing, and obtaining, on the basis of output data sets of the original neural network and the plurality of sub-neural networks, a reference performance index corresponding to the original neural network and a plurality of test performance indexes corresponding to the plurality of sub-neural networks; and analyzing, according to performance losses of the plurality of test performance indexes relative to a reference performance index, parameter redundancies of parameters of the candidate network layers in the original neural network under different pruning rates.
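The sketch below illustrates the redundancy-analysis loop in plain NumPy, assuming magnitude-based pruning of per-layer weight matrices and an externally supplied `evaluate` callable that returns a performance index for a set of layer weights; the actual disclosure operates on full sub-neural networks evaluated against test data sets.

```python
import numpy as np

def prune_layer(weights, rate):
    """Zero out the smallest-magnitude fraction `rate` of a layer's weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * rate)
    if k == 0:
        return weights.copy()
    thresh = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def redundancy_report(layers, pruning_rates, evaluate):
    """Record, per candidate layer and pruning rate, the performance loss of the
    pruned sub-network relative to the reference performance of the original network."""
    reference = evaluate(layers)
    report = {}
    for name, weights in layers.items():
        for rate in pruning_rates:
            sub_network = dict(layers, **{name: prune_layer(weights, rate)})
            report[(name, rate)] = reference - evaluate(sub_network)
    return report
```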
The present disclosure provides a video processing method, apparatus, and device, and a storage medium. The video processing method includes: displaying, in response to a preset trigger operation acting on a video playing page of a first video, a mask page on a current video frame picture of the first video; pulling up, in response to a preset slide operation acting on the mask page, a video recommendation page from a bottom of the mask page for display, and showing a cover of a recommended video corresponding to the first video on the video recommendation page. The recommended video belongs to a recommended video stream determined based on the first video.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
24.
METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT OF AUDIO PROCESSING
The disclosure relates to a method, an apparatus, a device, a storage medium and a program product for processing audio. The method includes: capturing external sound to obtain a second audio while a first audio is being played externally; determining a play duration of the first audio based on a current system time and a play latency; calculating a reference timestamp based on the play duration of the first audio and a capture latency; processing the second audio based on the reference timestamp to obtain a third audio; and performing audio mixing processing on the first audio and the third audio to obtain a target audio.
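As a simple arithmetic illustration of the timestamp alignment described above: the play duration of the first audio is how long it has actually been audible (system time minus the moment playback was issued, minus the play latency), and the reference timestamp additionally subtracts the capture latency. The `play_start_ms` parameter is an assumption added so the sketch is self-contained.

```python
def reference_timestamp_ms(system_time_ms, play_start_ms, play_latency_ms, capture_latency_ms):
    """Compute the reference timestamp used to align the captured (second) audio
    with the played (first) audio.

    play_duration: how much of the first audio has actually been heard so far.
    The reference timestamp further compensates for the capture latency.
    """
    play_duration = system_time_ms - play_start_ms - play_latency_ms
    return play_duration - capture_latency_ms
```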
Embodiments of the present disclosure provide a special effect processing method, an electronic device, a storage medium, and a program product. The method includes: displaying a special effect preview interface in response to an effect preview triggering operation for a target special effect, where the special effect preview interface at least includes a time setting item, and the time setting item is used to set a preview moment of the target special effect; and in response to a time setting triggering operation based on the time setting item, determining a special effect preview image according to a set preview moment and the target special effect, and displaying the special effect preview image.
The embodiments of the disclosure disclose a method, an apparatus, a device and a storage medium for message processing, and relate to the technical field of computers. The method includes: displaying a predetermined message identifier in a target interface, wherein at least one of predetermined media content and target resource information is presented in the target interface, and the predetermined message identifier is set to indicate that an unread message currently exists; and in response to a triggering operation for the predetermined message identifier, presenting a message interaction area in the target interface, and presenting the unread message sent by at least one sender in the message interaction area, wherein the message interaction area is set for message interaction between the current user and the at least one sender of an unread message.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
27.
DATA REQUEST METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Embodiments of the present disclosure disclose a data request method and apparatus, an electronic device, and a storage medium. The method is applied to a client and includes: sending a data request to a business server, so that the business server determines, based on the data request, address description information of data to be requested; receiving a first internet protocol address sent by the business server, where the first internet protocol address is obtained by performing network-side domain name resolution on the address description information by the business server; and requesting, based on the first internet protocol address, data content of the data to be requested.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories or standardised directory access protocols, using the domain name system [DNS]
H04L 61/10 - Mapping addresses of different types
H04L 61/5007 - Internet protocol [IP] addresses
28.
IMAGE PROCESSING METHOD AND APPARATUS, AND MEDIUM, PROGRAM PRODUCT AND ELECTRONIC DEVICE
The present disclosure relates to an image processing method and apparatus, and a medium, a program product and an electronic device. The image processing method comprises: in response to receiving an image to be processed, determining geometric information of a target three-dimensional model corresponding to an object, which is to be rendered, in said image; acquiring a first normal map and a second normal map, wherein the first normal map comprises normal adjustment data of each vertex in the target three-dimensional model, and the second normal map at least comprises normal adjustment data of each vertex in a plurality of glow regions on the target three-dimensional model; and rendering said image on the basis of the geometric information, the first normal map and the second normal map, so as to obtain a rendered target image. In this way, not only can the rendering effect for glow regions be effectively ensured, but multiple instances of rendering can also be effectively avoided, such that the rendering efficiency can be effectively improved, and the image processing time can be shortened.
The present disclosure provides an image rendering method and apparatus, an electronic device, and a storage medium. The method comprises: determining a first render thread from among a plurality of render threads in a rendering terminal, and generating a shared texture use request in the first render thread; on the basis of the first render thread, sending the shared texture use request to a second shared window associated with a second render thread; acquiring, from a handle list in the second shared window, a target handle matched with the shared texture use request, and transmitting the target handle to the first render thread; and switching a corresponding local window in a first shared window on the basis of the target handle, and in the local window, acting, on a shared texture corresponding to the target handle, a rendering operation of the first render thread to obtain a target rendering result.
Provided in the embodiments of the present disclosure are a video live-streaming method, apparatus and system, and a device and a storage medium. The video live-streaming method comprises: in response to a virtual video live-streaming operation triggered by a target user, sending to a server a virtual video live-streaming request corresponding to the target user, so that the server allocates to the target user a target collection device in an idle state on the basis of the virtual video live-streaming request, and returns a virtual video live-streaming start instruction (S110); in response to the virtual video live-streaming start instruction, acquiring the current viewing position and the current viewing perspective of the target user (S120); by means of a real-time communication network, sending the current viewing position and the current viewing perspective to the server, so that the server synchronously adjusts the current photographing position and the current photographing perspective of the target collection device on the basis of the current viewing position and the current viewing perspective, photographs a three-dimensional virtual scene in real time on the basis of the adjusted target collection device, and returns a captured current live-streaming video stream by means of the real-time communication network (S130); and playing the current live-streaming video stream captured in real time by the target collection device (S140). By means of the technical solution of the embodiments of the present disclosure, it is unnecessary to construct a virtual video, and the same video live-streaming effect as the virtual video is realized.
The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an original image; on the basis of the original image, obtaining a fragment moving layer and a background layer, wherein the fragment moving layer comprises a plurality of image fragments of the original image, and the plurality of image fragments present a dynamic effect of moving in a three-dimensional space, and the background layer comprises the original image; and synthesizing the fragment moving layer and the background layer to obtain a target video.
Provided in the embodiments of the present disclosure are a video processing method and apparatus, and a related product. The method comprises: acquiring video data, and acquiring a coding unit mask image corresponding to a first video frame in the video data, wherein the coding unit mask image is used for representing a coding unit division result of the first video frame; and by means of a video processing model, and on the basis of the first video frame, a video frame previous to the first video frame in the video data, a video frame following the first video frame in the video data and the coding unit mask image, performing region-adaptive feature enhancement processing on the first video frame, so as to obtain an enhanced first video frame.
The embodiments of the present disclosure relate to a search method and apparatus, and a device and a storage medium. The method provided herein comprises: on the basis of a search request of a user, displaying a result page with respect to the search request; displaying at least one virtual object on the result page, wherein the at least one virtual object comprises a virtual object, which is determined to match the search request, among a plurality of candidate virtual objects; and in response to a first preset operation on a target virtual object among the at least one virtual object, displaying a dialog window by means of which a dialog with the target virtual object is executed, wherein the dialog window displays target reply content of the target virtual object with respect to the search request. In this way, the embodiments of the present disclosure can support matching, after a user initiates a search request, of a virtual object which is related to the search request, thereby helping to answer a question of the user.
The present disclosure relates to the technical field of computers, and relates to a virtual object construction method and device, and a computer readable storage medium. The virtual object construction method comprises: in response to a user placing a first construction unit in a virtual scene, displaying first prompt information for prompting candidate placement information of a second construction unit, wherein the candidate placement information is predicted on the basis of a placement position of the first construction unit; and in response to a confirmation operation of the user on the first prompt information, placing the second construction unit on the basis of the candidate placement information to construct a virtual object.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, for prompting interaction with the player, e.g. by displaying a game menu
Provided in the embodiments of the present disclosure are a method and apparatus for making a conversation with a virtual character, and an electronic device. The method for making a conversation with a virtual character comprises: displaying a first conversation interface, wherein the first conversation interface is used for making a conversation with a first character, the conversation being associated with a first interactive story, and the first character is a virtual character; in response to receiving a conversation suggestion request, displaying a candidate conversation suggestion, wherein the candidate conversation suggestion is generated on the basis of story information related to the first character and/or character information of the first character in the first interactive story, and is associated with story content of the first interactive story; determining a target message on the basis of a first preset operation of a user on the candidate conversation suggestion; and sending the target message to the first character. By means of the embodiments of the present disclosure, a candidate conversation suggestion related to a plot can be displayed to a user on the basis of a request of the user, thereby facilitating plot development.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or to display a laser sight in a shooting game, for prompting interaction with the player, e.g. by displaying a game menu
Provided in the embodiments of the present disclosure are a media data generation method and apparatus, an electronic device and a storage medium. The method comprises: displaying a candidate material area of an interaction page; in response to an interaction event for the candidate material area, displaying the candidate material area and a parameter setting area, the parameter setting area comprising parameter setting options; in response to a parameter setting event in the parameter setting area, obtaining parameter content corresponding to the parameter setting options, and displaying the parameter content at corresponding positions of the parameter setting options, the parameter content being used for generating media data; and, in response to a display event for a generation result area, switching the candidate material area into a folded form and displaying the parameter setting area and the generation result area, and displaying the media data in the generation result area. The embodiments of the present disclosure reduce the frequency of page switching in a media data generation process, thereby optimizing interaction modes for generating media data, and improving user experience.
The present invention provides a method for generating a video having a side overlay image, and a device. The method comprises: acquiring a first video; extracting from the first video a text for describing the content in the first video; generating a side overlay image on the basis of the text, wherein the text is used for determining the attribute and/or content of the side overlay image; and compositing the first video with the side overlay image to obtain a second video.
Provided in the embodiments of the present disclosure are a special-effect editing method and apparatus, and an electronic device, a storage medium and a program product. The special-effect editing method comprises: displaying a special-effect editing interface, wherein the special-effect editing interface comprises a special-effect editing control, the special-effect editing control is used for adding and/or deleting a special-effect editing item, and at least comprises a slot editing control, and the special-effect editing item comprises a slot editing item; and in response to a special-effect generation request, when the special-effect editing interface comprises the slot editing item, generating a special-effect template on the basis of a first image added to the slot editing item and special-effect-associated content which has been edited in the special-effect editing interface, and when the special-effect editing interface does not comprise the slot editing item, generating a target special effect on the basis of the special-effect-associated content which has been edited in the special-effect editing interface. The present application supports the use of the same special-effect editing interface to produce a target special effect and a special-effect template, thereby realizing diversified special-effect editing and improving the flexibility of special-effect editing.
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 11/60 - Editing figures and text; Combining figures or text
39.
Display screen or portion thereof with a graphical user interface
The present invention relates to the technical field of computers, and relates to an interactive multimedia content processing method and apparatus, a device, a medium and a product. The interactive multimedia content processing method of the present invention comprises: on the basis of content text of an original story, generating content text of one or more story branches; on the basis of the content text of the original story and the content text of the one or more story branches, generating images, wherein the images include images of characters and images of scenes; and on the basis of the content text of the original story, the content text of the one or more story branches and the images, generating interactive multimedia content.
G06F 16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image data and additional audio data
41.
METHOD AND APPARATUS FOR MIXING AUDIO, AND DEVICE, MEDIUM AND PROGRAM PRODUCT
A method and apparatus for mixing audio, and a device, a medium and a program product. The method (300) comprises: acquiring a target vocal for a first track (104) among a plurality of tracks (102) and target background music for a second track (106) among the plurality of tracks (102) (302). The method further comprises: determining a first group of sound features (110) for a first group of tracks (118) that are related to the vocal, and a second group of sound features (112) for a second group of tracks (120) that are related to the background music (304). The method further comprises: on the basis of the first group of sound features (110) and the second group of sound features (112), normalizing the target vocal and the target background music (306); and on the basis of the processed target vocal and the processed target background music, generating target mixed audio (116) (308).
G10H 1/00 - Details of electrophonic musical instruments
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
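A compact Python/NumPy sketch of the normalize-then-mix step described in the abstract above, using RMS loudness as a stand-in for the groups of sound features; the target level and the per-stem gains are illustrative assumptions.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a sample array."""
    return float(np.sqrt(np.mean(np.square(x), dtype=np.float64)))

def normalize_and_mix(vocal, bgm, target_rms=0.1, vocal_gain=1.0, bgm_gain=0.7):
    """Bring both stems to a common loudness, then sum them into the mixed audio."""
    vocal = vocal * (target_rms / max(rms(vocal), 1e-9))
    bgm = bgm * (target_rms / max(rms(bgm), 1e-9))
    n = min(len(vocal), len(bgm))
    return vocal_gain * vocal[:n] + bgm_gain * bgm[:n]
```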
42.
MULTIMEDIA AGGREGATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure relates to a multimedia aggregation method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a set of multimedia to be aggregated, the set of multimedia to be aggregated comprising multiple pieces of multimedia to be aggregated; acquiring correlations between the multiple pieces of multimedia to be aggregated, and on the basis of the correlations, dividing the set of multimedia to be aggregated into multiple subsets of multimedia to be aggregated; and performing clustering processing on multimedia to be aggregated in each of the subsets of multimedia to be aggregated to obtain an aggregation result of the multiple pieces of multimedia to be aggregated.
The present disclosure provides a video processing method and apparatus, and a related product. The method comprises: acquiring video data, determining image information of video frames in the video data in at least one image channel, and determining an image similarity between adjacent video frames in the video data (S102); on the basis of the image information and the image similarity, determining whether the video data comprises a video special effect (S104); and, if the video data comprises a video special effect, transcoding the video data according to a first transcoding mode, wherein the first transcoding mode is used for reducing the amount of video information lost during the video data transcoding process (S106).
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
44.
METHOD AND APPARATUS FOR INTERACTING WITH VIRTUAL OBJECT, DEVICE AND STORAGE MEDIUM
The embodiments of the present disclosure relate to a method and apparatus for interacting with a virtual object, a device and a storage medium. The method provided herein comprises: presenting access entries associated with a target user; and, on the basis of the selection of an access entry, presenting an interaction interface with a virtual object, the virtual object corresponding to the target user, and the current interaction mode of the current user and the virtual object in the interaction interface being determined from a plurality of preset interaction modes on the basis of at least one of the following: platform configuration information associated with the access entry, and the entry type of the access entry.
The present invention relates to the technical field of computers. Disclosed are a list page interaction method and apparatus, a computer device, and a storage medium. The method comprises: displaying a target list page, wherein description information of at least one unassociated target object is displayed in the target list page; in response to an association instruction for the unassociated target object, updating the unassociated target object as an associated target object, and adding, in the target list page, display of an interaction area related to the associated target object, wherein an interaction component is displayed in the interaction area; and in response to a trigger instruction for the interaction component, performing an interaction operation associated with the associated target object.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
A table commenting method, and a device. The method comprises: when a first operation instruction for a table is received, displaying a comment input area for a target region in the table (S101); receiving a selection instruction for a target cell in the comment input area, and a first comment input in the comment input area, the target cell comprising one or more cells in the target region (S102); and generating region comment content for the target region, the region comment content comprising the first comment for the target cell (S103). Comments can be inputted and displayed on the basis of regions, which makes it convenient for a user to uniformly manage comments. Moreover, coarse-grained second comments and fine-grained first comments can be implemented by means of the same comment input area.
Provided in the embodiments of the present disclosure are a media content generation method and apparatus, an electronic device, a storage medium and a program product. The method comprises: displaying at least one prompt information group, the prompt information group comprising at least one piece of candidate prompt information, and the candidate prompt information being used for describing content features of target media content; in response to an information selection operation for the at least one piece of candidate prompt information in the at least one prompt information group, determining first prompt information from among the candidate prompt information; and displaying the target media content generated on the basis of the target prompt information, the target prompt information comprising the first prompt information. By means of this technical solution, the embodiments of the present disclosure can reduce the difficulty and uncertainty of inputting prompt information, so as to improve the quality of the generated media content.
Provided in the embodiments of the present disclosure are an audio processing method and apparatus, a storage medium and an electronic device. The audio processing method comprises: acquiring an audio to be processed, and segmenting said audio to obtain a plurality of audio segments; and inputting the plurality of audio segments into a pre-trained audio classification model, so as to obtain global classification information of said audio and/or local classification information of the audio segments, the global classification information and the local classification information respectively being multi-label classification information. By means of the pre-trained audio classification model, the present disclosure performs classification processing with different granularities on an audio to be processed, so as to obtain global classification information of said audio and/or local classification information of each audio segment, thus satisfying requirements for classifying audio data with different granularities. Additionally, both the global classification information and the local classification information are multi-label classification information, which can label overlapping audio events within the audio data, thus improving the classification accuracy and recall rate.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination
G10L 15/04 - Segmentation; Word boundary detection
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
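Two small Python helpers illustrating the segmentation step of the audio processing method above and one plausible way per-segment multi-label outputs could be aggregated into global labels; the fixed segment length, the probability threshold and the max-pooling aggregation are assumptions for illustration (the disclosure's classification model may produce global labels directly).

```python
def segment_audio(samples, sample_rate, segment_seconds=3.0):
    """Cut a mono sample array into fixed-length segments for the classifier;
    the last, shorter remainder is kept as its own segment."""
    step = int(sample_rate * segment_seconds)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

def aggregate_global_labels(local_label_probs, threshold=0.5):
    """Derive multi-label global classification from per-segment label probabilities
    by max-pooling each label's probability across segments."""
    pooled = {}
    for probs in local_label_probs:
        for label, p in probs.items():
            pooled[label] = max(pooled.get(label, 0.0), p)
    return [label for label, p in pooled.items() if p >= threshold]
```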
According to the embodiments of the present disclosure, provided are a method and apparatus for subscribing to a media item, and a device and a storage medium. The method comprises: displaying a first media item, and a subscription control associated with the first media item; in response to receiving a user operation for the subscription control, determining a subscription theme associated with the first media item; and in response to determining that a predetermined condition corresponding to the subscription theme is met, displaying at least one second media item matching the subscription theme. In this way, users can subscribe to desired media items by means of a simpler operation, and the efficiency with which users browse media items is improved.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
50.
INFORMATION INTERACTION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
An information interaction method and apparatus, and an electronic device. The method is applied to a project management system and comprises: in response to a trigger operation for an associated work item information control in a first work item display interface, displaying an information selection interface of an associated work item (101); in the information selection interface of the associated work item, displaying main description information and at least one piece of field information of the associated work item (102); and in response to a selection operation for the at least one piece of field information, displaying, in the first work item display interface, information corresponding to the selected field information (103).
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
51.
GROUP MANAGEMENT METHOD AND SYSTEM, DEVICE, AND MEDIUM
The present application provides a group management method and system, a device, and a medium. The method comprises: in response to a preset operation, acquiring a group template corresponding to at least one first object, wherein the group template comprises content corresponding to at least one group configuration item; on the basis of the group template, creating at least one first group corresponding to the at least one first object, wherein the first group comprises the content corresponding to the at least one group configuration item; or, on the basis of the group template, performing a configuration operation on at least one second group corresponding to the at least one first object, wherein, after the configuration operation, the second group comprises the content corresponding to the at least one group configuration item.
A lens holder, a lens assembly, a camera module, a video see-through device, and a head-mounted display device. The lens holder comprises a first connection structure (11), an expansion structure (13) and a base (12), which are sequentially connected in the axial direction of the lens holder; the first connection structure (11) is used for mounting a lens (2); the base (12) is used for mounting an image sensor (3); the expansion structure (13) linearly expands in the axial direction of the lens holder when the temperature rises, so that the distance between the first connection structure (11) and the base (12) is increased.
Provided in the embodiments of the present disclosure are a method and apparatus for communication, and a device and a medium. In the method, a first component sends to a second component a creation instruction for creating, at the second component, a media engine instance for the first component, wherein the media engine instance is used for communicating with a server to acquire streaming media data for the first component; furthermore, the first component sends authentication information to the second component, wherein the authentication information is used for using the media engine instance to establish a connection with the server at the second component. In this way, the time consumption of a signaling exchange process can be effectively reduced.
Provided in the embodiments of the present disclosure are a method and apparatus for playing streaming media data, and a device and a medium. The method comprises: on the basis of the size of an audio buffer data block in an audio buffer corresponding to audio content in streaming media data, presentation timestamp (PTS) information of the audio buffer data block, and the size of each audio frame in an audio frame sequence acquired from the audio buffer, determining PTS information for each audio frame in the audio frame sequence; furthermore, on the basis of the PTS information for each audio frame in the audio frame sequence, playing video content and audio content in the streaming media data. In this way, the accuracy of audio-video synchronization can be improved.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
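The per-frame PTS computation described in the abstract above reduces to simple byte-offset arithmetic; the sketch below assumes PCM-style data where `bytes_per_ms` can be derived from the stream format (sample rate × channels × bytes per sample / 1000), which is an illustrative assumption.

```python
def audio_frame_pts_ms(buffer_pts_ms, buffer_size_bytes, frame_sizes_bytes, bytes_per_ms):
    """Assign a PTS to each audio frame taken from an audio buffer data block whose
    own PTS refers to the start of the block.

    bytes_per_ms is derived from the stream format, e.g.
    sample_rate * channels * bytes_per_sample / 1000.
    """
    pts_list, offset = [], 0
    for size in frame_sizes_bytes:
        if offset + size > buffer_size_bytes:
            raise ValueError("frame sequence exceeds the buffered data block")
        pts_list.append(buffer_pts_ms + offset / bytes_per_ms)
        offset += size
    return pts_list
```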
55.
IMAGE PROCESSING METHOD, DEVICE AND STORAGE MEDIUM
An image processing method, a device, and a storage medium are provided. The image processing method includes: dividing a set portion of a target object to obtain an initial mask map; acquiring a first depth map of a virtual object and a second depth map of a standard virtual model relating to the target object; adjusting the initial mask map based on the first depth map and the second depth map to obtain a target mask map; rendering the set portion based on the target mask map to obtain a set portion map; and rendering the virtual object to obtain a virtual object map; and superimposing the set portion map and the virtual object map to obtain a target image.
Embodiments of the present disclosure provide a method, an apparatus, a device, and a storage medium for video transcoding. The method includes: obtaining a first video to be transcoded; determining first video feature information corresponding to the first video; determining, based on the first video feature information and a predetermined decision tree regression model, a predicted play count of the first video at each of bit rate levels that are currently not transcoded; and determining a target bit rate level from the bit rate levels based on the predicted play count, and transcoding the first video based on the target bit rate level.
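The bit-rate selection step lends itself to a short sketch, shown below, using scikit-learn's DecisionTreeRegressor as a stand-in for the predetermined decision tree regression model. The feature layout, the toy training rows, and the candidate bit-rate ladder are illustrative assumptions.

```python
# Minimal sketch: predict a play count for each untranscoded bitrate level and pick the best one.
from sklearn.tree import DecisionTreeRegressor
import numpy as np

# Assume the model was fitted offline on historical (video features + bitrate level) -> play count data.
model = DecisionTreeRegressor().fit(
    np.array([[10.0, 720, 0], [10.0, 720, 1], [10.0, 720, 2]]),  # toy training rows
    np.array([100.0, 150.0, 120.0]),                             # toy play counts
)

def pick_bitrate(video_features, candidate_levels):
    rows = np.array([video_features + [lvl] for lvl in candidate_levels])
    predicted_plays = model.predict(rows)                # predicted play count per level
    return candidate_levels[int(np.argmax(predicted_plays))]

print(pick_bitrate([10.0, 720], [0, 1, 2]))              # level with the highest predicted play count
```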
H04N 19/40 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le transcodage vidéo, c.-à-d. le décodage partiel ou complet d’un flux d’entrée codé suivi par un ré-encodage du flux de sortie décodé
H04N 19/149 - Débit ou quantité de données codées à la sortie du codeur par estimation de la quantité de données codées au moyen d’un modèle, p. ex. un modèle mathématique ou un modèle statistique
57.
METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR CONTENT SHARING
The embodiments of the disclosure provide methods, apparatuses, devices and a storage medium for content sharing. The method includes: presenting a panel for sharing target content, the panel displaying a list of objects to which the target content can be shared; detecting a selection of an object in the list of objects based on an activated selection mode among a plurality of selection modes; in response to detecting a selection operation on at least one object in the list of objects, sharing the target content to the at least one object; and providing a control for supporting sending a message to the at least one object while maintaining the presentation of the panel. In this way, the efficiency and flexibility of content sharing can be improved, diversified demands in a sharing scene are met, and the user experience is improved.
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p. ex. courriel pour la prise en charge des services des réseaux sociaux
58.
METHOD AND APPARATUS FOR SEARCH RESULT PRESENTATION
The disclosure provides a method and apparatus for search result presentation. Herein, the method of search result presentation includes: in response to a trigger, sending a search request carrying a book keyword; and receiving a search result corresponding to the search request, and presenting the search result on a search result page, wherein the search result comprises book information of a first book indicated by the book keyword, and a target topic content associated with the book keyword, the book keyword corresponding to a plurality of topic contents, the target topic content being obtained by filtering the plurality of topic contents based on an attribute feature of a second book indicated by the book keyword.
Embodiments of the disclosure relate to a method, an apparatus, a device, and a storage medium for running an application. The method provided herein includes: receiving, from a terminal device, a first request to launch a client of a target application, where the client is deployed in a first application platform; sending a first message to a second application platform corresponding to the target application to instruct that the target application is run at the second application platform, where the second application platform has a container for running the target application; receiving, from the second application platform, running data of the target application; and sending the running data to the terminal device to enable the terminal device to draw a running interface of the target application based on the running data.
Embodiments of the present disclosure provide an audio processing method, an electronic device, and a storage medium. The method includes: obtaining an intermediate feature of prompt text as a first intermediate feature, where the first intermediate feature is obtained by pre-processing the prompt text based on a language model; inputting the first intermediate feature and acquired target audio into the language model, to output, as a second intermediate feature, an intermediate feature corresponding to the target audio, where the target audio corresponds to the prompt text, and the first intermediate feature and the second intermediate feature are both cached as a key value (KV); and inputting the first intermediate feature and the second intermediate feature into the language model to generate a processing result corresponding to the target audio.
The disclosure provides a method, apparatus and electronic device for information query. A specific implementation of the method includes: determining a first information set having an association relationship with a first session in response to an information query request triggered by a first user in the first session, wherein the first user is a session member of the first session, and the first information set includes information outside the first session; obtaining a query result from the first information set according to the information query request; and displaying the query result in the first session. This implementation narrows a data query range, increases data query efficiency and further improves user experience.
Provided is a method for placing a virtual object in a video. The method comprises: obtaining a three-dimensional (3D) point cloud corresponding to a video; for each image frame in the video, obtaining 3D points in the 3D point cloud having corresponding two-dimensional (2D) points in the image frame; obtaining a grid by means of triangulation based on the 3D points; determining a target position of the virtual object in the image frame according to a placement position of the virtual object in the video and the grid; and placing the virtual object at the target position in the image frame. Based on the foregoing method for placing a virtual object in a video, the present disclosure further provides an apparatus, an electronic device, a storage medium, and a program product for placing a virtual object in a video.
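One plausible reading of the grid construction and placement step is sketched below, assuming the grid is a Delaunay triangulation of the 2D projections of the visible 3D points and that the placement is snapped to the nearest vertex of the enclosing triangle; both choices go beyond what the abstract states.

```python
# Minimal sketch under the assumptions above: triangulate the visible points, then
# locate and snap the requested placement position.
import numpy as np
from scipy.spatial import Delaunay

def target_position(points_2d, placement_xy):
    tri = Delaunay(points_2d)                           # triangulated grid over the visible points
    simplex = tri.find_simplex(np.array([placement_xy]))[0]
    if simplex == -1:                                   # placement falls outside the mesh
        return None
    verts = points_2d[tri.simplices[simplex]]           # vertices of the enclosing triangle
    dists = np.linalg.norm(verts - np.array(placement_xy), axis=1)
    return verts[int(np.argmin(dists))]                 # snap to the closest vertex

pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
print(target_position(pts, (0.2, 0.1)))                 # -> [0. 0.]
```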
The present application provides a live stream video processing method, apparatus, electronic device, and storage medium. The method comprises: obtaining video stream data for a live stream; generating target object area data according to the video stream data; and adding the target object area data to the video stream data and sending the video stream data, so that bullet comments can be rendered and displayed outside an area occupied by a target object in a live streaming process.
H04N 21/4788 - Services additionnels, p. ex. affichage de l'identification d'un appelant téléphonique ou application d'achat communication avec d'autres utilisateurs, p. ex. discussion en ligne
64.
METHOD AND APPARATUS OF PLAYING MEDIA CONTENT, DEVICE, STORAGE MEDIUM AND PRODUCT
The embodiments of the disclosure provide a method and an apparatus of playing a media content, a device, a storage medium and a product. The method includes: playing a first media content in a first playing page, and displaying first associated information corresponding to a second media content associated with the first media content, wherein the first media content and the second media content are media contents of different types; and in response to a triggering operation of a user on the first associated information, playing the second media content associated with the first media content in a second playing page.
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect utilisant des icônes
65.
QUANTIZATION METHOD AND APPARATUS FOR SPEECH RECOGNITION MODEL, ELECTRONIC DEVICE, AND PRODUCT
Embodiments of the present disclosure relate to a quantization method and apparatus for a speech recognition model, an electronic device, and a product. The method comprises determining a weight matrix for a network layer of the speech recognition model, where the weight matrix includes a plurality of blocks divided into a plurality of groups. The method further comprises adjusting an order of the plurality of blocks in the weight matrix according to the plurality of groups and based on a plurality of block parameters of the plurality of blocks, where each block parameter indicates how much the corresponding block affects the speech recognition model. The method further comprises quantizing the plurality of blocks in the adjusted weight matrix according to the adjusted order. In addition, the method further comprises restoring the order of the plurality of blocks in the quantized weight matrix.
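A minimal sketch of the reorder-quantize-restore flow for one weight matrix is given below. It assumes the block parameter is the block's mean absolute weight, the quantizer is plain 8-bit affine quantization, and blocks are reordered only within their group; all three are illustrative choices, not the disclosed definitions.

```python
# Minimal sketch under the assumptions above; writing results back in place restores the layout.
import numpy as np

def quantize_blocks(weight, block_size, num_groups):
    blocks = weight.reshape(-1, block_size)              # split the matrix into equal-size blocks
    group_len = len(blocks) // num_groups
    out = np.empty_like(blocks)
    for g in range(num_groups):
        idx = np.arange(g * group_len, (g + 1) * group_len)
        importance = np.abs(blocks[idx]).mean(axis=1)    # block parameter for each block in the group
        order = idx[np.argsort(-importance)]             # adjusted order inside the group
        for src in order:                                # quantize blocks in that order
            b = blocks[src]
            scale = max(float(b.max() - b.min()) / 255.0, 1e-12)
            out[src] = np.round((b - b.min()) / scale) * scale + b.min()
    return out.reshape(weight.shape)                     # original block order is preserved

w = np.random.randn(4, 16).astype(np.float32)
w_q = quantize_blocks(w, block_size=8, num_groups=2)
```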
A method, apparatus, device, and medium for managing a workflow are provided. In one method, in response to receiving a creation request, a page for creating the workflow is presented. The workflow is used to define a plurality of sequential operations in a predetermined task. The page comprises: a first region for providing a plurality of nodes, and a second region for providing a content of the workflow. The plurality of nodes comprises a model node for calling a machine learning model. In response to receiving an interaction request for the page, the workflow is managed based on the interaction request. The model node allows the processing power of a machine learning model to be invoked in the workflow to complete the predetermined tasks of a digital assistant.
The present disclosure provides a video processing method and apparatus, a device and a storage medium, wherein the method comprises: in response to determining that a target video has been played for a preset duration, displaying a preset activity resource corresponding to the target video and a video generation control on a video playing page of the target video, wherein the target video belongs to a video information stream; and in response to a triggering operation on the video generation control, generating an activity video corresponding to the target video based on the preset activity resource.
Described herein are techniques for processing videos. The techniques comprise obtaining an editing operation sequence comprising a plurality of editing operations; displaying an element on an interface of editing a video, the element corresponding to one of the plurality of editing operations, and the element having a disabled state and an enabled state; switching the element from the disabled state to the enabled state; and performing the one of the plurality of editing operations on the video.
G11B 27/02 - Montage, p. ex. variation de l'ordre des signaux d'information enregistrés sur, ou reproduits à partir des supports d'enregistrement ou d'information
69.
Display screen or portion thereof with an animated graphical user interface
A system for customization of augmented reality (AR) effects is described. The operations performed by the system include determining whether a user media texture has been uploaded and assigning a user media texture index to the uploaded media texture asset. The indexed uploaded media texture asset is provided, and the AR effect on digital content is deployed.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06Q 50/00 - Technologies de l’information et de la communication [TIC] spécialement adaptées à la mise en œuvre des procédés d’affaires d’un secteur particulier d’activité économique, p. ex. aux services d’utilité publique ou au tourisme
G06T 19/20 - Édition d'images tridimensionnelles [3D], p. ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
71.
PROCEDURE PROCESSING METHOD, MEDIUM AND ELECTRONIC DEVICE
A procedure processing method, a medium, and an electronic device are provided. The method includes: determining at least two target templates; configuring, by using node configuration interfaces respectively corresponding to the target templates, target nodes respectively corresponding to the target templates; generating a target procedure based on the at least two target nodes; and executing the target procedure by using a procedure engine.
Embodiments of the present disclosure relate to a method and apparatus for generating fiction content, a device, and a storage medium. The method provided herein comprises: on the basis of a preset operation of a first user for a target fiction, acquiring image information associated with the first user; providing setting information associated with a target character to be created in the target fiction, wherein at least part of the setting information is determined on the basis of the image information; and providing, to the first user, first content associated with the target fiction, wherein the first content is generated on the basis of existing content of the target fiction and the setting information. In this way, according to the embodiments of the present disclosure, immersive social experience can be provided for users, thereby providing new reading experience for the users, and promoting continuous reading.
Provided are a method and apparatus for editing a media item, and a device and a medium. The method comprises: in response to receiving an editing request from a user for initiating an editing operation on a media item, determining whether there is a previous editing operation on the media item before the editing operation; in response to determining that there is the previous editing operation, determining whether information of a previous editing resource which is used by the previous editing operation is stored, wherein the previous editing resource comprises an editing resource of a first type, the editing resource of the first type being available to a user of the first type, and the editing resource of the first type being unavailable to a user of a second type; and in response to determining that the information of the previous editing resource is stored, initiating the editing operation. By using an exemplary implementation of the present disclosure, an abnormal state in which an editing resource type available to a user does not match a user type can be avoided, thereby improving the reliability and stability of an editing process.
Provided in the embodiments of the present disclosure are a data transmission method and apparatus, and a device and a storage medium. The data transmission method comprises: on the basis of the current first network state information of a first network path and the current second network state information of a second network path, determining whether a redundant-data cross-path transmission condition is currently met; if the redundant-data cross-path transmission condition is currently met, determining target redundant data corresponding to the current target media data to be transmitted; and transmitting the target media data by means of the first network path, and then transmitting the target redundant data by means of the second network path. By means of the technical solution in the embodiments of the present disclosure, the cross-path transmission of redundant data can be realized, such that the data transmission quality is improved with a low traffic consumption, thereby improving the user experience.
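A minimal sketch of one plausible cross-path transmission condition follows: redundancy is sent over the second path only when the first path's loss rate is high and the second path has spare bandwidth. The thresholds and field names are assumptions, not the disclosed criteria.

```python
# Minimal sketch of the redundant-data cross-path condition under the assumptions above.
from dataclasses import dataclass

@dataclass
class PathState:
    loss_rate: float        # fraction of packets recently lost on this path
    spare_kbps: float       # estimated unused bandwidth on this path

def should_send_redundancy(primary: PathState, secondary: PathState,
                           loss_threshold=0.05, min_spare_kbps=500.0) -> bool:
    return primary.loss_rate >= loss_threshold and secondary.spare_kbps >= min_spare_kbps

if should_send_redundancy(PathState(0.08, 2000.0), PathState(0.01, 1500.0)):
    pass  # transmit the media data on path 1, then the redundant data on path 2
```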
Embodiments of the present disclosure relate to a video editing method and apparatus, a device, and a storage medium. The method provided herein comprises: in response to obtaining a group of video materials inputted by a user, segmenting the group of video materials into a plurality of video clips on the basis of semantic information of the group of video materials; by classifying the plurality of video clips into at least one group and associating the at least one group with a slot in a video template, obtaining video content corresponding to the slot, wherein the group comprises at least one video clip among the plurality of video clips; determining rhythm information of first audio content, wherein the rhythm information indicates a group of time points in the first audio content; adjusting the time length of at least one piece of video content among a plurality of pieces of video content on the basis of the rhythm information, so that the adjusted at least one piece of video content is matched with the rhythm information; and generating a target video on the basis of the first audio content and the adjusted video content.
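The duration-adjustment step can be illustrated with the short sketch below, which snaps each clip's end time to the nearest rhythm point (the abstract's "group of time points"). The snapping rule is an illustrative assumption.

```python
# Minimal sketch: adjust clip durations so that each cut lands on the nearest rhythm point.
def snap_clips_to_beats(clip_durations, beat_times):
    adjusted, start = [], 0.0
    for duration in clip_durations:
        target_end = start + duration
        end = min(beat_times, key=lambda t: abs(t - target_end))   # nearest rhythm point
        adjusted.append(max(end - start, 0.0))
        start = end
    return adjusted

print(snap_clips_to_beats([2.3, 3.1], [0.0, 2.0, 4.0, 6.0]))       # -> [2.0, 4.0]
```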
H04N 21/44 - Traitement de flux élémentaires vidéo, p. ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène du flux vidéo codé
H04N 21/845 - Structuration du contenu, p. ex. décomposition du contenu en segments temporels
76.
CONTENT DISPLAY METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND PRODUCT
Embodiments of the present invention provide a content display method and apparatus, a device, a computer-readable storage medium, and a product. The method comprises: displaying, on a preset display page, at least one piece of extended content associated with a target theme, wherein each piece of extended content comprises an associated extended theme and at least part of the text content associated with that extended theme; in response to a triggering operation of a user on any piece of extended content on the preset display page, displaying a content display page associated with that piece of extended content; and displaying, on the content display page, multiple associated extended themes and the text contents corresponding to the multiple associated extended themes, wherein the data volumes of the text contents displayed on the content display page are greater than those of the text contents displayed on the preset display page.
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect
77.
METHOD AND APPARATUS FOR IDENTIFYING TRACKING DEVICE, AND DEVICE AND MEDIUM
Provided in the present application are a method and apparatus for identifying a tracking device, and a device and a medium. The method comprises: determining a plurality of first relative positions between a plurality of tracking devices and a current device; and identifying the tracking devices on the basis of the plurality of first relative positions. By means of the method, the usage position of a tracking device on a tracked target can be automatically identified on the basis of the relative position between the current device and the tracking device, such that it is unnecessary for a user to distinguish the usage position of the tracking device before using it, thereby facilitating use and providing a better user experience.
Embodiments of the present invention relate to a method and apparatus for generating a video template, and a device and a storage medium. The method provided herein comprises: acquiring first input material and configuration information; on the basis of the first input material and the configuration information, determining first generation material for generating a video, wherein the first generation material is generated on the basis of the first input material and the configuration information; on the basis of the first input material and the first generation material, generating a first video editing result, wherein the first video editing result is used for displaying a video effect obtained by processing the first input material and the first generation material according to a specified editing operation; and on the basis of the first video editing result, generating a video template, wherein the video template at least indicates the configuration information and the editing operation. In this way, according to the embodiments of the present invention, a video template can be published using the inputted material and the generated material and on the basis of the creative operation of a user, thereby helping to share the user's creativity and enhancing the interaction experience.
Embodiments of the present disclosure provide an application function configuration method and apparatus, an electronic device, and a storage medium. An identification plug-in corresponding to a target application is downloaded from an application publishing platform when the target application triggers installation or update, where the identification plug-in is stored in application publishing data of the target application. Client region information is obtained by parsing the identification plug-in, where the client region information is used to indicate a registration region corresponding to a terminal device running the target application. A corresponding target function module is loaded based on the client region information.
The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: inputting an original image including a deformation defect into an image restoration model to obtain first pixel value distribution information, second pixel value distribution information, and third pixel value distribution information, where the first pixel value distribution information describes the pixel value distribution of a first output image in each color channel of a preset color space, the second pixel value distribution information describes the pixel value distribution of the first output image in a transparency channel, and the third pixel value distribution information describes the pixel value distribution of the first output image in each coordinate channel of a preset deformation field; and fusing the first, second, and third pixel value distribution information to obtain a processed image.
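One plausible fusion of the three outputs is sketched below, assuming they amount to an RGB prediction, an alpha (transparency) map, and a dense deformation field, and that fusion means alpha-compositing the prediction over the original image warped by the field. This is an interpretation, not the disclosed formula.

```python
# Minimal sketch under the assumptions above: warp the original by the deformation
# field, then blend it with the predicted image using the alpha map.
import numpy as np
from scipy.ndimage import map_coordinates

def fuse(original_rgb, predicted_rgb, alpha, flow):
    """original_rgb, predicted_rgb: HxWx3; alpha: HxW in [0,1]; flow: HxWx2 (dy, dx)."""
    h, w, _ = original_rgb.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = [yy + flow[..., 0], xx + flow[..., 1]]          # where to sample the original image
    warped = np.stack([map_coordinates(original_rgb[..., c], coords, order=1)
                       for c in range(3)], axis=-1)
    a = alpha[..., None]
    return a * predicted_rgb + (1.0 - a) * warped            # fused (processed) image

# Toy usage with a zero deformation field.
h, w = 4, 4
out = fuse(np.random.rand(h, w, 3), np.random.rand(h, w, 3),
           np.random.rand(h, w), np.zeros((h, w, 2)))
```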
The present disclosure provides a control method and apparatus of an electronic device, a terminal, and a storage medium. The electronic device includes a sensor configured to determine whether the electronic device is in a wearing state, a camera configured to perform at least one of face tracking or eye tracking, and an infrared lamp configured to expose the camera. The control method of an electronic device includes: determining whether the camera is in an on state; based on determining that the camera is in an off state, reading data of the sensor according to a preset period to determine whether the electronic device is in the wearing state; and based on determining that the camera is in the on state, triggering generation and reading of the data of the sensor using an interrupt signal.
H04N 23/60 - Commande des caméras ou des modules de caméras
H04N 23/20 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande pour générer des signaux d'image uniquement à partir d'un rayonnement infrarouge
H04N 23/56 - Caméras ou modules de caméras comprenant des capteurs d'images électroniquesLeur commande munis de moyens d'éclairage
82.
INFORMATION PROCESSING LINK EVALUATION METHOD AND APPARATUS, AND DEVICE, MEDIUM AND PROGRAM PRODUCT
According to the embodiments of the present disclosure, provided are an information processing link evaluation method and apparatus, and a device, a medium and a program product. The method comprises: receiving a data query request with respect to a first processing node among one or more processing nodes comprised in an information processing link, wherein the one or more processing nodes are each configured to use a machine learning model to execute information processing; on the basis of a query result corresponding to the data query request, generating a data resource with respect to the first processing node, wherein the data resource comprises one or more data entries, and each data entry comprises an input of the first processing node; receiving model configuration information with respect to at least one processing node to be evaluated among the one or more processing nodes, wherein the at least one processing node at least comprises the first processing node; and generating a data object on the basis of the data resource and the model configuration information, wherein the data object is used for constructing a first evaluation case with respect to the at least one processing node. Therefore, case evaluation for a series link can be quickly and conveniently implemented.
A method and apparatus for creating and using a patch, a device, and a storage medium. The method comprises: in response to a patch creation request, presenting a patch creation page, the patch creation page displaying a reference text and a recording control (710); in response to detecting a triggering operation on the recording control, receiving audio inputted by a user (720); creating a target patch for the user on the basis of the received audio and the reference text (730); presenting a patch confirmation page, the patch confirmation page at least comprising sample audio generated on the basis of the target patch (740); and in response to receiving a confirmation for the target patch on the patch confirmation page, storing the target patch for the user for use in audio (750). In this way, the user is able to conveniently and quickly create his or her own patches to generate personalized audio.
A system for customization of augmented reality (AR) effects is described. The operations performed by the system include determining whether a user media texture has been uploaded and assigning a user media texture index to the uploaded media texture asset. The indexed uploaded media texture asset is provided, and the AR effect on digital content is deployed.
Embodiments of the present disclosure provide a special effect processing method and apparatus, an electronic device, and a storage medium. The special effect processing method comprises: displaying a livestreaming page, the livestreaming page comprising at least two livestreaming display areas, and each livestreaming display area being used to display a livestreaming video stream corresponding to one livestreaming account; and, in response to a special effect display triggering operation for a target special effect, displaying a special effect result corresponding to the target special effect in each of the livestreaming display areas corresponding to at least two livestreaming accounts. The technical solutions of the embodiments of the present disclosure achieve the effect of displaying a special effect result in multiple livestreaming display areas of the same livestreaming page, i.e., a special effect interaction effect between multiple livestreaming accounts is presented, thereby enriching special effect interaction modes and improving the special effect interaction experience.
A computing device for displaying a 3D virtual object for an augmented reality (AR) scene is described. The computing device has a processor and a non-transitory computer-readable memory, wherein the processor is configured to carry out instructions from the memory that configure the computing device to: determine a polygon mesh representation of a target object within the AR scene, wherein the polygon mesh representation models a physical appearance of the target object; identify a target mesh surface of the polygon mesh representation onto which the 3D virtual object should be anchored; monitor the target object and update the polygon mesh representation according to changes in the physical appearance of the target object; and display the 3D virtual object within the AR scene according to a position of the target mesh surface on the updated polygon mesh representation, wherein the target mesh surface moves along with the target object.
The present disclosure relates to the technical field of hard disks. Disclosed are a storage block replacement method and apparatus based on a solid state drive. The method comprises: acquiring a faulty storage block from a super block in a solid state drive, and determining a target plane where the faulty storage block is located and a first erasure count corresponding to the faulty storage block; determining an erasure count range by using the first erasure count; determining a target plane to which a target storage block belongs, and determining, from a preset replacement pool, candidate storage blocks located on the target plane; and searching the candidate storage blocks for the target storage block that satisfies the erasure count range, and replacing the faulty storage block with the target storage block.
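The candidate search can be sketched as below, assuming the erasure count range is a symmetric window around the faulty block's count and that the replacement pool holds simple (plane, erase count) records; the field names and window size are illustrative assumptions.

```python
# Minimal sketch: find a spare block on the same plane whose erase count falls in the range.
def find_replacement(faulty_plane, faulty_erase_count, replacement_pool, window=100):
    low, high = faulty_erase_count - window, faulty_erase_count + window
    for block in replacement_pool:                        # candidate storage blocks in the pool
        if block["plane"] == faulty_plane and low <= block["erase_count"] <= high:
            return block                                  # first suitable block on the target plane
    return None                                           # no suitable spare block found

pool = [{"id": 7, "plane": 0, "erase_count": 950},
        {"id": 9, "plane": 1, "erase_count": 1010}]
print(find_replacement(faulty_plane=1, faulty_erase_count=1000, replacement_pool=pool))
```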
The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an original image, the original image comprising a subject object; obtaining a background layer and a subject scanning layer on the basis of the original image, wherein the background layer comprises a background area, the subject scanning layer comprises a scanned object, the scanned object moves in a preset direction over time, and during the movement of the scanned object, the subject object is progressively revealed from non-existence to complete visibility; and fusing the background layer and the subject scanning layer to obtain a processed image.
The embodiments of the present disclosure provide an image processing method, apparatus, device and a storage medium. The method includes: obtaining a target object detected in a current key frame and a rendered virtual object corresponding to the current key frame, wherein the key frame is a frame which triggers object detection; determining, based on the detected target object and a vision space queue, a virtual object to be newly added; determining, based on the rendered virtual object and the vision space queue, a virtual object to be deleted; and updating, based on the virtual object to be newly added and the virtual object to be deleted, a virtual object corresponding to the current key frame.
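Read as set operations against the vision space queue of currently visible objects, the update step might look like the following sketch; treating objects as hashable IDs is an assumption made purely for illustration.

```python
# Minimal sketch: derive the objects to add and delete from two set differences,
# then apply them to obtain the virtual objects for the current key frame.
def update_virtual_objects(detected_ids, rendered_ids, vision_space_queue):
    visible = set(vision_space_queue)
    to_add = (set(detected_ids) & visible) - set(rendered_ids)    # detected but not yet rendered
    to_delete = set(rendered_ids) - visible                       # rendered but no longer in view
    return (set(rendered_ids) | to_add) - to_delete               # updated objects for this key frame

print(update_virtual_objects({"cat"}, {"dog"}, ["cat"]))          # -> {'cat'}
```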
The present disclosure provides a program generation method and related device. The method includes: acquiring a target component, where the target component includes a first-type component based on a first program platform and a second-type component based on a second program platform; generating a first-type file based on the first-type component, and generating a second-type file based on the second-type component; and fusing the first-type file and the second-type file to generate a target program.
The embodiments of the disclosure relate to a method, apparatus, device, and storage medium for media item input. The method provided here includes: presenting a plurality of media items based on input information of a user, the plurality of media items being determined based on the input information; adding a first media item to a media editing window in response to a selection of the first media item among the plurality of media items; and in response to at least one predetermined operation for the first media item, replacing the first media item in the media editing window with a second media item of the plurality of media items. In this way, the embodiments of the disclosure may efficiently switch between different media items for editing, thereby improving the efficiency of media editing.
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
92.
CODE RETRIEVAL METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
The embodiments of the present disclosure relate to a code retrieval method and apparatus, and a device and a storage medium. The method provided in the present disclosure comprises: configuring a first retrieval tool associated with an agent; in response to the agent using the first retrieval tool to initiate a first retrieval request, on the basis of structured information of a target code, determining from the target code at least one entity that matches the first retrieval request, wherein the structured information indicates a plurality of entities in the target code and association relationships among the plurality of entities; and on the basis of the at least one entity, providing to the agent a first retrieval result addressing the first retrieval request. In this way, the embodiments of the present disclosure can improve the efficiency of code retrieval.
An audio processing method and apparatus, and a device and a storage medium. The method comprises: extracting, from first media content, background audio content and text audio content corresponding to text content (210); on the basis of a request for replacing first text in the text content with second text, using timbre information associated with a first audio segment in the text audio content to generate a second audio segment corresponding to the second text, wherein the first audio segment corresponds to the first text (220); on the basis of the second audio segment, adjusting a third audio segment in the background audio content which corresponds to the first text (230); and on the basis of the first media content, the second audio segment and the adjusted third audio segment, generating second media content (240). In this way, the method can support editing media content by means of modifying text corresponding to the media content, and can make the modified audio content more real, thereby improving the quality of the edited media content.
H04N 21/439 - Traitement de flux audio élémentaires
H04N 21/472 - Interface pour utilisateurs finaux pour la requête de contenu, de données additionnelles ou de servicesInterface pour utilisateurs finaux pour l'interaction avec le contenu, p. ex. pour la réservation de contenu ou la mise en place de rappels, pour la requête de notification d'événement ou pour la transformation de contenus affichés
G10L 13/033 - Édition de voix, p. ex. transformation de la voix du synthétiseur
94.
COLLECTION PROCESSING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Provided are a collection processing method and apparatus, and a device and a storage medium. The method comprises: firstly, displaying on a first page a target media resource corresponding to a target user, wherein the target media resource is a multimedia resource which has been in a collected state (S101); and in response to a collection grouping trigger operation acting on the first page, on the basis of the target media resource corresponding to the target user, displaying at least one collection group on the first page, wherein the collection group comprises at least one target media resource, and the collection group is obtained by means of aggregating the target media resources on the basis of a content analysis result after content analysis is performed on the target media resources by using a content analysis model (S102).
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p. ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
95.
MEDIA CONTENT PROCESSING METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM
Provided in the embodiments of the present disclosure are a media content processing method and apparatus, a device and a storage medium. The method comprises: acquiring special effect component information corresponding to one or more special effects, wherein the one or more special effects are to be applied within a target time of media content, and the special effect component information of each special effect indicates an association relationship between components used by the special effect; on the basis of the special effect component information corresponding to the one or more special effects, generating merged component information related to a group of target components to be applied within the target time, the merged component information at least indicating an association relationship between the group of target components; and, on the basis of the merged component information, executing preprocessing of one or more target components among the group of target components. Therefore, the smoothness of media content display and the user experience are improved.
The embodiments of the present disclosure provide a page display method and apparatus, and an electronic device, a storage medium and a program product. The page display method comprises: receiving a page display operation of a current user for a target user; and in response to the page display operation, using a first display style to display a preset page of the target user, wherein the first display style is associated with user content of the target user in the preset page. By using the above technical solution, the embodiments of the present disclosure can enrich display styles of the preset page.
The present application discloses a method and apparatus for determining an expression coefficient, a device, a medium, and a product. The method comprises: first acquiring a facial image and a reference image corresponding to the facial image; then, on the basis of the facial image and the reference image, determining a predicted expression coefficient corresponding to the facial image, such that the predicted expression coefficient can indicate an expression state presented by the facial image.
The present application discloses a training data construction method and an expression coefficient extraction method. The training data construction method comprises: firstly, obtaining an expression coefficient acquisition sequence and at least two image acquisition sequences; and then constructing training data on the basis of the expression coefficient acquisition sequence and the at least two image acquisition sequences. In this way, model updating processing can be carried out subsequently on the basis of the training data, and the updated model is used to perform expression coefficient extraction processing.
G06V 10/774 - Génération d'ensembles de motifs de formationTraitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiquesDispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]Séparation aveugle de source méthodes de Bootstrap, p. ex. "bagging” ou “boosting”
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
99.
MULTIMEDIA CONTENT PUBLISHING METHOD, RELATED DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
The present disclosure relates to a multimedia content publishing method, a related device and a computer-readable storage medium, and relates to the technical field of multimedia. The multimedia content publishing method comprises: during the process of publishing first multimedia content and second multimedia content by means of an application, in response to the application being switched to a background and a desktop being displayed, displaying a first control on the desktop; in the first control, displaying publishing progress information of the first multimedia content; and, in response to the publishing progress information of the first multimedia content satisfying a specified condition, displaying in the first control publishing progress information of the second multimedia content.
The embodiments of the present disclosure relate to an interaction method and apparatus, a device and a storage medium. The method provided herein comprises: displaying a live streaming interface associated with a live streaming interaction event, the live streaming interaction event being associated with a group of participants; and, on the basis of a configuration operation for a plurality of participants in the group of participants, displaying in a first layout a group of pictures corresponding to the group of participants in the live streaming interface, so that the display priority of a plurality of pictures corresponding to the plurality of participants is higher than that of other pictures in the group of pictures. The embodiments of the present disclosure can effectively highlight a plurality of participants in a live streaming room, thus helping to improve the engagement and interaction experience of users during live streaming interaction.