Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal (e.g., an audio signal graph) relative to time and a graph of the signal relative to beats (e.g., a signal graph). The signal graph is invariant to tempo, which allows for tempo invariant modification of audio parameters of the music content in addition to tempo variant modifications based on the audio signal graph.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G05B 15/02 - Systems controlled by a computer electric
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G06F 21/16 - Program or content traceability, e.g. by watermarking
G10H 1/00 - Details of electrophonic musical instruments
G10H 1/06 - Circuits for establishing the harmonic content of tones
G10L 21/12 - Transforming into visible information by displaying time domain information
G10L 25/06 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being correlation coefficients
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
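As an illustration of the time-graph versus beat-graph distinction in the abstract above: a parameter curve keyed to beats stays aligned to the music when tempo changes, while a curve keyed to seconds does not. The function names and values below are illustrative assumptions, not the disclosed implementation:

```python
import math

# Hypothetical sketch of the tempo-invariance idea: positions expressed
# in beats map to different clock times depending on tempo, so automation
# stored against the beat-domain graph survives tempo changes.

def seconds_to_beats(t_seconds, bpm):
    """Map a time position (seconds) to a beat position at a given tempo."""
    return t_seconds * bpm / 60.0

def beats_to_seconds(beats, bpm):
    """Map a beat position back to clock time (seconds) at a given tempo."""
    return beats * 60.0 / bpm

# A gain automation point placed at beat 8 lands at different clock times
# at different tempos, i.e. the beat-domain graph is tempo invariant.
t_at_120 = beats_to_seconds(8, 120)  # 4.0 seconds
t_at_90 = beats_to_seconds(8, 90)    # ~5.33 seconds
```

A time-keyed modification would instead operate directly on `t_seconds`, which is what the abstract calls a tempo-variant modification based on the audio signal graph.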
Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining of loops are performed based on the set of composition rules and attributes of loops in the set of loops.
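The rule-driven selection and layering described above can be sketched as predicates over a candidate loop and the loops already playing. All names, attributes, and rules here are assumptions for illustration, not the patented composition rules:

```python
import random

# Illustrative sketch of rule-based loop combination: each composition
# rule (as might be mined from existing music) is a predicate over a
# candidate loop and the currently active (overlapping) loops.

def compatible(candidate, active, rules):
    """A candidate loop may be layered only if every rule accepts it."""
    return all(rule(candidate, active) for rule in rules)

def build_mix(loops, rules, n_layers=3, seed=0):
    """Greedily pick up to n_layers loops that satisfy all rules."""
    rng = random.Random(seed)
    active = []
    for loop in rng.sample(loops, len(loops)):  # shuffled candidate order
        if len(active) >= n_layers:
            break
        if compatible(loop, active, rules):
            active.append(loop)
    return active

# Example rules: at most one drum loop at a time, and all layered loops
# must share a key. (Hypothetical stand-ins for learned composition rules.)
rules = [
    lambda c, a: not (c["kind"] == "drums" and any(l["kind"] == "drums" for l in a)),
    lambda c, a: all(l["key"] == c["key"] for l in a),
]
loops = [
    {"name": "beat1", "kind": "drums", "key": "Am"},
    {"name": "beat2", "kind": "drums", "key": "Am"},
    {"name": "bass1", "kind": "bass", "key": "Am"},
    {"name": "pad1", "kind": "pad", "key": "C"},
]
mix = build_mix(loops, rules)
```

In the disclosed system the rules are generated by analyzing loop combinations in existing music rather than written by hand; the sketch only shows how such rules could gate selection and layering.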
Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. A music generation system includes a music generation subsystem and a music classification subsystem. The music generation subsystem may generate output music content according to music parameters that define policy for generating music. The classification subsystem may be used to classify whether music is generated by the music generation subsystem or is professionally produced music content. The music generation subsystem may implement an algorithm that is reinforced by prediction output from the music classification subsystem. Reinforcement may include tuning the music parameters to generate more human-like music content.
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
4.
Techniques for generating musical plan based on both explicit user parameter adjustments and automated parameter adjustments based on conversational interface
Disclosed techniques relate to user control of generative music. In some embodiments, a computing system generates a musical plan based on both conversational inputs (e.g., using a large-language model (LLM)) and non-conversational inputs (e.g., via a traditional user interface) to a hybrid interface. The computing system may generate an initial version of the musical plan based on the LLM context and update the context and plan based on various types of user input via the hybrid interface. Disclosed techniques may advantageously allow guided user control over generative music systems.
Techniques are disclosed that pertain to training a machine learning model to generate audio data similar to a music generator program. A computer system, executing a rules-based music generator program, selects and combines multiple musical expressions to generate audio data. The computer system trains a machine learning model to select and combine musical expressions to generate music compositions. The machine learning model receives generator information from the generator program that indicates expression selection decisions used to generate the audio data, mixing decisions used to generate the audio data, and first audio information output based on the generator program's expression selection decisions and the mixing decisions. The computer system compares the generator information to expression selection decisions, mixing decisions, and second audio information generated by the machine learning model based on the machine learning model's expression selection decisions and mixing decisions. The computer system updates the machine learning model based on the comparing.
Techniques are disclosed that pertain to generating output music content based on musical embeddings. A computer system generates output music content that includes multiple overlapping musical expressions in time. The computer system receives user feedback at a point in time while the output music content is being played. Based on the user feedback and based on characteristics of the output music content associated with the point in time, the computer system determines one or more expression embeddings generated based on expressions selected for inclusion in the output music content and one or more composition embeddings generated based on combined expressions in the output music content. The computer system generates additional output music content based on the expression and composition embeddings.
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
Techniques are disclosed relating to adjusting, via a user interface, parameters (e.g., the gain) of musical phrases in generative music content. A computing system may select a set of musical phrases to include in generative music content. The computing system may determine respective gain values of multiple selected musical phrases. The computing system may mix the selected musical phrases based on the determined gain values to generate output music content. The computing system may cause display of an interface that visually indicates the selected musical phrases and their determined gain values relative to other selected musical phrases. The computing system may receive user gain input that indicates to adjust a gain value of one of the selected musical phrases. The computing system may adjust the mix of the selected musical phrases based on the user input.
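The gain-based mixing step described in the abstract above reduces to scaling each selected phrase by its gain and summing per sample. This is a minimal sketch with assumed names, not the disclosed mixer:

```python
# Minimal gain-mixing sketch: each selected musical phrase is a list of
# samples; the output sums the phrases after scaling by per-phrase gain.
# A user gain adjustment simply changes one gain before the next render.

def mix_phrases(phrases, gains):
    """Sum equal-length sample lists, each scaled by its gain value."""
    length = len(phrases[0])
    return [
        sum(g * p[i] for p, g in zip(phrases, gains))
        for i in range(length)
    ]

phrases = [[1.0, 0.5, -0.5], [0.2, 0.2, 0.2]]
gains = [1.0, 0.5]
out = mix_phrases(phrases, gains)    # ~[1.1, 0.6, -0.4]

gains[1] = 0.0                       # user turns the second phrase down
out2 = mix_phrases(phrases, gains)   # second phrase no longer contributes
```

A real interface would display the relative gains and re-render on each adjustment; the arithmetic above is the underlying operation.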
Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G10H 1/00 - Details of electrophonic musical instruments
G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
Various methods for representing audio suitable for use in variational audio encoding are disclosed. A method comprises maintaining, by a computing system, state information for multiple resonator models with different resonant frequencies. The method further comprises iteratively performing, by the computing system, a number of different operations for multiple respective samples in a set of audio samples in the time domain. These operations include updating the state information for the multiple resonator models based on the sample amplitude. The operations also include determining respective resonator amplitudes and phases for the updated multiple resonator models and storing respective resonator amplitude and change-in-phase information for the sample.
G10H 7/04 - Instruments in which the tones are synthesised from a data store, e.g. computer organs in which amplitudes at successive sample points of a tone waveform are stored in one or more memories in which amplitudes are read at varying rates, e.g. according to pitch
G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
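The per-sample resonator loop outlined in the abstract above can be sketched with a bank of damped complex oscillators: each sample rotates and damps every resonator's state, injects the sample amplitude, and the amplitude and change-in-phase are recorded. The constants and structure here are illustrative assumptions, not the disclosed encoding:

```python
import cmath
import math

# Hedged sketch of a resonator-bank audio representation: one complex
# state per resonant frequency, updated once per time-domain sample.

def analyze(samples, freqs, sample_rate=44100, decay=0.995):
    states = [0j] * len(freqs)        # state information per resonator
    prev_phase = [0.0] * len(freqs)
    frames = []
    for x in samples:
        frame = []
        for k, f in enumerate(freqs):
            # rotate the state by the resonant frequency, damp it,
            # then drive it with the current sample amplitude
            rot = cmath.exp(2j * math.pi * f / sample_rate)
            states[k] = states[k] * rot * decay + x
            amp = abs(states[k])
            phase = cmath.phase(states[k])
            dphase = phase - prev_phase[k]  # change in phase (not unwrapped)
            prev_phase[k] = phase
            frame.append((amp, dphase))
        frames.append(frame)
    return frames

# A resonator tuned to the input frequency accumulates coherently, so its
# stored amplitude grows larger than those of the detuned resonators.
samples = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(2000)]
frames = analyze(samples, [220.0, 440.0, 880.0])
```

Each entry of `frames` holds the (amplitude, change-in-phase) pairs stored for one sample, which is the per-sample record the method describes.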
17.
Music generator generation of continuous personalized music
Techniques are disclosed relating to automatically generating new music content. In some embodiments, a computing system receives user input specifying a user-defined music control element. The computing system may train a machine learning model to change both composition and performance parameters based on user adjustments to the user-defined music control element. In embodiments in which composition and performance subsystems are on different devices, one device may transmit configuration information to another device, where the configuration information specifies how to adjust parameters based on user input to the user-defined music control element. Disclosed techniques may facilitate centralized learning for human-like music production while allowing individualized customization for individual users. Further, disclosed techniques may allow artists to define their own abstract music controls and make those controls available to end-users.
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
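One way to picture the user-defined control element above is as a single abstract knob mapped to several composition and performance parameters through learned sensitivities. The class, names, and weights below are assumptions for illustration; in the disclosed system the mapping would be trained from user adjustments rather than fixed:

```python
# Illustrative sketch: a user-defined control whose learned weights say
# how strongly each underlying music parameter responds to the control.

class UserControl:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # parameter name -> learned sensitivity

    def apply(self, value, params):
        """Return params adjusted by the control value in [0, 1]."""
        return {k: v + self.weights.get(k, 0.0) * value
                for k, v in params.items()}

# An artist-defined "intensity" control nudging tempo and layer count
# while leaving unrelated parameters untouched.
intensity = UserControl("intensity", {"tempo_bpm": 40.0, "n_layers": 3.0})
base = {"tempo_bpm": 100.0, "n_layers": 3.0, "reverb": 0.2}
adjusted = intensity.apply(0.5, base)  # tempo 120.0, n_layers 4.5
```

The configuration information mentioned in the abstract would correspond, in this sketch, to shipping the `weights` dictionary between devices.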
Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. Techniques are also disclosed relating to implementing audio techniques for real-time audio generation. Additional techniques are disclosed relating to implementing user-created controls for modifying music content. Yet more techniques are disclosed relating to tracking contributions to composed music content.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. A computer system generates image representations of audio files. The image representations may be generated, for example, based on data in the audio files and MIDI representations of the audio files. Audio files for combination may then be selected based on analysis of the image representations. For example, image-based machine learning algorithms may be implemented to assess the image representations and select music for combining.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G05B 15/02 - Systems controlled by a computer electric
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G06F 21/16 - Program or content traceability, e.g. by watermarking
G10H 1/00 - Details of electrophonic musical instruments
G10H 1/06 - Circuits for establishing the harmonic content of tones
G10L 21/12 - Transforming into visible information by displaying time domain information
G10L 25/06 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being correlation coefficients
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
22.
Listener-defined controls for music content generation
Techniques are disclosed relating to implementing user-created controls to modify music content. A music generator system may be configured to automatically generate output music content by selecting and combining audio tracks based on various parameters. Users may create their own control elements that the music generator system may train (e.g., using AI techniques) to generate output music content according to a user's intended functionality of a user-created control element.
G05B 15/02 - Systems controlled by a computer electric
G10L 21/12 - Transforming into visible information by displaying time domain information
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G06F 21/16 - Program or content traceability, e.g. by watermarking
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G10L 25/06 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being correlation coefficients
G10H 1/00 - Details of electrophonic musical instruments
G10H 1/06 - Circuits for establishing the harmonic content of tones
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
23.
Block-Chain Ledger Based Tracking of Generated Music Content
Techniques are disclosed relating to tracking contributions to composed music content. In some embodiments, a computer system determines playback data for a music content mix, where the playback data indicates characteristics of playback of the music content mix and the music content mix includes a determined combination of multiple audio tracks. In some embodiments, the system records, in an electronic block-chain ledger data structure, information specifying individual playback data for one or more of the multiple audio tracks in the music content mix. The information specifying individual playback data for an individual audio track may include usage data for the individual audio track and signature information associated with the individual audio track.
G06F 21/16 - Program or content traceability, e.g. by watermarking
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
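The block-chain ledger recording described in the abstract above can be sketched as a hash-chained sequence of records, each holding per-track playback data and the hash of the previous block, so recorded contributions are tamper-evident. This is a toy, assumption-level sketch, not the disclosed ledger format:

```python
import hashlib
import json

# Toy hash-chained ledger: each block commits to its playback data and to
# the previous block's hash, so altering any recorded usage data or link
# invalidates every later hash.

def make_block(prev_hash, playback_data):
    """Create a block whose hash covers the data and the previous hash."""
    body = json.dumps({"prev": prev_hash, "data": playback_data},
                      sort_keys=True)
    return {"prev": prev_hash, "data": playback_data,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Check that every block links to its predecessor and hashes match."""
    for prev, block in zip(chain, chain[1:]):
        body = json.dumps({"prev": block["prev"], "data": block["data"]},
                          sort_keys=True)
        if block["prev"] != prev["hash"]:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
    return True

# Record hypothetical usage data for one audio track in a mix.
genesis = make_block("0" * 64, {"mix": "start"})
b1 = make_block(genesis["hash"],
                {"track": "bass_loop_07", "seconds_played": 42})
chain = [genesis, b1]
```

In the disclosed system each entry would also carry signature information for the individual audio track; the sketch shows only the chaining that makes the usage records verifiable.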
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for composing music and for creating and editing digital music and sounds; downloadable software for creating and synthesizing digital art; interactive software for generating computer-synthesized music and art; downloadable mobile applications for creating and streaming computer generated music; downloadable mobile applications for creating and viewing computer generated art; downloadable mobile applications for creating and editing computer-synthesized algorithmically generated music and art; downloadable mobile applications for providing interactive music and multimedia content; downloadable interactive entertainment software featuring artificial intelligence for creating computer-synthesized algorithmically generated music and art.

Transmission and delivery of computer-synthesized algorithmically generated digital music and art via wireless communication networks and the Internet.

Providing on-line computer-synthesized algorithmically generated music for streaming not for downloading; providing information relating to computer-synthesized algorithmically generated music and art; providing computer synthesized and algorithmically generated music and art via an internet website portal; providing non-downloadable digital music from a global computer network; providing non-downloadable computer-synthesized algorithmically generated digital art from a global computer network; providing non-downloadable computer-synthesized algorithmically generated audio, video and images, via a website and an online database; provision of a computer-synthesized algorithmically generated light displays and digital art for entertainment purposes; entertainment, namely, providing computer-synthesized algorithmically generated music and art to users online via a communication network; providing computer-synthesized algorithmically generated art, music, and sound recordings via electronic communication networks.

Hosting a website featuring technology that enables users to stream computer generated music and sounds; computer services, namely, hosting of platforms featuring technology that allows users to produce, edit and stream computer-synthesized and algorithmically generated audio and video files via an interactive website and mobile applications; computer services, namely, hosting an interactive website featuring technology that allows users to generate and stream computer-synthesized algorithmically generated music and sounds; computer services, namely, hosting an interactive website featuring technology that allows users to generate and view computer-synthesized algorithmically generated art.
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Downloadable software for composing music and for creating and editing digital music and sounds; downloadable computer software for creating and synthesizing digital art images and photographs; downloadable interactive entertainment software for generating computer-synthesized music and digital art images; downloadable mobile applications for creating and streaming computer generated music; downloadable mobile applications for creating and viewing computer generated digital art images; downloadable mobile applications for creating and editing computer-synthesized algorithmically generated music and art; downloadable mobile applications for broadcasting and streaming of interactive music; downloadable interactive entertainment software featuring artificial intelligence for creating computer-synthesized algorithmically generated music and art. (1) Transmission and delivery of computer-synthesized algorithmically generated digital music and art via wireless communication networks and the Internet.
(2) Providing on-line computer-synthesized algorithmically generated music for streaming not for downloading; providing information relating to computer-synthesized algorithmically generated music and art; providing computer synthesized and algorithmically generated music and art via an internet website portal; providing non-downloadable digital music from a global computer network; providing non-downloadable computer-synthesized algorithmically generated digital art from a global computer network; providing non-downloadable computer-synthesized algorithmically generated audio, video and images, via a website and an online database; provision of a computer-synthesized algorithmically generated light displays and digital art for entertainment purposes; entertainment, namely, providing computer-synthesized algorithmically generated music and art to users online via a communication network; providing computer-synthesized algorithmically generated art, music, and sound recordings via electronic communication networks.
(3) Hosting a website featuring technology that enables users to stream computer generated music and sounds; computer services, namely, hosting of platforms featuring technology that allows users to produce, edit and stream computer-synthesized and algorithmically generated audio and video files via an interactive website and mobile applications; computer services, namely, hosting an interactive website featuring technology that allows users to generate and stream computer-synthesized algorithmically generated music and sounds; computer services, namely, hosting an interactive website featuring technology that allows users to generate and view computer-synthesized algorithmically generated art.
Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining loops is performed based on the set of composition rules and attributes of loops in the set of loops.
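The abstract above describes deriving composition rules from existing music content and then selecting and layering loops under those rules. A minimal sketch of that pipeline, offered for illustration only: all function names and data shapes are assumptions, and the co-occurrence-based rule format stands in for whatever rule representation the disclosed system actually uses.

```python
import random

def derive_rules(existing_tracks):
    """Derive simple composition rules by counting which loop-category
    pairs co-occur (overlap in time) in existing tracks."""
    pair_counts = {}
    for track in existing_tracks:
        categories = [loop["category"] for loop in track]
        for i, a in enumerate(categories):
            for b in categories[i + 1:]:
                key = tuple(sorted((a, b)))
                pair_counts[key] = pair_counts.get(key, 0) + 1
    # A "rule" here is simply: this pair of categories has been
    # observed together, so the two may overlap in new content.
    return set(pair_counts)

def compose(loop_library, rules, layers=3):
    """Select loops such that every pair of selected (overlapping)
    loops is permitted by the derived rules."""
    selected = []
    for loop in random.sample(loop_library, len(loop_library)):
        allowed = all(
            tuple(sorted((loop["category"], s["category"]))) in rules
            for s in selected
        )
        if allowed:
            selected.append(loop)
        if len(selected) == layers:
            break
    return selected
```

Under this sketch, `derive_rules` plays the role of analyzing combinations of loops in existing content, and `compose` plays the role of selecting and combining loops so that the overlapping layers respect the learned rules.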
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for composing music and for creating and editing digital music and sounds; downloadable software for creating and synthesizing digital art; interactive software for generating computer-synthesized music and art; downloadable mobile applications for creating and streaming computer generated music; downloadable mobile applications for creating and viewing computer generated art; downloadable mobile applications for creating and editing computer-synthesized algorithmically generated music and art; downloadable mobile applications for providing interactive music and multimedia content; downloadable interactive entertainment software featuring artificial intelligence for creating computer-synthesized algorithmically generated music and art
Transmission and delivery of computer-synthesized algorithmically generated digital music and art via wireless communication networks and the Internet
Providing on-line computer-synthesized algorithmically generated music for streaming not for downloading; providing information relating to computer-synthesized algorithmically generated music and art; providing an Internet website portal in the field of computer synthesized and algorithmically generated music and art; providing non-downloadable digital music from a global computer network; providing non-downloadable computer-synthesized algorithmically generated digital art from a global computer network; providing a website and an online database featuring non-downloadable computer-synthesized algorithmically generated audio, video and images; provision of computer-synthesized algorithmically generated light displays and digital art for entertainment purposes; entertainment, namely, providing computer-synthesized algorithmically generated music and art to users online via a communication network; providing an online database via a communication network featuring computer-synthesized algorithmically generated art, music, and sound recordings
Providing a website featuring technology that enables users to stream computer generated music and sounds; computer services, namely, providing a software platform that allows users to produce, edit and stream computer-synthesized and algorithmically generated audio and video files via an interactive website and mobile applications; computer services, namely, providing an interactive website featuring technology that allows users to generate and stream computer-synthesized algorithmically generated music and sounds; computer services, namely, providing an interactive website featuring technology that allows users to generate and view computer-synthesized algorithmically generated art
Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining loops is performed based on the set of composition rules and attributes of loops in the set of loops.
Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
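The abstract above describes mapping external data (user input such as a desired energy level, plus environment information) to musical attributes and then selecting and layering stored loops accordingly. A minimal sketch of one way that could look; the single "energy" attribute, the blending heuristic, and every name here are illustrative assumptions, not the disclosed method:

```python
def energy_from_inputs(user_energy, ambient_noise_db):
    """Blend a user-requested energy level (0.0-1.0) with environment
    information: a louder environment nudges the target upward."""
    noise_term = min(max((ambient_noise_db - 40) / 60, 0.0), 1.0)
    blended = 0.7 * user_energy + 0.3 * noise_term
    return min(max(blended, 0.0), 1.0)

def select_layers(loop_library, target_energy, layers=3, tolerance=0.2):
    """Choose up to `layers` stored loops whose tagged energy falls
    within `tolerance` of the target, closest matches first."""
    matches = [l for l in loop_library
               if abs(l["energy"] - target_energy) <= tolerance]
    matches.sort(key=lambda l: abs(l["energy"] - target_energy))
    # "Layering" in this sketch is simply returning the chosen loops
    # together; a real system would mix them into an audio stream.
    return matches[:layers]
```

The same attribute value could equally drive the sound and light control information the abstract mentions, since it is just a number derived from the external inputs.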
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G10H 1/00 - Details of electrophonic musical instruments
G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Transmission and delivery of computer-synthesized algorithmically generated digital music and art via wireless communication networks and the Internet
Downloadable software for composing music and for creating and editing digital music and sounds; downloadable software for creating and synthesizing digital art; interactive software for generating computer-synthesized music and art; downloadable mobile applications for creating and streaming computer generated music; downloadable mobile applications for creating and viewing computer generated art; downloadable mobile applications for creating and editing computer-synthesized algorithmically generated music and art; downloadable mobile applications for providing interactive music and multimedia content; downloadable interactive entertainment software featuring artificial intelligence for creating computer-synthesized algorithmically generated music and art
Providing on-line computer-synthesized algorithmically generated music for streaming not for downloading; providing information relating to computer-synthesized algorithmically generated music and art; providing an Internet website portal in the field of computer synthesized and algorithmically generated music and art; providing non-downloadable digital music from a global computer network; providing non-downloadable computer-synthesized algorithmically generated digital art from a global computer network; providing a website and an online database featuring non-downloadable computer-synthesized algorithmically generated audio, video and images; provision of computer-synthesized algorithmically generated light displays and digital art for entertainment purposes
Providing a website featuring technology that enables users to stream computer generated music and sounds; computer services, namely, providing a software platform that allows users to produce, edit and stream computer-synthesized and algorithmically generated audio and video files via an interactive website and mobile applications; computer services, namely, providing an interactive website featuring technology that allows users to generate and stream computer-synthesized algorithmically generated music and sounds; computer services, namely, providing an interactive website featuring technology that allows users to generate and view computer-synthesized algorithmically generated art; entertainment, namely, providing computer-synthesized algorithmically generated music and art to users online via a communication network; providing an online database via a communication network featuring computer-synthesized algorithmically generated art, music, and sounds
Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
H04B 1/00 - Details of transmission systems, not covered by a single one of groups H04B 3/00 - H04B 13/00; Details of transmission systems not characterised by the medium used for transmission