A video voicemail recording system enables a caller to leave a conventional audio voicemail message over the telephony connection used for routing a call or to leave a message of a different communication modality using a client application at the calling device. A call is routed to a client device from a calling device. In-call options for selection at the calling device are presented responsive to the call going unanswered, in which a first in-call option allows an operator of the calling device to record an audio-only voicemail message over the telephony service and a second in-call option allows the operator of the calling device to record or input a message of a second communication modality (e.g., a video message). A request to open the client application at the calling device is transmitted responsive to a selection of the second in-call option. The message is received in response thereto.
A conferencing server receives audio data from devices connected to a conference. The conferencing server generates multiple time-contiguous containers. Each time-contiguous container includes an identifier of an associated device of the devices and one or more payloads of the audio data from the associated device. Each payload has a predefined time length. The conferencing server transmits the multiple time-contiguous containers to a consumer server.
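The container-generation step this abstract describes can be sketched as follows. This is an illustrative sketch only; the names, the 20 ms payload length, and the byte rate are assumptions, not the patented implementation.

```python
# Sketch: split each device's audio bytes into fixed-length payloads and
# wrap them in per-device containers, as the abstract above describes.
PAYLOAD_MS = 20          # predefined time length per payload (assumed)
BYTES_PER_MS = 16        # assumed codec rate: 16 bytes of audio per ms

def make_containers(audio_by_device):
    """audio_by_device: dict mapping device_id -> raw audio bytes.
    Returns a list of containers, each holding one device's payloads."""
    chunk = PAYLOAD_MS * BYTES_PER_MS
    containers = []
    for device_id, audio in audio_by_device.items():
        payloads = [audio[i:i + chunk] for i in range(0, len(audio), chunk)]
        containers.append({"device_id": device_id, "payloads": payloads})
    return containers
```

Each container could then be serialized and transmitted to the consumer server independently, since it carries its own device identifier.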
A text message communication is received from an operator device. A telephone number is assigned to a contact center agent device to enable the contact center agent device to use the telephone number to participate in a text message communication. Use of the telephone number by other devices is restricted while the telephone number is assigned to the contact center agent device. At a conclusion of the text message communication, the telephone number is released from the contact center agent device to enable one of the other devices to use the telephone number to either continue the same text message communication or to participate in a different text message communication.
Various embodiments of an apparatus, method(s), system(s) and computer program product(s) described herein are directed to a Viseme Engine. The Viseme Engine receives audio data associated with a user account. The Viseme Engine predicts at least one viseme that corresponds with a portion of phoneme audio data and identifies one or more facial expression parameters associated with the predicted viseme. The facial expression parameters are applicable to a face model. The Viseme Engine renders the predicted viseme according to the one or more facial expression parameters.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06V 20/40 - Scenes; Scene-specific elements in video content
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
Methods and systems provide for presenting time distributions of participants across topic segments in a communication session. In one embodiment, the system connects to a communication session with a number of participants; receives a transcript of a conversation between the participants produced during the communication session, the transcript including timestamps for each utterance of a speaking participant; determines, based on analysis of the transcript, a meeting type for the communication session; generates a number of topic segments for the conversation and respective timestamps for the topic segments; for each participant, analyzes the time spent by the participant on each of the generated topic segments in the meeting; and presents, to one or more users, data on the time distribution of participants for each topic segment and across topic segments within the conversation.
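The per-participant time-distribution step this abstract describes can be sketched as an overlap computation between utterance timestamps and topic-segment timestamps. The data shapes below (speaker/start/end tuples) are illustrative assumptions.

```python
# Sketch: sum, per participant, the speaking time that falls within each
# topic segment, using the transcript timestamps the abstract describes.
def time_distribution(utterances, segments):
    """utterances: list of (speaker, start, end) in seconds; segments:
    list of (topic, start, end). Returns {speaker: {topic: seconds}}."""
    dist = {}
    for speaker, u_start, u_end in utterances:
        for topic, s_start, s_end in segments:
            overlap = min(u_end, s_end) - max(u_start, s_start)
            if overlap > 0:
                dist.setdefault(speaker, {}).setdefault(topic, 0)
                dist[speaker][topic] += overlap
    return dist
```

The resulting per-topic totals are what would be presented to users as the time distribution across topic segments.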
Sentiment scores are presented within a communication session. In one embodiment, a system extracts, from a transcript, utterances including one or more sentences spoken by the participants. The system identifies a subset of the utterances spoken by a subset of the participants. For each utterance, the system determines a word sentiment score for each word in the utterance, and determines an utterance sentiment score based on the word sentiment scores. The system determines an overall sentiment score for a conversation based on the utterance sentiment scores. The system transmits, to one or more client devices, the overall sentiment score for the conversation.
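The three-level roll-up this abstract describes (word scores into an utterance score, utterance scores into an overall score) can be sketched as below. The tiny lexicon and the averaging rule are stand-in assumptions, not the actual scoring model.

```python
# Sketch: word sentiment -> utterance sentiment -> overall sentiment,
# using simple averaging and a toy lexicon (both assumptions).
LEXICON = {"great": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def utterance_sentiment(utterance):
    """Average the per-word scores of one utterance."""
    scores = [LEXICON.get(word.lower(), 0.0) for word in utterance.split()]
    return sum(scores) / len(scores) if scores else 0.0

def overall_sentiment(utterances):
    """Average the utterance scores across the conversation."""
    scores = [utterance_sentiment(u) for u in utterances]
    return sum(scores) / len(scores) if scores else 0.0
```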
Video-assisted presence detection is used to enhance a user experience in telephony communications. Image data, video data, or both, from a camera are used to determine whether a user is present at their device before a call is transferred to him or her. The video-assisted presence detection can be implemented based on a privacy setting. For example, one implementation allows a system to have partial access to a detector to detect user presence without capturing facial information, and without identifying that person. Another implementation allows the system to have partial access to the detector to detect user presence without having access to a video feed of the detector.
H04M 3/58 - Arrangements for transferring received calls from one subscriber to another; Arrangements affording interim conversations between either the calling or the called party and a third party
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
8.
Extracting Filler Phrases From A Communication Session
Methods and systems provide for extracting filler words and phrases from a communication session. In one embodiment, the system receives a transcript of a conversation involving one or more participants produced during a communication session; extracts, from the transcript, utterances including one or more sentences spoken by the participants; identifies a subset of the utterances spoken by a subset of the participants associated with a prespecified organization; extracts filler phrases within the subset of utterances, the filler phrases each comprising one or more words representing disfluencies within a sentence, where extracting the filler phrases includes applying filler detection rules; and presents, for display at one or more client devices, data corresponding to the extracted filler phrases.
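Rule-based filler detection of the kind this abstract describes can be sketched as below. The filler lists and the word-boundary matching rule are illustrative assumptions, not the patented detection rules.

```python
# Sketch: detect filler phrases and filler words in an utterance using
# simple word-boundary rules (the lists and rules are assumptions).
import re

FILLER_PHRASES = ["you know", "i mean", "sort of", "kind of"]
FILLER_WORDS = {"um", "uh", "basically", "actually"}

def extract_fillers(utterance):
    """Return the filler phrases and words found in one utterance."""
    found = []
    lowered = utterance.lower()
    for phrase in FILLER_PHRASES:
        found.extend([phrase] * len(re.findall(r"\b" + phrase + r"\b", lowered)))
    for word in re.findall(r"\b\w+\b", lowered):
        if word in FILLER_WORDS:
            found.append(word)
    return found
```

Aggregating these per participant over the transcript yields the data the abstract says is presented at client devices.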
Selections of content shared from a remote device during a video conference are copied to a destination of a computing device connected to the video conference live or at which a recording of the video conference is viewed. The content shared from the remote device during the video conference is output at a display of the computing device. A portion of the content is selected according to an instruction received from a user of the computing device while output at the display of the computing device to copy to a destination associated with software running at the computing device. The portion of the content is identified using a machine vision process performed against the content while output at the display of the computing device. The portion of the content is then copied to the destination.
Methods, systems, and apparatus, including computer programs encoded on computer storage media relate to a method for ingesting 3D objects from a virtual environment for 2D data representation. The system may provide a video conference session including a first video stream of a video conference participant and a second video stream of a virtual environment. The system may receive a 3D data representation of a 3D object in the virtual environment and generate a 2D data representation of the 3D object based on the 3D data representation. The system may display the 2D data representation in the video conference session.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G03H 1/02 - Holographic processes or apparatus using light, infrared, or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto; Details
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A communications monitoring software identifies first communications between a user and other users. The communications monitoring software associates respective communication scores with the first communications based on respective identified communications features of the first communications. For each audio-only communication of the first communications, the identified communications features include at least two of: a number of users in the audio-only communication; whether the user enabled a microphone associated with the user; whether another user in the audio-only communication enabled a microphone associated with that other user; a total amount of time that the user spoke; or a total amount of time that the other user spoke. The communications monitoring software identifies a subset of second communications based on the respective communication scores associated with the first communications. The subset of the second communications is presented in a user interface to the user.
A communication server detects a first communication request from a first caller device to a second caller device. The communication server detects a second communication request from the second caller device to the first caller device. The first communication request and the second communication request both occur within a threshold time period. The communication server establishes, without additional input from the first caller device and the second caller device, an active session in response to the first communication request. The active session is established between the first caller device and the second caller device. The communication server dismisses the second communication request.
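The crossed-call ("glare") handling this abstract describes — two mutual requests within a threshold collapse into one session — can be sketched as below. The request shape and the threshold value are illustrative assumptions.

```python
# Sketch: when two devices call each other within a threshold window,
# establish one session from the first request and dismiss the second.
THRESHOLD_SECONDS = 5.0  # assumed threshold time period

def resolve_requests(first, second):
    """Each request is (caller, callee, timestamp). Returns an
    ('establish', session, ('dismiss', request)) tuple when the requests
    are a mutual pair within the threshold, otherwise None."""
    mutual = first[0] == second[1] and first[1] == second[0]
    within = abs(second[2] - first[2]) <= THRESHOLD_SECONDS
    if mutual and within:
        session = (first[0], first[1])  # establish from the first request
        return ("establish", session, ("dismiss", second))
    return None
```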
A software platform receives a request from a device of a first user for a dynamic user profile of a second user. The software platform obtains profile information of the second user based on a number of future conferences between the first user and the second user. The software platform generates the dynamic user profile of the second user based on the obtained profile information. The software platform transmits the dynamic user profile of the second user to the device of the first user for display.
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
A client is configured to control one or more functions of another client remotely via a wireless connection. Enabling control of the one or more functions of the other client includes receiving a control command from a first client. The control command indicates a function to be performed by a second client. The control command is transmitted to the second client when a binding status indicates a binding between the first client and the second client.
A client device determines that a telephony outage is occurring. The client device connects to an on-premises telephony node using an encrypted password at the client device. The client device accesses a set of telephony services via the on-premises telephony node.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
A velocity of a device of a user is detected using one or more sensors of the device. A presence update is transmitted to a server based on the detected velocity. The server receives the presence update and updates a presence status based on the presence update. An incoming communication is routed to a first device associated with a first user when a modality of the incoming communication is a first modality and to a second device associated with a second user when the modality is a second modality.
Respective video streams are received from individual virtual cameras placed in a virtual environment. Each of the individual virtual cameras is focused on and associated with one of the respective digital representations of users. An environment video stream is received from an environment virtual camera that is placed in the virtual environment and that captures at least one of the respective digital representations. The environment virtual camera is not associated with any of the respective digital representations. Video streams that include the respective video streams and the environment video stream are then transmitted to a client device associated with one of the users.
A first indication that a task was assigned to a user of a client device is transmitted. A second indication that the task was completed is received from the client device. In response to the second indication, a task status corresponding to the task is updated and a user status corresponding to the user is updated. The updated task status is transmitted in response to a query.
Respective devices of conference participants are connected to a conference hosted by a conferencing server. Respective commands are transmitted to the respective devices to initiate distributed recording. Subsequent to a termination of the conference, respective high-resolution media files are received from the respective devices. At least a subset of the respective high-resolution media files are composited into a high-resolution output media file. Subsequent to the termination of the conference, respective audio media files and/or screen content media file corresponding to at least the subset of the respective high-resolution media files may also be received.
A user device is connected to a conference hosted by a conferencing server. A media stream is received from the user device. A derived media stream obtained from the media stream is streamed to the conferencing server during the conference. During the conference, the media stream is stored in a media file at the user device. The media file is transferred from the user device to the conferencing server. The media file can be transferred subsequent to termination of the conference. The media file can include one or more of a video stream, an audio stream, or a content stream.
A server accesses a natural language query. The server facilitates a mapping of the natural language query to a vector using a query-to-vector engine. The server matches the vector to an intent representing a prediction associated with the natural language query. The server provides a response to the natural language query based on the intent.
Digital representations of participants are displayed in a virtual environment. The participants include augmented or virtual reality (AR/VR) participants and traditional video conference participants. An input is received from a traditional video conference participant via an interactive interface presented on a device associated with the traditional video conference participant. The input indicates a new virtual location within the virtual environment. The virtual environment is updated to reflect the new virtual location of the digital representation of the traditional video conference participant based on the input. A two-dimensional video stream is generated from the perspective of the new virtual location and transmitted to the device associated with the traditional video conference participant.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
23.
Normalized Resolutions For Conference Participants
Normalized resolutions are determined for first and second regions of interest of an initial video stream captured by a video capture device located within a physical space. The first region of interest is associated with a first conference participant within the physical space and the second region of interest is associated with a second conference participant within the physical space. Instructions are transmitted to the video capture device to cause the video capture device to capture, at the normalized resolutions, a first video stream associated with the first region of interest and a second video stream associated with the second region of interest. The first and second video streams conform sizes and quality levels of the first and second conference participants within separate user interface tiles of a conferencing software user interface to which the first and second video streams are output.
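The normalization this abstract describes — sizing each region of interest so the participants render uniformly in their user interface tiles — can be sketched as a scale-to-common-height computation. The function name, data shape, and target height are illustrative assumptions.

```python
# Sketch: scale each detected region of interest to a common output
# height, preserving aspect ratio, so participants appear the same size.
def normalized_resolutions(rois, target_height=360):
    """rois: dict participant -> (width, height) of the detected region.
    Returns dict participant -> normalized (width, height)."""
    out = {}
    for who, (w, h) in rois.items():
        scale = target_height / h
        out[who] = (round(w * scale), target_height)
    return out
```

The resulting resolutions would be sent to the video capture device as the capture instructions the abstract describes.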
Agenda intelligence software determines completions of agenda items of a multi-participant communication using a transcription generated in real-time during the multi-participant communication and generates an agenda for a next multi-participant communication including incomplete agenda items. A software platform may in some cases include the agenda intelligence software, a transcription engine which generates the real-time transcription of the multi-participant communication, and a communication system which implements the multi-participant communication. The agenda generated for the next multi-participant communication may further include agenda items not previously identified for the multi-participant communication and determined using the real-time transcription of same.
G06Q 10/109 - Time management, e.g. calendars, reminders, meetings or time accounting
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
25.
Locally Recorded Media Inclusion In A Recording Of A Multi-Participant Communication
A client device is disconnected during a multi-participant communication, such as a call or a conference. An indication of the disconnection is transmitted to the client device to cause an agent at the client device to record media locally at the client device. The media recorded by the agent at the client device based on the indication of the disconnection is later received and included within a recording of the communication. For example, a gap of the recording in which the disconnection occurred may be identified, such as by performing a comparison of media within the recording to identify a start time of the gap and an end time of the gap. The media is then inserted within a portion of the recording of the multi-participant communication corresponding to the gap.
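The gap-location step this abstract describes — comparing media within the recording to find the start and end of the missing span — can be sketched as below. The chunk representation is an illustrative assumption.

```python
# Sketch: walk the recording's media timeline and report the first
# discontinuity as the (start, end) of the disconnection gap.
def find_gap(chunks):
    """chunks: list of (start, end) media times already present in the
    recording, sorted by start. Returns (gap_start, gap_end) of the first
    gap, or None if the recording is continuous."""
    for (_, prev_end), (next_start, _) in zip(chunks, chunks[1:]):
        if next_start > prev_end:
            return (prev_end, next_start)
    return None
```

The locally recorded media would then be inserted into the recording over exactly this interval.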
A microphone of a primary client device is used to capture audio for a conference participant. During the conference, audio is sampled from the microphone of the primary client device and from microphones of one or more secondary client devices at the same location as the primary client device. Based on a score calculated for the audio sampled from the secondary client device being higher than a score calculated for audio sampled from the primary client device, the microphone of the secondary client device is selected for audio capture for the remote conference participant. The audio is output through conferencing software to which the primary client device is connected via a user interface tile for the conference.
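The score-based selection this abstract describes can be sketched as picking the highest-scoring sampled microphone. The scoring function is a placeholder assumption (it could, for example, weigh signal level against noise).

```python
# Sketch: select the capture microphone whose sampled audio scores
# highest under a supplied scoring function.
def select_microphone(samples, score):
    """samples: dict device_id -> audio sample; score: callable rating a
    sample. Returns the device_id whose sample scores highest."""
    return max(samples, key=lambda device_id: score(samples[device_id]))
```

For example, with a toy score that just measures mean amplitude, a nearby laptop microphone with the loudest clean signal would be selected over the primary device's microphone.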
G10L 25/60 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for measuring the quality of voice signals
H04W 4/02 - Services making use of location information
27.
Merging A Call With A Video-Enabled Virtual Meeting
A call is merged with a virtual meeting to allow an audio-only caller to join the virtual meeting while bypassing one or more security checks configured for the virtual meeting. After the virtual meeting is initiated, a call is established between a phone device of the audio-only caller and a customer endpoint. A request is received from the customer endpoint to join the phone device to the virtual meeting. A channel is opened between the phone device and a web service associated with the virtual meeting. The phone device is then joined to the virtual meeting over the channel. To facilitate a seamless transition from the call to the virtual meeting, the call may be maintained as an audio channel of the virtual meeting for the audio-only caller.
Session content is presented to the participant devices during a communication session. The participant devices transmit requests for notes to be generated for the session content. One or more segments of the communication session are determined when the number of participant devices that requested notes exceeds a threshold. Information associated with the one or more segments is transmitted to a participant device of the participant devices.
H04L 65/402 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel non-real time sessions, e.g. downloading a file in a parallel FTP session, initiating an email or combinational services
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
29.
Virtual Backgrounds For Other Conference Participants
An image is received from a first device of a first conference participant. A virtual background image is selected based on a target-background mode set by a second conference participant. A modified image is obtained by replacing a background segment of the image with at least a portion of the virtual background image. The modified image is displayed on a second device of the second conference participant. The virtual background image can be selected based on a time of day. A user interface may be presented on the second device that enables the second conference participant to select the target-background mode.
A conference service number system enables the reconfiguration of an existing telephone number as a service number usable for selectively routing calls to each of a client endpoint and a dedicated conference software instance. A conferencing system implements separate conferencing software instances for individual operators to whom unique telephone numbers are assigned. A telephony system facilitates calls to and from telephone numbers and implements a menu system (e.g., an interactive voice response (IVR) menu) for presenting a caller with options to route a call placed to a specific telephone number either to the subject operator (e.g., to a client device of that operator) or to a conferencing software instance implemented specifically for that operator. The call is accordingly routed either to a device of the operator or to the conferencing software instance based on the selection by the caller.
Transparent frames are utilized in a video conference. The transparent frames are used in conjunction with transparent screens and cameras positioned behind the transparent screens. Streamed videos of one or more remote participants are displayed on the transparent screens in every other frame of the video such that the transparent screen alternates between displaying a frame of the remote participant and a transparent frame. Video frames of the one or more participants are captured during the display of the transparent frames. The captured video frames of the participants are displayed at one or more remote screens viewed by the one or more remote participants.
Low lighting adjustments can be provided within a video communication session. A system generates a lighting adjustment request including a lighting adjustment depth, then segments a region of a video frame into texture sub-regions. The system smooths areas that are adjacent to the texture sub-regions. The system detects an amount of lighting using an artificial intelligence model and modifies the video frame to adjust the amount of lighting. The amount of adjustment of lighting corresponds to the lighting adjustment depth.
A conference system enables a communication session between two or more participants. During the communication session, a participant device displays a visual code that is scanned by a device whose identity is unknown. Based on the scan of the visual code, the conference system transfers the communication session such that it is continued between the device whose identity is unknown and the one or more devices of the remaining participants of the conference.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
Cloud-connected wireless screen extension is performed to facilitate a screen share of content using a virtual display instantiated based on a connection between a first device and a second device. A virtual display is instantiated at a first device associated with a conference participant of a video conference based on a connection established between the first device and a second device. A screen share of first content during the video conference is then facilitated from the first device via the second device using the virtual display while second content excluded from the screen share is output at a display of the first device. In this way, the second device can be adapted as an additional (e.g., extended) display available for screen sharing content from the first device during a video conference.
The audio stream of a participant to a conference is modified within the conference to change a perceptible output of a characteristic of speech represented by the audio stream. An audio stream is obtained from a participant device connected to a conference. The audio stream represents speech of a user of the participant device. A user request to modify a first characteristic of the speech is initiated within the conference. The first characteristic is modified without modifying other characteristics of the speech to produce a modified audio stream, such that a second characteristic of the speech remains unmodified within the modified audio stream. An output of the modified audio stream within the conference is then caused in place of the audio stream. The audio stream modification as disclosed herein may be performed while the conference remains ongoing or during playback of a recording of the conference.
Voice and video features of a software platform are integrated to enable customization of software services of the software platform on a per-customer basis. Routing rules are defined to route calls to certain phone numbers from certain software services. Thereafter, when an outbound call is initiated by a software service, the call is received via a telephony system associated with the software platform, and a routing rule customized for the software platform is identified based on information signaled with the call, such as an identifier associated with the software service. A phone number is determined based on the routing rule, and the outbound call reporting the determined phone number is delivered to a destination phone number.
A conference gallery view intelligence system determines at least two regions of interest within a conference room based on an input video stream received from a video capture device located within the conference room. An output video stream for rendering within conferencing software is produced for each of the at least two regions of interest. The output video stream for each of the at least two regions of interest is then transmitted to one or more client devices connected to the conferencing software.
A computer stores packets from a first device at a first buffer. The computer decodes the packets at a decoder to obtain decoded packets. The computer encodes the decoded packets at an encoder to obtain encoded packets. The computer transmits the encoded packets from the encoder to a storage unit. The computer fetches the encoded packets from the storage unit using a second buffer. The computer causes a transmitter to transmit the encoded packets from the second buffer to a second device.
A virtual assistant is configured to automatically identify tasks for a user by processing text from various applications of a unified communications platform (e.g., transcripts of conferences, voicemails, emails, and chat logs) to detect action items and infer associated action item data (e.g., task owner, location, and due date). For example, a virtual assistant system may be configured to utilize machine learning natural language understanding technology to extract action items from various input text to form a to-do list with due dates for the task owner. In some implementations, a two-tier machine learning model topology is used to identify action items in strings. The system may recognize named entities such as nouns, verbs, dates/times, and locations within action item sentences. The output information may be displayed on a dashboard, in push notifications, or within other user interface aspects of a personal device, thus providing notification or task planning for personal assistance.
A contact center server obtains historical contact center data of a contact center by tracking contact center conditions. The contact center server trains, based on the historical contact center data, multiple modeling engines to generate agent demand data representing a number of agents working at a given time. The contact center server trains, based on the historical contact center data and performance data of the multiple modeling engines, a combination engine to generate a combination of one or more modeling engines from the multiple modeling engines. The contact center server provides an output representing the trained combination engine and the multiple modeling engines.
A contact center server receives a request to determine a number of agents working at a future time. The contact center server generates, using a combination engine, a combination of one or more modeling engines from multiple modeling engines. The contact center server determines, using the combination of the one or more modeling engines, the number of agents. The contact center server provides an output representing the number of agents.
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Inter-party communications, such as telephony communications, virtual meeting communications, or messaging communications, are switched between clients while in progress. An inter-party communication may be detected at a first client. A channel may then be opened between the first client and a second client responsive to an operator initiating switching the inter-party communication from the first client to the second client. The inter-party communication may be switched from the first client to the second client using the channel while the inter-party communication remains in progress. The inter-party communication may then be continued at the second client.
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable software for audio teleconferencing, video
teleconferencing, network conferencing, web conferencing,
conducting telephony operations, conducting audio and video
telecommunications, web messaging, instant messaging,
sharing electronic interactive whiteboards, and accessing a
generative artificial intelligence (AI) assistant;
downloadable computer software for use in composing,
reading, managing, and transmitting email; downloadable
computer software for use in creating, managing, sharing,
and syncing electronic calendars, and in scheduling of
meetings and events; downloadable software for accessing
employee communications platforms and intranets featuring
electronic messaging, video streaming, document and
information sharing, virtual collaboration, news feeds,
podcasts, third-party app integrations, and related employer
analytic tools, namely, polls and surveys, and moderation
controls. Provision of an on-line marketplace for suppliers and users
of software applications; human resources consultancy. Audio teleconferencing; video teleconferencing; network
conferencing services; web conferencing services; instant
messaging services; web messaging; audio and video
telecommunications services; voice over internet protocol
(VOIP) services; electronic transmission of email and
messages; voicemail services; communication services,
namely, transmission of voice, audio, visual images, and
data by telecommunications networks, wireless communication
networks, the internet, information services networks, and
data networks; telecommunication access services, namely,
providing access to an omnichannel contact center,
artificial intelligence (AI) chatbot services, file sharing,
web messaging chat services, phone chat services, video
communications services, all for use in the fields of
customer support, customer relationship management, customer
service, customer engagement, and helpdesk functionality;
providing online chat rooms for social networking. Software as a service (SaaS) services featuring software for
live digital communications, namely, live video and audio
conferencing with multiple simultaneous users, audio
teleconferencing, video teleconferencing, network
conferencing, web conferencing, conducting telephony
operations, conducting audio and video telecommunications,
web messaging, instant messaging, electronic mail,
presenting electronic interactive whiteboards, and accessing
a generative artificial intelligence (AI) assistant;
computer services, namely, hosting electronic mail servers;
software as a service (SaaS) services featuring
non-downloadable software for use in managing and sharing
contact information; software as a service (SaaS) services
featuring non-downloadable software for use in creating,
managing, sharing, and syncing electronic calendars and in
scheduling of meetings and events; hosting of websites
featuring technology that enables users to search for,
register for attendance at, and virtually attend online
business, educational, social and entertainment events;
providing on-line non-downloadable web-based software that
enables users to create and host content for ticketed online
events; software as a service [SaaS] services, featuring
software which uses generative artificial intelligence (AI)
to create summaries of video meetings, audio meetings, text
conversations, and chat messages, to generate email replies
and chat messages, to create real-time meeting transcripts,
to create ideas and organization for whiteboards, to create
tasks based on meeting content, and to assist in meeting
scheduling; contact center as a service (CCaaS) services,
namely, platform as a service featuring software for
providing access to a cloud based omnichannel contact center
platform; platform as a service (PaaS) featuring computer
software platforms for monitoring, controlling, and managing
omnichannel call centers; platform as a service (PaaS)
featuring computer software platforms for collection,
management, analyzation, and visualization of customer and
employee data and metrics for use with operation of
omnichannel call centers; platform as a service (PaaS)
services featuring software for customer relationship
management (CRM); software as a service (SaaS) services
featuring software for monitoring, controlling, and managing
omnichannel call centers; software as a service (SaaS)
services featuring software for collection, management,
analyzation, and visualization of customer and employee data
and metrics for use with operation of omnichannel call
centers; software as a service (SaaS) services featuring
software for customer relationship management (CRM);
programming of computer software for others, namely,
programming computer programs for use in the fields of
customer support, customer relationship management, customer
service, customer engagement, and helpdesk functionality;
design, development and customization of computer software
in the fields of customer support, customer relationship
management, customer service, customer engagement, and
helpdesk functionality; providing temporary use of online
non-downloadable computer artificial intelligence (AI)
chatbot software for simulating conversations; providing
temporary use of online, non-downloadable computer software
for collecting, reviewing, and analyzing business
information to support customer service; providing temporary
use of online, non-downloadable computer software for
enabling electronic communications, namely, file sharing,
email, chat, instant messaging, video, artificial
intelligence (AI) chatbot chats, digital voice, and voice
over internet protocol (VoIP); providing temporary use of
online, non-downloadable computer software for management of
inquiries from internal teams and departments; providing
temporary use of non-downloadable computer software for data
aggregation to visualize, evaluate, analyze, and collect
business data and metrics; providing temporary use of
non-downloadable computer software using artificial
intelligence for resolving inquiries; design consulting
services in the field of employee communication systems;
information technology consultancy in the field of employee
communication systems, namely, consulting relating to
installation, maintenance and repair of communication
systems software; software as a service (SaaS) services
featuring software for employee communications platforms and
intranets featuring electronic messaging, video streaming,
document and information sharing, virtual collaboration,
news feeds, podcasts, third-party app integrations, and
related employer analytic tools, namely, polls and surveys,
and moderation controls. Online social networking services.
44.
Assigning Display Locations For A Series Of Video Conferences
Methods, systems, and apparatus, including computer programs encoded on computer storage media related to display configuration for a communications session. The system assigns a display location for each of multiple meeting participants. A meeting participant is associated with a unique identifier of the meeting participant. A user interface configured to display meeting participants at display locations is displayed. After a particular meeting participant has joined a first video conferencing session, the user interface displays the particular meeting participant at their assigned display location.
Methods, systems, and apparatus, including computer programs encoded on computer storage media related to multi-stream video encoding for screen sharing in a communications session. The system may determine an active pixel area and a remaining pixel area of a video region. A first video stream of the active pixel area is generated at a first frame rate. A second video stream of the remaining pixel area is generated at a second frame rate lower than the first frame rate. A client device may transmit the first video stream and the second video stream to a second client device.
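The two-rate encoding described above can be sketched as a per-frame decision about which streams carry a given source frame (an illustrative sketch in Python; the 30 fps and 5 fps rates and the every-Nth-frame selection rule are assumptions, not taken from the abstract):

```python
def streams_for_frame(frame_index, base_fps=30, remaining_fps=5):
    """Decide which of the two streams carries a given source frame.

    The active pixel area is encoded at the full base rate; the remaining
    pixel area is encoded at a lower rate by keeping only every Nth frame.
    """
    step = base_fps // remaining_fps  # e.g. keep 1 of every 6 frames
    include_remaining = (frame_index % step == 0)
    return {"active": True, "remaining": include_remaining}
```

Under these assumed rates, every frame is encoded for the active area, while only one frame in six is encoded for the remaining area, reducing bandwidth for the mostly static region.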
Message-based interactive voice response (IVR) menu reconnection is used to reconnect a calling device to a destination at a specific node of a call path after the calling device disconnects from a call with the destination. Menu options of an IVR service presented during a call between the calling device and the destination are determined responsive to the calling device disconnecting from the call. A message including one or more selectable elements each associated with one of the menu options is then transmitted to the calling device. Responsive to a selection of a selectable element at the calling device, the calling device is connected to a destination endpoint. Thus, where the calling device had partially or fully traversed an IVR service during the call, the message-based IVR menu reconnection disclosed herein enables the calling device to reconnect to the destination without having to repeat that IVR service traversal.
A graphical user interface (GUI) may be configured for display at an output interface during a video conference. The GUI may comprise visual elements associated with participant devices of the video conference. For example, the visual elements may include video feeds and/or images associated with the participant devices. During the video conference, a first visual element may be moved to a location in the GUI based on a characteristic associated with the first visual element. The characteristic and the location may be based on an input. In some implementations, the visual elements may be arranged in a two-dimensional visual layout. In some implementations, the visual elements may be arranged in a three-dimensional visual layout. The visual elements may be moved, for example, based on a communication sent during the video conference, an arrival of a participant to the video conference, and/or a communication modality used during the video conference.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
The distribution of incoming queries to a customer interaction center agent group is parallel processed amongst agents of that group to improve queue wait times. A threshold number of queries that may be processed by agent devices associated with the agent group at a given time is defined based on a number of agents of the agent group that are available at the given time. In response to determining that the number of queries satisfies the threshold number of queries based on the number of agents that are available at a current time, a number of queries awaiting processing are distributed to one or more agent devices of the agent group. The threshold number of queries may be based on half of the number of agents that are available at the given time.
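The thresholding described above can be sketched as follows (an illustrative sketch in Python; the round-up rule and the function names are assumptions, since the abstract only says the threshold "may be based on half" of the available agents):

```python
import math

def query_threshold(available_agents):
    """Threshold number of queries processable at once: half the available
    agents, rounded up (the rounding direction is an assumption)."""
    return math.ceil(available_agents / 2)

def distribute(waiting_queries, in_flight, available_agents):
    """Return how many waiting queries may be handed to agent devices now,
    given queries already being processed (`in_flight`)."""
    capacity = query_threshold(available_agents) - in_flight
    return max(0, min(capacity, len(waiting_queries)))
```

With five available agents the threshold is three queries, so if one query is already in flight, at most two more waiting queries are distributed.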
A video conference system for sharing sensitive information between a first participant and a second participant. The video conference system receives a video stream captured by a camera device of a first participant of a video conference, analyzes the video stream during the video conference to identify portions of the video stream containing sensitive information presented by the first participant and captured by the camera device, masks the portions of the video stream containing sensitive information to generate a redacted video stream, and outputs the redacted video stream during the video conference for display at a display device of a second participant of the video conference.
A video conference system for execution of a workflow based on a type of object shared in a video conference. The video conference system receives a video stream captured by a camera device of a first participant of a video conference; performs object detection on the video stream during the video conference to determine an object type of an object presented by the first participant to the camera device; determines a workflow related to the object type based on the determined object type; and outputs data for executing the workflow at a device of a second participant of the video conference.
A system may transmit, from a first user device connected to a video conference and associated with a conference participant, an indication to a peripheral device to temporarily buffer a portion of media content of the video conference. The system may perform, within the video conference, a transition of the conference participant from the first user device to a second user device connected to the video conference including causing the peripheral device to relay the portion of the media content during the transition. In some implementations, the portion of the media content may be buffered by storing less than a predefined amount of time of the media content received by the first user device in a random access memory of the peripheral device.
A user device may use a light communications system to detect, from a light source in a physical space, light in a pattern that represents encoded information. The light communications system may include a sensor and an emitter. The user device may decode the encoded information based on the pattern to cause an update associated with a virtual meeting. In some implementations, the update may include the user device joining the virtual meeting using a meeting identifier. In some implementations, the update may include adjusting a microphone or a speaker in the physical space. In some implementations, the update may include a pairing between the user device and a second device that enables bidirectional communication between the user device and the second device.
A system device may determine a light pattern that represents encoded information associated with a virtual meeting. The system device may cause a light communications system to emit the light pattern. The light communications system may include a sensor and an emitter. The light pattern may configure a user device that detects the light pattern to perform an update associated with the virtual meeting. In some implementations, the light pattern may be constrained to a zone in a physical space. In some implementations, the system may use a facial recognition system to locate a person in the physical space when light from the light communications system is occluded.
A device of a proposed participant of the video conference receives an invitation to a video conference. The device provides, via a display of the device and based on the invitation, a graphical user interface element comprising a first graphical user interface icon for accepting the invitation, a second graphical user interface icon for rejecting the invitation, and a third graphical user interface icon for requesting a recording of the video conference. The first graphical user interface icon, the second graphical user interface icon, and the third graphical user interface icon are displayed simultaneously within the graphical user interface element. The device receives a selection of the third graphical user interface icon. The device transmits a signal requesting the recording in response to the selection.
Methods, systems, and apparatus, including computer programs encoded on computer storage media related to visual asset display and controlled movement in a video communication session. The system provides for display, via a user interface, video of a meeting participant during a video communication session. A visual asset for display is selected. The visual asset is provided for display via the user interface. The visual asset is maneuvered along a path about the user interface such that the visual asset moves about the user interface during the video communication session.
Various embodiments of an apparatus, method(s), system(s) and computer program product(s) described herein are directed to a Scaling Engine. The Scaling Engine identifies a background object portrayed in a background template for a video feed. The Scaling Engine determines a background template display position for concurrent display of the background object with video feed data. The Scaling Engine generates a scaled background template by modifying a current aspect ratio of the background template with the background object set at the background template display position according to a video feed aspect ratio. The Scaling Engine generates a merged video feed by merging the scaled background template with live video feed data, the merged video feed providing an unobstructed portrayal of the identified background object.
Methods and systems provide for applying a video effect to a video corresponding to a participant within a video communication session. The system displays a video for each of at least a subset of the participants and a user interface including a selectable video effects UI element. The system receives a selection by a participant of the video effects UI element. In response to receiving the selection, the system displays a variety of video effects options for modifying the appearance of the video and/or modifying a visual representation of the participant. The system then receives a selection by the participant of a video effects option, and further receives a subselection for customizing the amount of the video effect to be applied. The system then applies, in real time or substantially real time, the selected video effect in the selected amount to the video corresponding to the participant.
Methods, systems, and apparatus, including computer programs encoded on computer storage media related to display configuration for a communications session. The system assigns a display location for each of multiple meeting participants. A meeting participant is associated with a unique identifier of the meeting participant. A user interface configured to display meeting participants at display locations is displayed. After a particular meeting participant has joined a first video conferencing session, the user interface displays the particular meeting participant at their assigned display location.
Location determination and telephone number distribution for emergency calls is enabled by a telephony system which maintains multiple pools of telephone numbers. Each pool corresponds to a different region such that the pools of telephone numbers are defined at the region-level rather than at the site-level. The telephony system determines the location of a calling device initiating an emergency call regardless of whether the calling device is at a known site. Based on the determined location of the calling device, one of the pools of telephone numbers which corresponds to that location is selected. The telephony system thereafter distributes a telephone number for the calling device to use for the emergency call from that selected pool of telephone numbers to facilitate an emergency call between the calling device and a local public safety answering point.
In one embodiment, the system connects to a communication session with a number of participants; receives a transcript of a conversation between the participants; extracts utterances from the transcript; associates a subset of the utterances with a first group of speakers and the remaining subset of the utterances with a second group of speakers; calculates one or more statistical metrics for a number of engagement metrics based on the utterances of the first group of speakers and the utterances of the second group of speakers; assigns a weight to each of the engagement metrics; determines an engagement score for the communication session based on the assigned weights for the engagement metrics; and presents, to one or more users, the engagement score for the communication session.
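The weighted scoring described above can be sketched as follows (an illustrative sketch in Python; the metric names and weight values in the example are assumptions, since the abstract does not specify which engagement metrics are used):

```python
def engagement_score(metrics, weights):
    """Combine per-metric statistics into one engagement score using the
    assigned weights (a weighted average; the normalization by total weight
    is an assumption)."""
    total_weight = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_weight
```

For instance, a talk-ratio statistic of 0.5 weighted 1 and a response-time statistic of 1.0 weighted 3 yield a score of 0.875.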
Calls run through a virtual desktop infrastructure server are enhanced by opening a media channel between a personal computing device and a media server for a call initiated using the virtual desktop infrastructure server. A first stream of media data for the call is merged with a second stream of media data for the call in a single virtual channel of the virtual desktop infrastructure protocol, using a first packet queue to store packets of the first stream and a second packet queue to store packets of the second stream as the packets await transmission. A first packet of media data of the first stream is pushed into the first packet queue. A fill level of the first packet queue is compared to a first congest threshold associated with the first packet queue. Responsive to the fill level exceeding the first congest threshold, a congestion mitigation measure is invoked.
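The queue-and-threshold mechanism described above can be sketched as follows (an illustrative sketch in Python; the class name and the boolean congestion flag standing in for a concrete mitigation measure are assumptions):

```python
from collections import deque

class MediaPacketQueue:
    """A packet queue whose fill level is checked against a congest
    threshold on each push; exceeding it flags congestion (in a real
    system this would invoke a congestion mitigation measure)."""

    def __init__(self, congest_threshold):
        self.packets = deque()
        self.congest_threshold = congest_threshold
        self.congested = False

    def push(self, packet):
        self.packets.append(packet)
        if len(self.packets) > self.congest_threshold:
            self.congested = True  # stand-in for a mitigation measure

    def pop(self):
        packet = self.packets.popleft()
        if len(self.packets) <= self.congest_threshold:
            self.congested = False
        return packet
```

Each stream would hold its own queue instance, so congestion on one stream can be mitigated independently of the other within the shared virtual channel.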
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Telecommunication services, namely, providing transmission of electronic calendars via the internet Downloadable calendaring software; downloadable computer software for creating an electronic calendar, scheduling meetings and events, managing calendars, synchronizing calendars, and managing group calendars Software as a service (SAAS) services featuring software for use in creating an electronic calendar, scheduling meetings and events, managing calendars, synchronizing calendars, and managing group calendars; Computer services, namely, hosting on-line interactive public calendars that allow multiple participants to share event schedules
63.
Video Communications Platform Virtual Environment Streaming
Methods, systems, and apparatus, including computer programs encoded on computer storage media relate to a method for casting from a virtual environment to a video communications platform. The system may provide a video conference session in a video conference application. A connection may be established between the video conference application and a VR or AR device. The video conference application may receive 2D video content from the VR or AR device. The 2D video content may comprise a view of a virtual environment. The video conference application may stream the 2D video content in the video conference session.
G03H 1/02 - Holographic processes or apparatus using light, infrared, or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto (Details)
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
64.
User-Specific Security Credential Bypass For Shared Device Voicemail
A shared device voicemail box can be accessed from an unauthenticated device without a user-specific security credential. The device transmits a request to a server. The request includes a unique code based on an image. The device accesses the voicemail box based on an access grant received from the server. Using the image, a non-registered user of a telephony system can access a secured voicemail box.
Meeting controls are provided for network conferences. Providing meeting controls includes maintaining a policy database and receiving a request to participate in a conference. The request includes an identifier. The meeting controls are transmitted to a user device based on the request.
Sessions in progress are seamlessly moved between devices of a software platform. Proximity-based session handovers are performed between devices of the software platform utilizing ultrasound signals. The ultrasound signals include a frequency signature. The frequency signature is associated with a stationary device. A handover of a session in progress from a mobile device to the stationary device is performed based on detection of the ultrasound signals.
H04W 4/48 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication
67.
Transparent frame utilization in video conferencing
Transparent frames are utilized in a video conferencing session. The transparent frames are used in conjunction with transparent screens and cameras positioned behind the transparent screens. Streamed videos of one or more remote participants are displayed on the transparent screens in every other frame of the video such that the transparent screen alternates between displaying a frame of the remote participant and a transparent frame. Video frames of the one or more participants are captured during the display of the transparent frames. The captured video frames of the participants are displayed at one or more remote screens viewed by the one or more remote participants.
A server generates a recording of a conference. The server obtains, after termination of the conference, permission from at least one participant device of the conference to store a subset of media generated by the at least one participant device during the conference in a data repository. The permission specifies at least one media type to store in the data repository and at least one media type not to store in the data repository. The server stores, in the data repository, the recording of the conference modified based on the obtained permission.
Voicemail spam detection is performed based on content of voicemail messages. The content of an incoming voicemail message is compared to a spam template that includes a representation of a spam voicemail. Spam templates may be generated based on spam indications provided by users for voicemail messages they have received. User indications for sufficiently similar voicemail messages may be aggregated by maintaining a vote count for a spam template that reflects how many times a user has indicated a matching voicemail message is spam. A spam template may also include an occurrence count that reflects how many times voicemail messages matching a spam template have been detected in a telephony system. An incoming voicemail message may be compared to spam templates and, responsive to a match of content and/or a corresponding vote count or occurrence count meeting a condition, the voicemail message may be identified as spam.
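The template matching, vote count, and occurrence count described above can be sketched as follows (an illustrative sketch in Python; the token-overlap similarity measure and both threshold values are assumptions, since the abstract does not specify how content is compared or what condition must be met):

```python
def is_spam(message_text, templates, vote_threshold=3, similarity_threshold=0.8):
    """Match a voicemail transcript against spam templates.

    Each template carries `votes` (user spam reports) and `occurrences`
    (matches seen by the telephony system). A message is flagged as spam
    when it matches a template whose vote count meets the threshold.
    """
    words = set(message_text.lower().split())
    for template in templates:
        template_words = set(template["text"].lower().split())
        overlap = len(words & template_words) / max(len(words | template_words), 1)
        if overlap >= similarity_threshold:
            template["occurrences"] += 1  # track how often this template matches
            if template["votes"] >= vote_threshold:
                return True
    return False
```

A matching message increments the template's occurrence count even when the vote count is still below the threshold, so the system accumulates evidence before flagging.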
A server determines a number of devices preceding a user device in a user queue of devices for communication with a contact center agent device. The server determines a number of contact center agent devices available for the communication. The server calculates an estimated wait time for the user device based on the number of devices preceding the user device, the number of contact center agent devices, and wait times of user devices, distinct from the user device, requesting communications with the contact center agent device.
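The estimate described above can be sketched as follows (an illustrative sketch in Python; the exact formula combining the three inputs is an assumption, since the abstract only names the factors used):

```python
def estimated_wait(position_in_queue, available_agents, recent_wait_times):
    """Estimate a user device's wait time from its queue position, the
    number of available agent devices, and observed waits of other
    queued devices (averaged here as a simplifying assumption)."""
    if available_agents == 0:
        return None  # no basis for an estimate without available agents
    average_wait = sum(recent_wait_times) / len(recent_wait_times)
    return (position_in_queue / available_agents) * average_wait
```

For example, a device fourth in queue with two available agents and recent waits of 100 and 200 seconds would see an estimate of 300 seconds.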
A virtual background associated with a first user of a virtual meeting is identified. In response to identifying the virtual background associated with the first user, a device associated with a second user of the virtual meeting is synchronized to use the virtual background during the virtual meeting. A composite video depicting the first user or the second user overlaid on the virtual background is generated for display on a device connected to the virtual meeting.
A machine learning model (e.g., including a deep learning neural network) with learned embeddings is applied to time series data with associated metadata to obtain predictions of the time series value. For example, a call volume in a period of time may be predicted based on call volume data for a sequence of time bins in a window of preceding time. Time bins may be associated with respective metadata, such as day of week, hour of day, day of month, holiday, part of business cycle, weather, and/or tide. These pieces of metadata may be mapped to embedding vectors using trained embedding functions. The resulting embedding vectors may be input to a neural network along with the corresponding time series data (e.g., call volumes) to make a prediction for a future time bin. For example, the prediction may be used to provision servers in a network infrastructure.
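The embedding-then-predict pipeline described above can be sketched as follows (an illustrative sketch in Python; the embedding tables, the two-dimensional embedding size, and the linear layer standing in for the neural network are all assumptions, and a real system would learn these values):

```python
# Hypothetical trained embedding tables mapping metadata to vectors.
DAY_EMBED = {"mon": [0.2, 0.1], "tue": [0.3, 0.0]}
HOUR_EMBED = {9: [0.5, 0.4], 10: [0.6, 0.2]}

def predict_call_volume(history, day, hour, weights):
    """Concatenate recent call volumes with metadata embedding vectors and
    apply a single linear layer (in place of the neural network)."""
    features = list(history) + DAY_EMBED[day] + HOUR_EMBED[hour]
    return sum(f * w for f, w in zip(features, weights))
```

The key idea the sketch preserves is that categorical metadata (day, hour) enters the model as learned vectors alongside the raw call-volume window, rather than as raw category codes.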
A method includes connecting a call from a client device to a destination having an interactive voice response service; transcribing audio from the destination during the call to identify menu options of the interactive voice response service; generating visualizations representing the menu options; and outputting the visualizations to a display associated with the client device. A system includes a telephony system, an automatic speech recognition processing tool, and a visualization output generation tool. The telephony system connects a call from a client device to a destination having an interactive voice response service. The automatic speech recognition processing tool transcribes audio from the destination during the call to identify menu options of the interactive voice response service. The visualization output generation tool generates visualizations representing the menu options. The telephony system outputs the visualizations to a display associated with the client device.
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
H04M 3/493 - Interactive information services, e.g. directory enquiries
H04M 3/56 - Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
Methods and systems provide for user-prompted group actions in a shared space hosted by a communication platform. In one embodiment, the system detects, by a processing device within a space, a client device within the space, the client device being associated with a participant of a current communication session; receives, at the processing device, a user input corresponding to the client device, the user input indicating a change in function for the current communication session; correlates, at the processing device, the user input with one or more additional user inputs corresponding to other client devices within the space; and applies the change in function based on the correlation to modify one or more aspects of the communication session.
Whiteboard roles controlling levels of access to functionality of a digital whiteboard shared to a video conference for participants of the video conference are inherited based on conference roles of those participants within the video conference. Based on a request to share a digital whiteboard to a video conference, a whiteboard role is determined for each participant of the video conference based on a conference role of the participant within the video conference. For each participant during the video conference, access to functionality of the digital whiteboard corresponding to the whiteboard role determined for the participant is enabled, in which different functionality of the digital whiteboard is enabled for different whiteboard roles.
A manipulation of a media stream associated with a manipulated participant of a communication session is identified. A notification of the manipulation is transmitted to a first participant of the communication session. An approval indication of the manipulation is received from the first participant. A determination is made that the approval indication indicates a disapproval of the manipulation. A request to disable the manipulation is transmitted to a second participant of the communication session.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
77.
Priority Score-Based Connection Of User Device To Agent Device
A contact center server receives indicia of user devices accessing the server. The contact center server determines that a contact center agent device is available for communicating with a user device of the user devices. The contact center server calculates, for at least a subset of the user devices, a priority score based on an elapsed time since the user device initiated a contact center engagement and a priority level of an account associated with the user device. The contact center server selects, based on the calculated priority scores, a first user device for communicating with the available agent device. The contact center server connects the first user device to the available agent device via the contact center server.
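The scoring step described in this abstract can be read as a weighted combination of elapsed wait time and account priority. A minimal sketch, assuming a linear form with illustrative weights and field names (the abstract specifies neither):

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    device_id: str
    started_at: float      # epoch seconds when the engagement began
    account_priority: int  # e.g. 1 (standard) .. 3 (premium); assumed scale

def priority_score(e: Engagement, now: float, wait_weight: float = 1.0,
                   account_weight: float = 60.0) -> float:
    """Score grows with elapsed wait time and with account priority.

    The linear combination and both weights are illustrative assumptions.
    """
    elapsed = now - e.started_at
    return wait_weight * elapsed + account_weight * e.account_priority

def select_device(queue: list[Engagement], now: float) -> Engagement:
    # The highest-scoring waiting device is connected to the free agent.
    return max(queue, key=lambda e: priority_score(e, now))
```

Under these weights, a recently arrived premium account can outrank a longer-waiting standard account, which matches the abstract's blend of elapsed time and account priority.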
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of on-line non-downloadable software development tools for designing and automating business processes; Providing temporary use of on-line non-downloadable software development tools for designing and automating business processes using artificial intelligence (AI)
79.
DYNAMIC DETERMINATION OF VIDEO STREAM QUALITY DISCREPANCIES IN A COMMUNICATION SESSION
Methods and systems provide dynamic adjustments for video optimization in a communication session. In one embodiment, the system receives, at a server, an outgoing video stream from a transmitting device to a receiving device; identifies a first set of video parameter data corresponding to the quality of the outgoing video stream; receives, from the receiving device, a second set of video parameter data corresponding to the quality of the video stream as received; determines one or more quality discrepancies between the first set of video parameter data and the second set of video parameter data; and provides notification of at least a subset of the quality discrepancies to one or both of the transmitting device and the receiving device.
Methods and systems provide dynamic adjustments for video optimization in a communication session. In one embodiment, the system receives a video stream from a transmitting device; transmits the video stream to one or more client devices, the video stream having associated video transmission quality parameters; receives, from the one or more client devices, initial video display quality parameters related to the video stream; determines a video stream adjustment by comparing the video transmission quality parameters with the video display quality parameters; generates a modified video stream by applying the video stream adjustment to the video stream; transmits the modified video stream to the one or more client devices; receives updated video display quality parameters related to the modified video stream; and compares the updated video display quality parameters to the initial video display quality parameters to determine that one or more video quality metrics have increased.
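The comparison of transmission-side and display-side parameters in the two abstracts above can be sketched as a per-parameter check. The parameter names, the relative-drop metric, and the tolerance below are all assumptions for illustration:

```python
def quality_discrepancies(tx_params: dict[str, float],
                          rx_params: dict[str, float],
                          tolerance: float = 0.1) -> dict[str, float]:
    """Return per-parameter relative drops from transmitted to displayed quality.

    Parameter names (e.g. "bitrate_kbps", "fps"), the relative-drop metric,
    and the 10% tolerance are illustrative assumptions.
    """
    discrepancies = {}
    for name, sent in tx_params.items():
        got = rx_params.get(name)
        if got is None or sent == 0:
            continue  # no comparable received value for this parameter
        drop = (sent - got) / sent
        if drop > tolerance:  # only report drops beyond the tolerance
            discrepancies[name] = drop
    return discrepancies
```

A server could notify either endpoint of the returned discrepancies, or use them to derive a stream adjustment as in the second embodiment.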
A conference system is configured to identify a local conference participant who is speaking in a user interface of a remote conference participant. The conference system captures audio data of a conference participant who is speaking in a physical space having at least two conference participants. An identity of the conference participant is determined based on sensor data captured by a sensor device associated with the physical space. The conference system generates an output configured to cause a client application to present a representation of the identity of the conference participant who is speaking concurrently with a presentation of speech audio of the conference participant who is speaking.
A media stream associated with a participant of a communication session is received. A biometric marker is generated for the participant based on the media stream. User profiles associated with the biometric marker are identified in a biometrics reference library. A determination is made as to whether a cardinality of the user profiles exceeds a threshold number. Responsive to determining that the cardinality of the user profiles exceeds the threshold number, another participant is notified of a possible inauthenticity of the participant.
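The cardinality check in this abstract reduces to counting the user profiles a biometric marker maps to. A minimal sketch, where the marker-to-profiles mapping and the threshold value are assumptions:

```python
def flag_possible_inauthenticity(marker: str,
                                 library: dict[str, set[str]],
                                 threshold: int = 1) -> bool:
    """Flag a participant when one biometric marker maps to more user
    profiles than the threshold allows.

    The dict-of-sets library shape and the threshold of 1 are assumptions;
    the abstract only says the cardinality is compared to a threshold.
    """
    profiles = library.get(marker, set())
    return len(profiles) > threshold
```

When the function returns `True`, the system would notify another participant of the possible inauthenticity, per the abstract.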
A request from a first device of a first communication session participant to connect to a second device of a second communication session participant is received at communications software. The communications software connects the first device to the second device and determines a level of authenticity for the first communication session participant based on a communications history of at least one of the first communication session participant or the second communication session participant. The communications software notifies the second communication session participant of the level of authenticity.
Temporary access to a digital whiteboard is enabled for conference participants during a conference (e.g., a video conference). Based on a request to share a digital whiteboard with conference participants during a conference, ones of the conference participants to which to grant temporary access permissions for the digital whiteboard are determined. The temporary access permissions are granted to enable the ones of the conference participants to access the digital whiteboard during the conference. Based on an event occurrence, which may, for example, correspond to a termination of the conference or a disconnection of an owner of the digital whiteboard from the conference, the temporary access permissions are revoked to restrict further access to the digital whiteboard by the ones of the conference participants.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A server receives a query for connecting a user device to an agent device at a contact center server. The server determines, based on the query, a set of features of an agent associated with the agent device. The server determines that the agent having the set of features is available. The server calculates a priority score for the user device based on an elapsed time since the user device initiated a contact center engagement and additional stored data associated with the user device. The server connects the user device to the agent device based on the agent being available and based on the priority score for the user device exceeding a priority score for at least one other user device.
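The feature-matching step in this abstract amounts to filtering agents by the features the query requires before the priority comparison. A sketch under assumed field names (the abstract does not define the agent record):

```python
from typing import Optional

def find_available_agent(required_features: set[str],
                         agents: list[dict]) -> Optional[dict]:
    """Return the first available agent whose feature set covers the
    features derived from the query, or None if no such agent exists.

    The "available" and "features" keys are illustrative assumptions.
    """
    for agent in agents:
        if agent["available"] and required_features <= agent["features"]:
            return agent
    return None
```

Once an agent passes this filter, the server would connect the highest-priority-score user device to that agent, as the abstract describes.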
A conference participant is alerted as to an event during a conference responsive to a determination that a focus of the conference participant is other than on the conference. During the conference, an event associated with the conference participant is detected based on a real-time transcription of the conference. For example, the event may relate to a topic relevant to the conference participant or a request associated with a name of the conference participant. A determination is made that a focus of the conference participant is other than on the conference based on information associated with a device of the conference participant, such as input received from a camera associated with the device or a setting of an audio output device associated with the device. Based on that determination and the detected event, output is presented to alert the conference participant as to the event.
A first conference participant of a conference that includes a whiteboard is identified. A first viewport into the whiteboard is identified. The first viewport is displayed at a device of the first conference participant. A second viewport is set based on the first viewport. The second viewport is displayed at a device of a second conference participant. The first conference participant may be identified based on a trigger associated with the first conference participant. The first conference participant may be identified based on speech in an audio stream received from the device of the first conference participant. The first conference participant may be identified based on an edit to the whiteboard received from the device of the first conference participant.
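The viewport-following behavior in this abstract can be modeled as copying the leader's view to each follower. The coordinate fields and the direct-copy policy are assumptions; the abstract leaves the exact relationship between the two viewports open:

```python
from dataclasses import dataclass, replace

@dataclass
class Viewport:
    x: float     # top-left corner in whiteboard coordinates (assumed model)
    y: float
    zoom: float

@dataclass
class Participant:
    pid: str
    viewport: Viewport

def sync_viewports(leader: Participant, followers: list[Participant]) -> None:
    """Set each follower's viewport based on the leader's current view.

    A direct copy is the simplest mapping; an implementation might instead
    scale or clamp the follower's view.
    """
    for p in followers:
        p.viewport = replace(leader.viewport)  # independent copy per follower
```

The trigger that selects the leader (speech in an audio stream, a whiteboard edit) would simply determine which participant is passed as `leader` here.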
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A contact center server receives, during a contact center engagement and via a chat bot of the contact center server, a query from a user device. The contact center server determines that the query corresponds to a stored prompt associated with a stored response in a contact center knowledgebase. The contact center server provides, via the chat bot, the stored response to the user device.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
A video conference that is in progress is run through a device, such as a portable device. Video of the video conference is displayed on a display of the device. The device detects when a connection is made between the device and an external display during the video conference that is in progress. When the device detects the connection between the device and the external display, the device causes the video of the video conference to be displayed on the external display. When the video of the video conference is displayed on the external display, a control panel is displayed on the display of the device. The device retains the connection to the video conference while the video of the video conference is displayed on the external display and the control panel is displayed on the display of the device.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
A system may assign a set of one or more actions to a programmable button of a contact center device connected to an agent device of a contact center agent. The set of one or more actions may correspond to a workflow associated with a contact center engagement between a contact center user and the contact center agent. The set of one or more actions may be assigned based on at least one of activating the contact center device or connecting the contact center device to a network. The system may initiate, based on a selection of the programmable button, the workflow by transmitting, from the contact center device to the agent device, information configured to cause the agent device to perform the set of one or more actions.
A request is received. The request identifies a first conference participant of a conference as a follow-along participant for a second conference participant of the conference for which a whiteboard is enabled. A trigger associated with the first conference participant is identified. Responsive to the identified trigger, a second viewport is set for display at a device of the second conference participant. The second viewport is based on a first viewport into the whiteboard where the first viewport is displayed at a device of the first conference participant.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A server receives a query from a user device for posting to a communication modality. The server determines, using a chat bot associated with the communication modality, that the query corresponds to a stored question associated with a stored answer in a knowledgebase. The server provides, via the chat bot, the stored answer to the user device with a prompt to forgo posting the query to the communication modality.
An incognito mode limits access to subject digital whiteboard content by one or more participants of a digital whiteboard session. During an active digital whiteboard session, a content item is added to a digital whiteboard, based on input obtained from a participant device connected to the active digital whiteboard session, in an incognito mode that limits access to the content item by other participant devices connected to the active digital whiteboard session. At some point thereafter during the active digital whiteboard session, a determination is made to change access privileges for the content item to revoke the incognito mode designation thereof.
The determination may be on a time-basis or an event-basis. The access privileges are thus changed during the active digital whiteboard session to enable concurrent access to the content item within the digital whiteboard by the participant device and at least one of the other participant devices.
A contact center server accesses queries provided by user devices. The contact center server determines a confidence score representing a likelihood that a query of the queries matches an intent using an intent matching engine. The contact center server matches, via the intent matching engine based on the confidence score exceeding a threshold, the query to the intent. The contact center server provides, to a client device, a subset of the queries that are not matched to any intent. The subset is identified based on confidence scores. The contact center server receives, from the client device, data representing an intent that matches at least one query in the subset. The contact center server trains the intent matching engine based on the received data.
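The triage described in this abstract splits queries into confidently matched intents and an unmatched subset handed to a reviewer for labeling. A sketch, where `score_fn` stands in for the intent matching engine and its interface is an assumption:

```python
def match_and_triage(queries: list[str],
                     score_fn,                 # returns (intent, confidence)
                     threshold: float = 0.8):
    """Split queries into matched (query, intent) pairs and an unmatched
    subset for review, ordered lowest-confidence first.

    The callable interface of score_fn and the 0.8 threshold are
    illustrative assumptions.
    """
    matched, unmatched = [], []
    for q in queries:
        intent, confidence = score_fn(q)
        if confidence > threshold:
            matched.append((q, intent))
        else:
            unmatched.append((q, confidence))
    # Lowest-confidence queries first: these are the least understood.
    unmatched.sort(key=lambda item: item[1])
    return matched, [q for q, _ in unmatched]
```

The labels a reviewer supplies for the unmatched subset would then be used to retrain the matching engine, closing the loop the abstract describes.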
A computer assigns a participant of a video conference to a virtual breakout room associated with the video conference. The computer determines an availability of a physical space for the virtual breakout room. The computer allocates the physical space for use in connection with the virtual breakout room. The computer captures an image of the participant from a video stream associated with the video conference. The computer displays the image of the participant on a display of a computing device located in the physical space.
A conference schedule system implements automated privacy controls for a schedule view of a shared conference space digital calendar. The conference schedule system determines an identity of a person viewing a display screen configured to display a schedule view of a digital calendar associated with a shared conference space, accesses event permissions associated with an event scheduled for the shared conference space, and modifies a selective output of information associated with the event in the schedule view displayed at the display screen based on the event permissions and the identity of the person.
A first image of a first video stream and a second image of a second video stream are displayed at a device. The first video stream and the second video stream are video streams of a video conference. A stitching image is identified in preview images received from a camera of the device. The stitching image is transmitted to a conferencing software associated with the video conference.
A server obtains a natural language query from a first user device. The server matches an intent to the natural language query using an intent matching engine. The intent represents predicted data associated with the natural language query. The server transmits the natural language query and the intent to a second user device. The server receives, from the second user device, a response indicating whether the natural language query is properly matched to the intent. The server trains the intent matching engine based on a machine learning technique and the response.
A shared conference space system is configured for use with a shared conference space. The shared conference space system determines identities of a set of conference participants of a shared conference space using a sensor. The shared conference space system associates the identities of the set of conference participants to a shared conference space digital calendar associated with the shared conference space and outputs to a display of the shared conference space system a portion of the shared conference space digital calendar including the identities of the set of conference participants.
A system may receive, from a computing device that is not connected to a conference to which one or more participant devices are connected, a message including text entered by a user of the computing device. The system may determine a permission for enabling communications between the computing device and the one or more participant devices by authenticating a credential associated with the computing device. The system may transmit, based on the permission, the message to the one or more participant devices during the conference. In some implementations, the system may invoke speech synthesis software during the conference to produce machine-generated speech representative of the message. The speech synthesis software may use a spoken voice model of the user, generated using recorded voice samples of the user, to produce the machine-generated speech.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination