The embodiments of the disclosure provide a method for rendering a virtual object, a host, and a computer readable storage medium. The method includes: determining a plurality of regions in an environment; determining lighting information of each of the plurality of regions; obtaining a to-be-rendered virtual object and selecting at least one candidate region corresponding to the to-be-rendered virtual object among the plurality of regions; determining reference lighting information based on the lighting information of each of the at least one candidate region; and rendering the to-be-rendered virtual object based on the reference lighting information.
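The candidate-selection and lighting-combination steps can be sketched as follows. This is a minimal illustration, not the filing's actual algorithm: the scalar lighting model, the radius-based candidate selection, and all names are assumptions.

```python
# Hypothetical sketch: region lighting is precomputed, candidate regions near
# the object are selected, and their lighting is combined into reference
# lighting used for rendering. All names and models are illustrative.
from dataclasses import dataclass

@dataclass
class Region:
    center: tuple          # (x, y, z) region center in the environment
    lighting: float        # simplified scalar lighting intensity

def select_candidates(regions, obj_pos, radius):
    """Pick regions whose centers lie within `radius` of the object."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return [r for r in regions if dist2(r.center, obj_pos) <= radius ** 2]

def reference_lighting(candidates):
    """Combine candidate-region lighting into one reference value (mean)."""
    return sum(r.lighting for r in candidates) / len(candidates)

regions = [Region((0, 0, 0), 0.2), Region((1, 0, 0), 0.6), Region((5, 5, 5), 1.0)]
cands = select_candidates(regions, obj_pos=(0.5, 0, 0), radius=2.0)
ref = reference_lighting(cands)   # mean of the two nearby regions
```

A weighted average (e.g. by distance) would be an equally plausible reading of "based on the lighting information of each of the at least one candidate region".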
The embodiments of the disclosure provide a method for activating a system function, a host, and a computer readable storage medium. The method includes: providing a visual content; tracking a first motion state of a physical object by using a tracking device; and in response to determining that the first motion state of the physical object indicates that a distance between the physical object and the host is less than a first distance threshold and the physical object corresponds to a first content region in the visual content, performing a first system function corresponding to the first content region, wherein the first content region corresponds to a first physical region on a body of the host.
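The two-condition trigger described above (proximity below a threshold AND correspondence with a content region) can be sketched as a simple predicate. The region names and the threshold value here are illustrative assumptions, not from the filing.

```python
# Minimal sketch of the trigger logic: a system function fires only when the
# tracked physical object is within the distance threshold of the host AND
# its position maps onto the matching content region.
def should_trigger(distance_m, region_of_object, first_region="power_button",
                   first_distance_threshold=0.05):
    """True when the object is within threshold and over the first content region."""
    return (distance_m < first_distance_threshold
            and region_of_object == first_region)

fire = should_trigger(0.03, "power_button")    # close and on the region
hold = should_trigger(0.03, "elsewhere")       # close but wrong region
```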
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser; using a touch-screen or digitiser, e.g. input of commands through traced gestures; for inputting data by handwriting, e.g. gesture or text
A spectrum measurement device includes a scanning light receiver, an optical component, and a processor. The scanning light receiver, on a plane, receives a plurality of light beams of a display image sequentially according to a scanning operation to generate a plurality of input light beams sequentially. The optical component receives the input light beams sequentially and generates a plurality of pieces of processed information. The processor obtains luminance and chromaticity information of the display image according to the processed information.
The embodiments of the disclosure provide a method for managing data drop rate, a client device, and a computer readable storage medium. The method includes: determining, by the client device, a first data component corresponding to a t-th time point and a second data component, wherein the first data component belongs to a first data type, and the second data component belongs to the first data type or a second data type; sending, by the client device, a first data packet to a host at the t-th time point, wherein the first data packet comprises a first payload having a fixed size, and the first payload comprises the first data component and the second data component.
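The fixed-size payload carrying two data components can be illustrated with a packed binary layout. The field types and sizes below are assumptions for illustration; the filing does not specify a wire format.

```python
# Illustrative sketch: each packet carries a fixed-size payload holding two
# data components, so a component of one type can ride along with a component
# of the same or another type without changing the payload size.
import struct

PAYLOAD_FMT = "<If"          # assumed layout: uint32 first component, float32 second
PAYLOAD_SIZE = struct.calcsize(PAYLOAD_FMT)   # always 8 bytes

def build_packet(first_component: int, second_component: float) -> bytes:
    """Pack both components into one fixed-size payload."""
    payload = struct.pack(PAYLOAD_FMT, first_component, second_component)
    assert len(payload) == PAYLOAD_SIZE       # size never varies with content
    return payload

def parse_packet(payload: bytes):
    return struct.unpack(PAYLOAD_FMT, payload)

pkt = build_packet(42, 1.5)
first, second = parse_packet(pkt)
```

A fixed payload size is what lets the receiver detect a drop purely from timing, since every time point contributes exactly one packet of known length.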
H04L 1/08 - Arrangements for detecting or preventing errors in the information received by repeating transmission, e.g. Verdan system
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A head-mounted display device including a display part and a headrest module is provided. The headrest module includes a base, a first driver, a first soft pad, a first sensor and a controller. The base is pivotally connected to the display part. The first driver is connected between the base and the display part, and is used to drive the base to rotate relative to the display part. The first soft pad is disposed on the base and used to contact a forehead of a user. The first sensor is disposed on the first soft pad and is used to sense the pressure exerted by the forehead on the first soft pad. The controller is electrically connected to the first sensor and the first driver, and is used to control the first driver to drive the base or stop driving the base according to the sensing result of the first sensor.
A wearable device includes a host, a pair of temple arms, and a head strap module. The pair of temple arms are connected to two opposite sides of the host. The head strap module includes a pair of connectors, a pair of swivel rings, a pair of buckles, and a support strap. The pair of connectors are respectively detachably connected to the pair of temple arms. The pair of swivel rings are respectively connected to the pair of connectors. The support strap has a bridging section, a limiting section, and a pair of extension sections. The limiting section is located at a first end of the bridging section. One of the pair of extension sections is connected to a second end of the bridging section opposite to the first end and extended through the swivel ring, the buckle, and the limiting section sequentially and fixed to the buckle. The other of the pair of extension sections is connected to the second end of the bridging section and extended through the other swivel ring, the other buckle, and the limiting section sequentially and fixed to the other buckle. The pair of buckles are respectively moveable on the pair of extension sections to adjust an overlapping length of the pair of extension sections between the pair of buckles. A head strap module is also provided herein.
A head-mounted device and a headband are disclosed. The head-mounted device includes a host and a headband. The host has two connecting parts located on opposite sides. The headband has two connecting ends located on opposite sides. The connecting ends are detachably and rotatably connected to the connecting parts. A first buckle part of each of the connecting ends rotatably buckles a second buckle part of a corresponding one of the connecting parts. When the headband rotates to a detachable position relative to the host, the first buckle part and the second buckle part are separated.
A data classification method includes following steps. Text samples are obtained from a dataset. The text samples are converted into text embeddings in a semantic space. An outlier-inlier ranking of the text samples is generated based on an outlier detection algorithm according to distances between the text embeddings in the semantic space. Partial samples are selected from the text samples according to the outlier-inlier ranking. A manual input command is received to assign manual-input labels on the partial samples. A prompt message is generated according to the partial samples with the manual-input labels and unlabeled samples of the text samples. The prompt message is provided to a generative pre-trained transformer model for generating inlier-outlier prediction labels about the unlabeled samples.
A head-mounted display device is provided. The head-mounted display device includes a display, an optical system, and a processor. The display is configured to display a pre-warp image. The optical system is configured to receive the pre-warp image and output an undistorted image. The processor is configured to perform a tolerance enhancement operation on an original image to generate an enhanced image. Further, the processor is configured to apply a software distortion on the enhanced image to generate the pre-warp image. The software distortion is configured to compensate for an optical distortion of the optical system.
A wearable device includes a PCB (Printed Circuit Board), a fan element, a radar module, an IMU (Inertial Measurement Unit), and a processor. The fan element is disposed on the PCB. The radar module is adjacent to the fan element. The radar module detects the rotation state of the fan element, so as to generate a detection signal. The IMU is disposed on the PCB. The IMU measures the movement state of the PCB, so as to generate measurement data. The processor calibrates the measurement data according to the detection signal, so as to output a calibration measurement result.
A controller includes a body and a surrounding part. The body has a control area for sending a control signal according to a movement of a thumb of a user. The surrounding part is connected to the body and used to surround and be fixed to a proximal phalange of an index finger of the user. The body is away from a joint between the proximal phalange and a metacarpal bone of the user.
A detection device for detecting an object includes a camera module, an image processing module, and a radar module. The camera module obtains an image of the object. The image processing module analyzes the image, so as to define a target sensing zone and generate a radar setting value. The radar module is controlled by the image processing module. The radar module is selectively operated in a first resolution mode or a second resolution mode. Initially, the radar module is operated in the first resolution mode. In the second resolution mode, the radar module detects a specific portion of the object within the target sensing zone according to the radar setting value.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
A tracking system is provided. A head-mounted display device is adapted to be mounted on a head of a user and comprises a camera and a processor. The camera obtains a body image including a body portion of the user. The processor is configured to: determine a relative position between a body motion sensor of a body tracker and the body portion of the user based on the body image; and perform a target tracking of the body portion based on the relative position and sensor data of the body motion sensor of the body tracker. A body tracker is adapted to be mounted on the body portion of the user and comprises a body motion sensor. The body motion sensor obtains the sensor data. The sensor data indicates a movement and/or a rotation of the body portion.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
17.
Display screen or portion thereof with graphical user interface
A brightness adjustment system is provided. The brightness adjustment system includes a camera, a direction sensor, and a processor. The camera is configured to obtain an image based on an exposure setting and gain. The direction sensor is configured to obtain sensor data. The processor is configured to determine a current direction which the camera is facing based on the sensor data and adjust the exposure setting of the camera or the gain based on the current direction.
The embodiments of the disclosure provide a tracking accuracy evaluating system, a tracking accuracy evaluating device, and a tracking accuracy evaluating method. The method includes: detecting, by a distance sensor, multiple distances between the tracking accuracy evaluating device and multiple reference positions in a rotating process associated with a rotating axis, wherein an accommodating space of the tracking accuracy evaluating device accommodates a tracking device during the rotating process, and the distance sensor, the rotating axis, and the tracking device accommodated in the accommodating space have a fixed relative position therebetween; estimating a first pose variation of the tracking device during the rotating process based on the distances and the fixed relative position; obtaining a second pose variation of the tracking device during the rotating process; and determining a tracking accuracy of the tracking device based on the first pose variation and the second pose variation.
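The final comparison step admits a very simple reading: the reference pose variation (estimated from the distance measurements) and the device-reported pose variation are compared, and their discrepancy is the accuracy figure. The angular representation and function names below are illustrative assumptions.

```python
# Hedged sketch of the comparison step: the first (reference) pose variation
# and the second (device-reported) pose variation are compared, and their
# absolute difference serves as the tracking error metric.
def tracking_accuracy(reference_variation_deg, reported_variation_deg):
    """Absolute angular error between reference and reported rotation."""
    return abs(reference_variation_deg - reported_variation_deg)

# A 90-degree turn that the tracker under test reports as 88.5 degrees.
err = tracking_accuracy(90.0, 88.5)
```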
The embodiments of the disclosure provide a method for providing a visual cue in a visual content, a host, and a computer readable storage medium. The method includes: providing, by the host, the visual content, wherein the visual content comprises a virtual bearing object; determining, by the host, a relative position between a reference point and the virtual bearing object; and in response to determining that the relative position meets a predetermined condition, modifying, by the host, an appearance of the virtual bearing object based on the relative position.
A wearable device includes a carrier element, a wearable element, a SIP (System-In-Package) IC (Integrated Circuit), a first antenna element, and a second antenna element. The wearable element is connected to the carrier element. The SIP IC includes a first transceiver and a second transceiver. The SIP IC is disposed on the carrier element. The first antenna element is coupled to the first transceiver. The first antenna element is integrated with the SIP IC. The second antenna element is coupled to the second transceiver. The second antenna element is integrated with the carrier element.
A head-mounted device and a retractable headband are provided. The head-mounted device includes a host, a retractable headband, and an earphone. The host has a first connection part and a second connection part located on two opposite sides. The retractable headband has an earphone connection part and a first connection end and a second connection end located on two opposite sides. The first connection end is connected to the first connection part. The second connection end is connected to the second connection part. When the retractable headband is elongated by an elongation amount, a change in a distance between the earphone connection part and the first connection end is less than half of the elongation amount. The earphone is connected to the earphone connection part.
The present disclosure provides a localization method and a wearable device. The localization method is applicable to the wearable device, and includes: obtaining environment information related to an environment where the wearable device is located; determining a target map area in a map of the environment according to the environment information; and locating the wearable device in the map of the environment according to the target map area.
A calibration method is disclosed. The calibration method is suitable for an electronic device comprising an HMD (head-mounted device). The calibration method includes the following operations: obtaining a first eye information of a first eye of a user when the user is gazing at a first calibration gazing point; and calculating at least one of a second eye information of the first eye of the user and a third eye information of a second eye of the user by mirror symmetrizing the first eye information.
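The mirror-symmetrizing operation can be sketched under one concrete assumption: that the eye information is a 3D gaze vector in a head-fixed frame whose x axis runs from the left eye to the right eye. Mirroring across the sagittal (x = 0) plane then flips the x component, predicting the other eye's gaze for the same calibration point. The representation is an assumption; the filing leaves "eye information" abstract.

```python
# Minimal sketch: mirror a gaze vector across the sagittal plane to derive
# the second eye's information from the first eye's measurement.
def mirror_gaze(gaze):
    """Mirror a gaze vector (x, y, z) across the sagittal (x = 0) plane."""
    x, y, z = gaze
    return (-x, y, z)

left_eye_gaze = (0.2, -0.1, 0.97)
predicted_right = mirror_gaze(left_eye_gaze)   # x component sign flipped
```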
A floating projection device includes a mirror structure, an optical structure, an optical coating and a display. The optical structure covers the mirror structure, and forms an accommodating space with the mirror structure. The optical coating is disposed on the optical structure. The display is disposed in the accommodating space formed between the optical structure and the mirror structure, and is configured to transmit a plurality of image light beams to the optical structure.
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
G02B 1/10 - Optical coatings produced by application to, or surface treatment of, optical elements
An image segmentation method includes following steps. An input image is provided to a prompter model for generating a first prompt indicator according to a task type of the prompter model. A prompt enhancement procedure, with reference to the task type of the prompter model, is performed on the first prompt indicator for generating a second prompt indicator. The prompt enhancement procedure includes converting a location, a size or a prompt type of the first prompt indicator into the second prompt indicator with reference to the task type. The input image and the second prompt indicator are provided to a segmentation foundation model for generating an output segmentation mask on the input image according to the second prompt indicator.
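The conversion of location, size, or prompt type can be sketched as below. The task-type names, the dictionary representation, and the 10% expansion factor are all illustrative assumptions, not the method's actual API.

```python
# Hedged sketch of prompt enhancement: a first prompt (here a bounding box)
# is converted into a second prompt whose type or size fits the task, e.g. a
# center point for a point-prompted segmentation model, or an enlarged box.
def enhance_prompt(box, task_type):
    """Convert a box prompt (x0, y0, x1, y1) according to the task type."""
    x0, y0, x1, y1 = box
    if task_type == "point":                 # point-prompted model: use box center
        return {"type": "point", "xy": ((x0 + x1) / 2, (y0 + y1) / 2)}
    if task_type == "box_expand":            # size change: grow box 10% per side
        w, h = x1 - x0, y1 - y0
        return {"type": "box",
                "xyxy": (x0 - 0.1 * w, y0 - 0.1 * h, x1 + 0.1 * w, y1 + 0.1 * h)}
    return {"type": "box", "xyxy": box}      # unchanged prompt type

second = enhance_prompt((10, 20, 30, 60), "point")
```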
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/96 - Management of image or video recognition tasks
A pose calculating apparatus and method are provided. The pose calculating apparatus receives a plurality of real-time images and a plurality of inertial measurement parameters corresponding to at least one inertial sensor worn by a user. The pose calculating apparatus determines a pose calculating mode corresponding to each of a plurality of body regions of the user based on the real-time images and the inertial measurement parameters, wherein the pose calculating mode corresponds to a static mode or a motion mode. The pose calculating apparatus calculates a pose corresponding to each of the body regions based on the pose calculating mode corresponding to each of the body regions.
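The per-region static/motion mode decision can be sketched from inertial data alone. The variance-of-acceleration criterion and the threshold value are assumptions for illustration; the filing combines images and inertial parameters without specifying the rule.

```python
# Illustrative sketch: per body region, the variance of recent accelerometer
# magnitudes decides between a static mode (pose held or smoothed) and a
# motion mode (pose updated from live sensor data).
def select_mode(accel_magnitudes, threshold=0.05):
    """Return 'static' when acceleration barely varies, else 'motion'."""
    n = len(accel_magnitudes)
    mean = sum(accel_magnitudes) / n
    var = sum((a - mean) ** 2 for a in accel_magnitudes) / n
    return "static" if var < threshold else "motion"

modes = {
    "left_arm": select_mode([9.80, 9.81, 9.79, 9.80]),    # near-constant gravity
    "right_arm": select_mode([9.8, 12.4, 7.1, 11.0]),     # swinging arm
}
```

The pose for each region would then be computed by the solver matching its mode, e.g. reusing the last stable pose in static mode and fusing images with inertial data in motion mode.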
G06T 7/70 - Determining position or orientation of objects or cameras
A63B 24/00 - Electric or electronic controls for exercising apparatus of groups
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
29.
COMMUNICATION SYSTEM AND WEARABLE DEVICE AND COMMUNICATION METHOD THEREOF
A communication system includes a contact lens element and a wearable device. The contact lens element has a communication function. The wearable device includes a transceiver circuit, a first metal structure, a second metal structure, and a reflector. The transceiver circuit includes a wireless communication circuit and a wireless charge circuit. The first metal structure is coupled to the wireless communication circuit. The first metal structure is configured to communicate with the contact lens element. The second metal structure is coupled to the wireless charge circuit. The second metal structure is configured to provide electric power for the contact lens element. The reflector is adjacent to the first metal structure and the second metal structure, so as to reflect radiation energy from the first metal structure and the second metal structure.
A simulated configuration evaluation apparatus is provided. The apparatus generates a virtual three-dimensional object placed in a first simulated pose based on a virtual three-dimensional object model in a virtual space, the virtual three-dimensional object includes transmitters, the transmitters are set on the virtual three-dimensional object in a first configuration, and the transmitters are configured to transmit a plurality of first signals. The apparatus receives second signals from the transmitters based on a viewpoint in the virtual space. The apparatus calculates a first estimated pose of the virtual three-dimensional object in the virtual space based on the second signals. The apparatus compares the first estimated pose and the first simulated pose to generate a first evaluating score corresponding to the first configuration.
A body tracking method is provided. The body tracking method includes: obtaining an environment map of a real world around a user; obtaining tracker data from a tracker, wherein the tracker is adapted to be mounted on a foot or a leg of the user; determining a ground shape of a foot location of the user based on the environment map and the tracker data; determining foot step information of the foot based on the ground shape; and determining a body pose of the user based on the foot step information.
A wearable device includes a host and a head strap module. The host has a pair of host connecting ends. The head strap module includes a head strap body and a pair of strengthening assemblies. The head strap body has a pair of head strap connecting ends. The pair of head strap connecting ends are respectively detachably assembled to the pair of host connecting ends. Each of the pair of strengthening assemblies has an outer cover and an inner cover. The outer cover and the corresponding inner cover are connected to each other to jointly cover and hold the corresponding host connecting end and the corresponding head strap connecting end. In addition, a head strap module applied to a wearable device is also provided.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable application programming interface (API) software for use in developing AI (artificial intelligence) platforms, namely, bots, virtual agents and virtual assistants; downloadable artificial intelligence personal assistant software for performing tasks or services on behalf of a user that is activated by user input, location awareness, and online information; downloadable artificial intelligence software for decision-making; downloadable artificial intelligence software for speech or language recognition and translation; downloadable artificial intelligence software for touch recognition; downloadable artificial intelligence software for visual perception; downloadable computer application software for mobile phones featuring software for question answering, text analytics, searching data, conversational artificial intelligence based on artificial intelligence in the fields of artificial intelligence with natural language processing technology; downloadable computer software featuring artificial intelligence (AI) models for customized customer solutions; downloadable computer software featuring artificial intelligence (AI) models optimized for deploying on data processing apparatus; downloadable computer software for creating artificial intelligence (AI) models for customized customer solutions; downloadable computer software for managing files, data sets, and artificial intelligence (AI) models for processing artificial intelligence projects; downloadable computer software for optimizing artificial intelligence (AI) models; downloadable computer software in the field of artificial intelligence, namely, software for building conversational query systems and digital assistants using artificial intelligence; downloadable computer software telecommunication platforms based on artificial intelligence featuring software for question answering, text analytics, searching data, conversational artificial intelligence based on artificial intelligence in the 
fields of artificial intelligence with natural language processing technology; downloadable computer software using artificial intelligence (AI) for use in machine learning models to be trained; downloadable computer software using artificial intelligence (AI) for use in machine learning models trained by dataset; downloadable computer software, downloadable mobile application software, and downloadable computer application software for facilitating interaction and communication between humans and AI (artificial intelligence) platforms, namely, bots, virtual agents and virtual personal assistants; downloadable computer software, downloadable mobile application software, and downloadable computer application software for mobile devices and computers, all for using artificial intelligence for use as a digital companion; downloadable computer software, namely, an interpretive interface for facilitating interaction between humans and machines; downloadable electronic data files featuring artificial intelligence (AI) models for customized customer solutions; downloadable software featuring algorithms for training artificial intelligence (AI) models on existing datasets; downloadable software featuring algorithms for training artificial intelligence (AI) models on new datasets; downloadable software for using artificial intelligence for processing, generation, understanding and analysis of natural language into machine-executable commands; downloadable telecommunications software for artificial intelligence services featuring software for question answering, text analytics, searching data, conversational artificial intelligence based on artificial intelligence in the fields of artificial intelligence with natural language processing technology; recorded computer software featuring artificial intelligence (AI) models for customized customer solutions; recorded computer software featuring artificial intelligence (AI) models optimized for deploying on data processing 
apparatus; recorded computer software for creating artificial intelligence (AI) models for customized customer solutions; recorded computer software for managing files, data sets, and artificial intelligence (AI) models for processing artificial intelligence projects; recorded computer software for optimizing artificial intelligence (AI) models; recorded computer software for using artificial intelligence for processing, generation, understanding and analysis of natural language into machine-executable commands; recorded computer software using artificial intelligence (AI) for use in machine learning models to be trained; recorded computer software using artificial intelligence (AI) for use in machine learning models trained by dataset; recorded computer software, namely, an interpretive interface for facilitating interaction between humans and machines; recorded electronic data files featuring artificial intelligence (AI) models for customized customer solutions; recorded software featuring algorithms for training artificial intelligence (AI) models on existing datasets; recorded software featuring algorithms for training artificial intelligence (AI) models on new datasets; smart glasses (data processing apparatus); smartwatches; smart rings (data processing apparatus); wearable activity trackers; wearable computers in the nature of smartglasses; downloadable chatbot software using artificial intelligence (AI); computer software used for OOBE (out-of-box experience); computer software for recording user AI query history; computer software used for importing, viewing and sharing camera images from glasses; computer software used for live translation settings; computer software used for live streaming settings
Providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring software for facilitating interaction and communication between humans and AI (artificial intelligence) platforms, namely, bots, virtual 
agents and virtual personal assistants; providing online non-downloadable software, software as a service (SaaS) services featuring software, and platform as a service (PaaS) services featuring computer software platforms, all using artificial intelligence for decision-making; providing online non-downloadable software, software as a service (SaaS) services featuring software, and platform as a service (PaaS) services featuring computer software platforms, all using artificial intelligence for machine learning; providing online non-downloadable software, software as a service (SaaS) services featuring software, and platform as a service (PaaS) services featuring computer software platforms, all using artificial intelligence for speech or language recognition and translation; providing online non-downloadable software, software as a service (SaaS) services featuring software, and platform as a service (PaaS) services featuring computer software platforms, all using artificial intelligence for touch recognition; providing online non-downloadable software, software as a service (SaaS) services featuring software, and platform as a service (PaaS) services featuring computer software platforms, all using artificial intelligence for visual perception; providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring artificial intelligence personal assistant software for performing tasks or services on behalf of a user that is activated by user input, location awareness, and online information; providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring software for mobile devices and computers, all for using artificial intelligence for use as a digital companion; providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring software for use in developing AI 
(artificial intelligence) platforms, namely, bots, virtual agents and virtual assistants; providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring software for using artificial intelligence for conversational query; providing online non-downloadable software, software as a service (SaaS) services, and platform as a service (PAAS) services featuring software for using artificial intelligence for processing, generation, understanding and analysis of natural language into machine-executable commands; telecommunications software platforms being software-as-a-service (SAAS) based on artificial intelligence featuring software for question answering, text analytics, searching data, conversational artificial intelligence based on artificial intelligence in the fields of artificial intelligence with natural language processing technology; research and development services in the field of artificial intelligence; application service provider (ASP) featuring software using artificial intelligence (AI); artificial intelligence as a service (AIAAS) services featuring software using artificial intelligence
35.
Display screen or portion thereof with graphical user interface
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Computer chatbot software for simulating conversations with artificial intelligence; computer game programs with artificial intelligence; computer software applications for use with artificial intelligence; data processing apparatus with artificial intelligence; downloadable computer chatbot software for simulating conversations with artificial intelligence; downloadable computer programs using artificial intelligence for computational methods; downloadable computer programs using artificial intelligence for data analysis; downloadable computer programs using artificial intelligence for data models; downloadable computer programs using artificial intelligence for use in software development; downloadable computer programs using artificial intelligence in the field of search engines; downloadable computer programs using artificial intelligence to develop predictive models; downloadable computer programs using machine learning and artificial intelligence; downloadable computer software using artificial intelligence for computational methods; interactive software based on artificial intelligence; intercommunication apparatus with artificial intelligence; smart glasses [data processing apparatus]; smart rings [data processing apparatus]; smart watches [data processing apparatus]; software for the integration of artificial intelligence and machine learning in the field of Big Data; recorded software featuring algorithms for training artificial intelligence (AI) models on datasets; downloadable computer software, namely, an interpretive interface for facilitating interaction between humans and machines. 
(1) AI as a service [AIaaS] featuring software using artificial intelligence for machine-human interaction; AI as a service [AIaaS] featuring software using artificial intelligence for model creation, integration, management and optimization; AI as a service [AIaaS] featuring software using artificial intelligence for planning software task execution; AI as a service [AIaaS] featuring software using artificial intelligence for use in database management; AI as a service [AIaaS] featuring software using artificial intelligence for use in electronic setup, storage, backup and management of data; AI as a service [AIaaS] featuring software using artificial intelligence to automate and perform software tasks; AI as a service [AIaaS] featuring software using artificial intelligence for decision-making; AI as a service [AIaaS] featuring software using artificial intelligence for machine learning; AI as a service [AIaaS] featuring software using artificial intelligence for speech or language recognition and translation; AI as a service [AIaaS] featuring software using artificial intelligence for touch recognition; AI as a service [AIaaS] featuring software using artificial intelligence for visual perception; AI as a service [AIaaS] featuring software using artificial intelligence for performing tasks or services on behalf of a user that is activated by user input, location awareness, and online information.
37.
Display screen or portion thereof with graphical user interface
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
computer chatbot software for simulating conversations with artificial intelligence; computer game programs with artificial intelligence; computer software applications for use with artificial intelligence; data processing apparatus with artificial intelligence; downloadable computer chatbot software for simulating conversations with artificial intelligence; downloadable computer programs using artificial intelligence for computational methods; downloadable computer programs using artificial intelligence for data analysis; downloadable computer programs using artificial intelligence for data models; downloadable computer programs using artificial intelligence for use in software development; downloadable computer programs using artificial intelligence in the field of search engines; downloadable computer programs using artificial intelligence to develop predictive models; downloadable computer programs using machine learning and artificial intelligence; downloadable computer software using artificial intelligence for computational methods; interactive software based on artificial intelligence; intercommunication apparatus with artificial intelligence; smart glasses [data processing apparatus]; smart rings [data processing apparatus]; smart watches [data processing apparatus]; software for the integration of artificial intelligence and machine learning in the field of Big Data; recorded software featuring algorithms for training artificial intelligence (AI) models on datasets; Downloadable computer software, namely, an interpretive interface for facilitating interaction between humans and machines. 
AI as a service [AIaaS] featuring software using artificial intelligence for machine-human interaction; AI as a service [AIaaS] featuring software using artificial intelligence for model creation, integration, management and optimization; AI as a service [AIaaS] featuring software using artificial intelligence for planning software task execution; AI as a service [AIaaS] featuring software using artificial intelligence for use in database management; AI as a service [AIaaS] featuring software using artificial intelligence for use in electronic setup, storage, backup and management of data; AI as a service [AIaaS] featuring software using artificial intelligence to automate and perform software tasks; AI as a service [AIaaS] featuring software using artificial intelligence for decision-making; AI as a service [AIaaS] featuring software using artificial intelligence for machine learning; AI as a service [AIaaS] featuring software using artificial intelligence for speech or language recognition and translation; AI as a service [AIaaS] featuring software using artificial intelligence for touch recognition; AI as a service [AIaaS] featuring software using artificial intelligence for visual perception; AI as a service [AIaaS] featuring software using artificial intelligence for performing tasks or services on behalf of a user that is activated by user input, location awareness, and online information.
39.
SYNCHRONIZATION SIGNAL GENERATION CIRCUIT AND SYNCHRONIZATION METHOD BETWEEN MULTIPLE DEVICES
A synchronization signal generation circuit and a synchronization method among a plurality of devices are proposed. The synchronization signal generation circuit includes a clock signal generator and a controller. The clock signal generator generates a reference clock signal. The controller receives an input clock signal from a host end device and generates a plurality of candidate clock signals through a plurality of counting operations based on the reference clock signal. The controller selectively transmits one of the candidate clock signals to each peripheral device according to request information corresponding to each peripheral device. The candidate clock signals and the input clock signal have mutually aligned start time points in each frame period.
H03L 7/06 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop
H03K 5/22 - Circuits having more than one input and one output for comparing pulses or pulse trains with each other according to input signal characteristics, e.g. slope, integral
H03K 21/00 - Details of pulse counters or frequency dividers
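The counting operations in the abstract above can be pictured as integer division of a reference clock. The following is a minimal Python sketch under that reading; the function names, frame timing, and rates are illustrative, not taken from the patent.

```python
def candidate_clock_edges(ref_hz, divisors, frame_start=0.0, frame_len=1.0):
    """For each divisor, list the edge times of a candidate clock derived by
    counting reference-clock ticks; all candidates share the frame start."""
    edges = {}
    for d in divisors:
        period = d / ref_hz                      # one edge per d reference ticks
        n = int(frame_len * ref_hz) // d         # edges per frame (integer math)
        edges[d] = [frame_start + i * period for i in range(n)]
    return edges

def select_for_peripheral(edges, requested_hz, ref_hz):
    """Pick the candidate clock matching a peripheral's requested rate."""
    want_div = round(ref_hz / requested_hz)
    return edges.get(want_div)

clocks = candidate_clock_edges(ref_hz=1000.0, divisors=[2, 4, 10])
chosen = select_for_peripheral(clocks, requested_hz=100.0, ref_hz=1000.0)
```

Because every candidate list begins at the same `frame_start`, the sketch mirrors the claim that all candidate clocks have mutually aligned start time points in each frame period.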
A wearable device includes a host, a side head strap module, and an upper head strap module. The host has a sliding rail. The side head strap module is connected to the host. The upper head strap module includes a sliding base, a front buckle, and an upper head strap. The sliding base is detachably coupled to the sliding rail and slides along the sliding rail. The sliding rail has a first engaging part. The sliding base has a second engaging part. An engagement between the first engaging part and the second engaging part temporarily fixes the sliding base to the sliding rail. The front buckle is pivotally connected to the sliding base. The upper head strap is connected between the side head strap module and the front buckle. In addition, an upper head strap module applied to the wearable device is also proposed.
An electronic device and a noise cancelation method thereof are provided. The electronic device includes a driver, a driven device, and an inertial measurement device. The driver is configured to generate a driving signal, and generate a noise prediction signal according to the driving signal. The driven device receives the driving signal to execute an operation, wherein the driven device generates a vibration noise according to generated vibrations when executing the operation. The inertial measurement device is configured to sense a position status of the electronic device to generate sensing information. The inertial measurement device receives the noise prediction signal, and compensates the sensing information according to the noise prediction signal to generate compensated sensing information.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
42.
Method for controlling avatar in virtual environment by determining a difference in poses between a target object and a reference object, host, and computer readable storage medium
A solution is provided that allows the avatar corresponding to the target object tracked by the external tracking device to be properly displayed in the visual content corresponding to the field of view of the virtual camera, even if the coordinate systems used by the host and the external tracking device are different.
An encoding method for embedding a watermark into an audio signal is provided. A text watermark and an original audio are obtained. The text watermark is converted to an image watermark. The original audio is converted from a time domain to a frequency domain to generate a pre-processed audio. The image watermark is embedded into the pre-processed audio to generate an encoded audio. The encoded audio is converted from the frequency domain to the time domain to generate a watermarked audio.
G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/032 - Quantisation or dequantisation of spectral components
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
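The time-domain/frequency-domain round trip in the encoding method above can be sketched with a hand-rolled DFT. This is a toy illustration only: the bit-nudging embedding rule, bin choice, and strength are assumptions, not the patented scheme.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (time domain -> frequency domain)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    """Inverse DFT, returning real samples (frequency domain -> time domain)."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def embed_bits(audio, bits, strength=0.05):
    """Embed watermark bits by nudging low-frequency bins (illustrative rule)."""
    spec = dft(audio)                       # pre-processed audio (frequency domain)
    for i, b in enumerate(bits):
        k = i + 1                           # skip the DC bin
        delta = strength if b else -strength
        spec[k] += delta
        spec[-k] += delta                   # mirror bin keeps the output real
    return idft(spec)                       # watermarked audio (time domain)

audio = [0.0, 0.5, -0.5, 0.25, 0.1, -0.1, 0.3, -0.3]
watermarked = embed_bits(audio, [1, 0, 1])
```

In practice an FFT over windowed frames would replace the naive DFT, and the image watermark's pixels would supply the bit stream.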
An eye tracking apparatus and method are provided. The eye tracking apparatus is configured to execute the following operations. The apparatus obtains a first single-eye image of a first eye at a first time point and a first single-eye image of a second eye at a second time point based on a plurality of eye images of a user. The apparatus calculates a first sight direction based on the first single-eye image of the first eye at the first time point and calculates a second sight direction based on the first single-eye image of the second eye at the second time point. The apparatus generates a combined gaze based on the first sight direction and the second sight direction at the second time point.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups, e.g. for luxation treatment or for protecting wound edges
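One simple way to realize the "combined gaze" step in the eye tracking abstract above is to normalize and average the two single-eye sight directions. This is a minimal sketch of that one possible combination rule; the vectors are illustrative.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def combined_gaze(dir_first_eye, dir_second_eye):
    """Combine two single-eye sight directions into one gaze direction
    by normalizing each and averaging (an assumed combination rule)."""
    a, b = normalize(dir_first_eye), normalize(dir_second_eye)
    return normalize(tuple(x + y for x, y in zip(a, b)))

# Two slightly divergent forward-looking sight directions (illustrative):
gaze = combined_gaze((0.0, 0.0, 1.0), (0.1, 0.0, 1.0))
```

A production tracker would also weight the two directions by image quality or time offset between the first and second time points.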
The present disclosure provides a wireless connection method applicable to a multi-device system, wherein the multi-device system includes a first electronic device and a second electronic device, and the wireless connection method includes: based on that the first electronic device is connected to a network access device, by the first electronic device, enabling a wireless communication function through a first channel, wherein a first wireless connection is established between the first electronic device and the network access device at the first channel; and by the first electronic device, establishing a second wireless connection with the second electronic device at the first channel.
A system and a method for interacting with an extended reality environment are provided. The method includes: generating, by a touch sensor with a detection area, a touch signal, wherein the touch sensor is included in a ring-type controller; providing, by a head-mounted display, an extended reality scene; determining, by the head-mounted display, whether an object is in the detection area according to the touch signal; in response to determining the object is in the detection area, generating, by the head-mounted display, a first command according to a movement of the ring-type controller; and moving, by the head-mounted display, a cursor in the extended reality scene according to the first command.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
A head mounted display device includes a host, two sliding parts, two locking parts and two brackets. The host has two connecting bases located on opposite sides of the host. The two sliding parts are respectively slidably connected to the corresponding connecting base. The two locking parts are respectively used to lock the corresponding sliding part in a retracted position or a pulled-out position relative to the corresponding connecting base. The two brackets are respectively pivotally connected to one end of the corresponding sliding part away from the host.
An optical element includes a holographic pinhole array. The holographic pinhole array includes a plurality of holographic pinhole grating sets. The holographic pinhole grating sets are configured to diffract light incident on the optical element into a plurality of light beams respectively. Each of the light beams has a field of view.
A head-mounted display and a method for image processing based on diopter adjustment are provided. The method includes: receiving a command corresponding to a first diopter setting; in response to the command, rendering an image according to a mapping table to generate a rendered image; and displaying the rendered image.
A head-mounted display device includes a front-end assembly, a wearing assembly and a light-shielding face mask. The wearing assembly is assembled to the front-end assembly to position the front-end assembly on a user's face. The light-shielding face mask includes a frame and a cover. The frame is connected to the front-end assembly. The cover is flexible and connected to the frame to cover the user's eyes. The cover has a forehead portion corresponding to the user's forehead and a pair of eye tail portions respectively corresponding to a pair of eye tails of the user. The forehead portion pushed by the user's forehead drives the pair of eye tail portions to approach the pair of eye tails of the user respectively.
A data alignment method and a multi-device system are provided. The multi-device system includes a host device and a client device. The data alignment method includes: by the client device, transmitting image data to the host device; by the host device, generating host-based spatial information of the client device in a host map of an environment established by the host device according to the image data; by the host device, transmitting the host-based spatial information to the client device; by the client device, generating data alignment information according to a difference between the host-based spatial information and client-based spatial information of the client device in a client map of the environment established by the client device; and by the client device, adjusting the client-based spatial information according to the data alignment information, to generate an aligned spatial information of the client device in the host map.
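The data alignment steps above reduce, in the simplest translation-only case, to computing an offset between the host-map and client-map poses and applying it on the client side. A minimal sketch follows; the poses are illustrative and real systems would align full rigid transforms, not just positions.

```python
def data_alignment_info(host_based, client_based):
    """Difference between the host-map pose and the client-map pose of the
    same client device (translation-only toy model)."""
    return tuple(h - c for h, c in zip(host_based, client_based))

def align(client_based, alignment):
    """Shift a client-map position into the host map."""
    return tuple(c + d for c, d in zip(client_based, alignment))

host_pose = (2.0, 0.5, -1.0)    # generated by the host from the client's image data
client_pose = (1.5, 0.5, -0.8)  # the client's own estimate in its map
info = data_alignment_info(host_pose, client_pose)
aligned = align(client_pose, info)
```

After the client applies `info`, its spatial information agrees with the host map, which is the goal of the final adjustment step in the method.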
The embodiments of the disclosure provide a method for hand tracking. The method includes: obtaining, through a head-mounted device, a first image of a hand; determining, through a processor, a first pose of a first part of the hand based on the first image; obtaining, through a hand-held device, a second image of the hand; determining, through the processor, a second pose of a second part of the hand based on the second image, wherein the first part and the second part complementarily form an entirety of the hand; and determining, through the processor, a gesture of the hand based on the first pose and the second pose.
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
54.
INTENT CLASSIFICATION IN LANGUAGE PROCESSING METHOD AND LANGUAGE PROCESSING SYSTEM
A language processing method includes following steps. An initial dataset including initial phrases and initial intent labels about the initial phrases is obtained. A first intent classifier is trained with the initial dataset. Augmented phrases are produced corresponding to the initial phrases by sentence augmentation. First predicted intent labels about the augmented phrases and first confidence levels of the first predicted intent labels are generated by the first intent classifier. The augmented phrases are classified into augmentation subsets according to comparisons between the first predicted intent labels and the initial intent labels and according to the first confidence levels. A second intent classifier is trained according to a part of the augmentation subsets by curriculum learning. The second intent classifier is configured to distinguish an intent of an input phrase within a dialogue.
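The subset-building step of the language processing method above (comparing predicted intent labels against initial labels, then splitting by confidence) can be sketched as follows. The subset names and the 0.8 threshold are assumptions for illustration, not values from the patent.

```python
def split_augmentations(augmented, predictions, initial_labels, confidences, hi=0.8):
    """Classify augmented phrases into subsets by (label agreement, confidence),
    mirroring the curriculum-building step described above."""
    subsets = {"easy": [], "ambiguous": [], "noisy": []}
    for phrase, pred, gold, conf in zip(augmented, predictions, initial_labels, confidences):
        if pred == gold and conf >= hi:
            subsets["easy"].append(phrase)       # confident and consistent
        elif pred == gold:
            subsets["ambiguous"].append(phrase)  # consistent but low confidence
        else:
            subsets["noisy"].append(phrase)      # prediction disagrees with initial label
    return subsets

subsets = split_augmentations(
    ["book a flight", "reserve a plane", "play a song"],
    ["travel", "travel", "travel"],   # first classifier's predicted intent labels
    ["travel", "travel", "music"],    # initial intent labels carried over
    [0.95, 0.55, 0.40],               # first confidence levels
)
```

Curriculum learning would then train the second intent classifier on the "easy" subset first, introducing harder subsets later.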
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
computer hardware; computer network hardware; wearable computers; computer peripheral devices; wearable electronic devices; handheld digital electronic devices; virtual reality headsets; software; computer programs; computer software applications, downloadable; computer software, recorded; computer software platforms, recorded or downloadable; software development kits (SDKs); downloadable application software for virtual environments; application programs; application programming interface (API) software; computer application products; home automation hubs; smart home hubs; smartphones; electronic communication equipment and devices; apparatus and instruments for recording, transmitting, reproducing or processing sound, images or data; recorded and downloadable media, blank digital or analogue recording and storage media; media players; headsets; earphones; battery; battery chargers; electronic sensors; electronic identification system; monitoring system devices; remote control apparatus. Software as a service (SaaS); providing temporary use of on-line non-downloadable software and applications; application service provider (ASP) services; platform as a service (PaaS); blockchain as a service (BaaS); hosting virtual environments; hosting software platforms for virtual reality-based work collaboration; cloud computing; design and development of computer hardware and software; development of computer platforms; consultancy in the design and development of computer hardware; computer system design; information technology services; artificial intelligence consultancy; computer services; technical support services; enquiry and provision of Information; telecommunications technology consultancy; providing customized computer searching services; provision of Internet search engine services; creating and maintaining websites and webpages for others; computer rental; rental of computer software; monitoring of computer systems by remote access.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
computer hardware; computer network hardware; wearable computers; computer peripheral devices; wearable electronic devices; handheld digital electronic devices; virtual reality headsets; software; computer programs; computer software applications, downloadable; computer software, recorded; computer software platforms, recorded or downloadable; software development kits (SDKs); downloadable application software for virtual environments; application programs; application programming interface (API) software; computer application products; home automation hubs; smart home hubs; smartphones; electronic communication equipment and devices; apparatus and instruments for recording, transmitting, reproducing or processing sound, images or data; recorded and downloadable media, blank digital or analogue recording and storage media; media players; headsets; earphones; battery; battery chargers; electronic sensors; electronic identification system; monitoring system devices; remote control apparatus. Software as a service (SaaS); providing temporary use of on-line non-downloadable software and applications; application service provider (ASP) services; platform as a service (PaaS); blockchain as a service (BaaS); hosting virtual environments; hosting software platforms for virtual reality-based work collaboration; cloud computing; design and development of computer hardware and software; development of computer platforms; consultancy in the design and development of computer hardware; computer system design; information technology services; artificial intelligence consultancy; computer services; technical support services; enquiry and provision of Information; telecommunications technology consultancy; providing customized computer searching services; provision of Internet search engine services; creating and maintaining websites and webpages for others; computer rental; rental of computer software; monitoring of computer systems by remote access.
A circuit board and a layout method thereof are provided. The circuit board includes a first metal layer, a second metal layer, and a third metal layer. The first metal layer forms multiple first reference conductive wires. The second metal layer forms at least one signal transmission wire. The third metal layer forms multiple third reference conductive wires. The first metal layer, the second metal layer, and the third metal layer are overlapped with each other, and each of the first reference conductive wires is not completely overlapped with each of the third reference conductive wires.
The embodiments of the disclosure provide an active audio adjustment method. The active audio adjustment method includes: receiving, by a host, an ambient sound from a sound pickup device; analyzing, by the host, the ambient sound to obtain an ambient parameter of the ambient sound and determine an adjustment strategy; adjusting, by the host, an original parameter of an output audio to determine an optimized parameter based on the ambient parameter of the ambient sound and the adjustment strategy; generating, by the host, an optimized output audio based on the optimized parameter; and outputting, by the host, the optimized output audio to an audio output device.
G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
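A minimal version of the active audio adjustment pipeline above measures an ambient parameter (here, RMS level in dB) and applies an adjustment strategy to an original gain parameter. The specific strategy, thresholds, and step size below are assumptions for illustration.

```python
import math

def ambient_level_db(samples):
    """Ambient parameter: RMS level of the picked-up sound, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def optimized_gain(original_gain_db, ambient_db, quiet_db=-40.0, step=0.5):
    """One possible adjustment strategy: raise output gain in loud
    surroundings, keep the original parameter in quiet ones."""
    if ambient_db <= quiet_db:
        return original_gain_db
    return original_gain_db + step * (ambient_db - quiet_db)

quiet = [0.001] * 256   # near-silent ambient sound (illustrative)
loud = [0.5] * 256      # loud ambient sound (illustrative)
g_quiet = optimized_gain(0.0, ambient_level_db(quiet))
g_loud = optimized_gain(0.0, ambient_level_db(loud))
```

The host would then regenerate the output audio with `g_loud` or `g_quiet` as the optimized parameter and pass it to the audio output device.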
59.
METHOD FOR SAVING POWER, WEARABLE DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
The embodiments of the disclosure provide a method for saving power, a wearable device, and a computer readable storage medium. The method includes: obtaining a motion detection result provided by a motion detector; obtaining a touch detection result provided by a touch detector; determining whether the wearable device is in a static state at least based on the motion detection result and the touch detection result; and switching the wearable device to a power saving mode in response to determining that the wearable device is in the static state.
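The static-state decision above combines the two detector results; the sketch below shows the conjunction in code. The threshold value and mode names are illustrative assumptions.

```python
def is_static(motion_magnitude, touched, motion_eps=0.02):
    """Static only when motion is below a small threshold AND no touch is
    detected (threshold is an illustrative value)."""
    return motion_magnitude < motion_eps and not touched

def next_power_mode(current_mode, motion_magnitude, touched):
    """Switch to power saving in response to determining the static state."""
    return "power_saving" if is_static(motion_magnitude, touched) else current_mode

mode = next_power_mode("active", motion_magnitude=0.001, touched=False)
```

Requiring both conditions avoids sleeping while the device is worn but held still, since a touch alone keeps it awake.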
A contact lens and an eye tracking device are provided. The contact lens includes a first type polarization structure and a second type polarization structure. The first type polarization structure is disposed in a first area of the contact lens, and the first area surrounds a center area of the contact lens. The second type polarization structure is disposed in a second area of the contact lens, and the second area surrounds the first area. The first type polarization structure and the second type polarization structure have different polarization directions.
The embodiments of the disclosure provide a method for generating a pass-through view with better scale and a host. The method includes: in response to determining that the tracker status of a tracker satisfies a predetermined condition, generating a target depth map based on a predetermined depth map or a first depth map associated with a field of view (FOV) of the host and tracker information associated with the tracker; and rendering the pass-through view based on an image associated with the FOV of the host, a camera parameter, and the target depth map.
A head-mounted display device includes a display, two brackets, two buckles, and a headband module. The display has two buckling parts respectively located on opposite sides of the display. The brackets are respectively pivotally connected to opposite sides of the display. The buckles are respectively pivotally connected to the brackets. When the buckles are buckled onto the buckling parts, an unfolding angle of each bracket is limited to be greater than a locked angle. When the unfolding angle of each bracket is greater than an unlocked angle, the buckles are separated from the buckling parts. The unlocked angle is greater than the locked angle. When the buckles are separated from the buckling parts, the unfolding angle of each bracket is smaller than the locked angle. Opposite sides of the headband module are respectively detachably assembled to an end of each bracket away from the display.
A surveillance device is adapted to provide home care for a care object. The surveillance device includes a camera and a processor. The camera is configured to obtain an object image of the care object. The processor is configured to obtain a simultaneous localization and mapping (SLAM) map of an environment around the care object, obtain a current location of the camera, an estimated location of the camera, and an object active area of the care object based on the SLAM map, obtain a current available field of view (FOV) of the camera according to the current location and the object active area based on the SLAM map, obtain an estimated available FOV of the camera according to the estimated location and the object active area based on the SLAM map, and determine a recommended location of the camera based on the current available FOV and the estimated available FOV.
The embodiments of the disclosure provide a method for generating a pass-through view in response to a selected mode and a host. The method includes: determining, by the host, the selected mode among a first mode and a second mode, wherein the first mode aims to achieve a control accurateness, and the second mode aims to achieve a visual smoothness; determining, by the host, a target depth map according to the selected mode; and rendering, by the host, the pass-through view based on an image associated with a field of view (FOV) of the host, a camera parameter, and the target depth map.
The embodiments of the disclosure provide a method for generating a pass-through view with better scale and a host. The method includes: obtaining, by the host, a first depth map associated with a field of view (FOV) of the host; determining, by the host, tracker information associated with a tracker; generating, by the host, a target depth map by updating the first depth map based on the tracker information; and rendering, by the host, the pass-through view based on an image associated with the FOV of the host, a camera parameter, and the target depth map.
A detection device includes a plurality of capacitive sensors, a transmission antenna, a reception antenna, a radar module, a carrier module, and a processor. The capacitive sensors detect first information of a human body portion in a first direction, and detect second information of the human body portion in a second direction. The radar module uses the transmission antenna to transmit a radar signal to the human body portion. The radar module uses the reception antenna to receive a reflection signal from the human body portion. The radar module detects third information of the human body portion in a third direction according to the reflection signal. The capacitive sensors, the transmission antenna, and the reception antenna are disposed on the carrier module. The processor estimates status information of the human body portion according to the first information, the second information, and the third information.
A contact lens, suitable for a head-mounted display device, includes a first optical structure layer, a second optical structure layer and a third optical structure layer. The first optical structure layer receives an optical signal, wherein the first optical structure layer is divided into a plurality of partitions, and the partitions respectively have a plurality of structural bodies with different structures, and the structural bodies receive the optical signal and generate a plurality of imported optical signals. The second optical structure layer and the first optical structure layer are overlapped, and configured to transmit the imported optical signals to the third optical structure layer. The third optical structure layer and the second optical structure layer are overlapped, and configured to transmit the imported optical signals to a target area.
A method, an electronic device, and a non-transitory computer readable storage medium of visual assistance for a user in an extended reality environment are provided. The method includes: outputting an extended reality scene including a first virtual object and an interactive object; detecting the user; calculating a first distance between the interactive object and the user; and disabling the first virtual object in the extended reality scene in response to the first distance being greater than a first threshold.
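The distance-gated disabling step in the visual-assistance method above amounts to comparing a user-to-object distance against a threshold and toggling object state. A minimal sketch follows; the scene representation, positions, and 3.0 threshold are illustrative assumptions.

```python
def update_scene(objects, user_pos, interactive_pos, threshold=3.0):
    """Disable virtual objects when the user is farther than `threshold`
    from the interactive object (toy scene model)."""
    dx, dy, dz = (u - i for u, i in zip(user_pos, interactive_pos))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5   # first distance
    for obj in objects:
        obj["enabled"] = dist <= threshold        # disable beyond the threshold
    return objects

# User 5 units from the interactive object: the panel gets disabled.
scene = update_scene([{"name": "panel", "enabled": True}], (0, 0, 5), (0, 0, 0))
```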
An electronic device is disclosed. The electronic device includes a memory, several cameras, and a processor. The memory is configured to store a SLAM module. The several cameras are configured to capture several images of a real space. The processor is configured to: execute the SLAM module to establish an environment coordinate system corresponding to the real space and to track a device pose of the electronic device within the environment coordinate system according to the several images; and perform a calibration process. Performing the calibration process includes: calculating several poses of the several cameras within the environment coordinate system according to several light spots within each of the several images, in which the light spots are generated by a structured light generation device; and calibrating several extrinsic parameters between the several cameras according to the several poses.
A vibrating device and an operation method thereof are provided. The vibrating device includes multiple electromyography sensors, a force sensor, multiple vibrators and a controller. The electromyography sensors are respectively disposed at different positions of a user to obtain multiple pieces of electromyography information respectively. The vibrators are disposed adjacent to or overlapping with the electromyography sensors. During a setting period, the controller makes the vibrators vibrate according to a preset vibration waveform. During the setting period, the controller records multiple pieces of force information generated by the force sensor corresponding to multiple different applied forces of the user and the pieces of electromyography information generated by the electromyography sensors. The controller obtains multiple characteristic frequency parameters according to the corresponding pieces of electromyography information. The controller establishes a relational model between the characteristic frequency parameters and the pieces of force information.
A non-fungible token generating system, method, and non-transitory computer readable storage medium thereof are provided. The system determines whether a control signal corresponds to a non-fungible token generating operation or a multimedia data generating operation. In response to the control signal corresponding to the non-fungible token generating operation, the system generates a first multimedia data through an image capturing device, and the system uploads the first multimedia data and an operator identity information to a blockchain to generate a non-fungible token corresponding to the first multimedia data based on a smart contract deployed on the blockchain. In response to the control signal corresponding to the multimedia data generating operation, the system generates a second multimedia data through the image capturing device.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
A face tracking system is provided. The face tracking system includes a camera and a processor. The camera is configured to obtain a face image of a face of a user. The processor is configured to identify a facial feature of the face of the user based on the face image, determine a size range of a size of the facial feature based on the face image, and determine a transformation relationship between the facial feature of the face of the user and a virtual facial feature of an avatar corresponding to the facial feature based on the size range of the size of the facial feature and a virtual size range of a virtual size of the virtual facial feature.
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
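The transformation relationship in the face tracking entry above maps a facial-feature size range onto an avatar's virtual size range. A simple linear range-to-range mapping is one plausible form; the linearity, clamping, and parameter names below are illustrative assumptions, not taken from the disclosure:

```python
def map_feature_to_avatar(value, user_range, avatar_range):
    """Linearly map a measured facial-feature size (e.g. mouth opening)
    from the user's observed size range onto the avatar's virtual range."""
    u_min, u_max = user_range
    a_min, a_max = avatar_range
    t = (value - u_min) / (u_max - u_min)  # normalise to [0, 1]
    t = min(max(t, 0.0), 1.0)              # clamp out-of-range measurements
    return a_min + t * (a_max - a_min)

# A feature measured mid-range maps to the middle of the avatar's range.
print(map_feature_to_avatar(15.0, (10.0, 20.0), (0.0, 1.0)))  # 0.5
```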
A control system is provided. The control system includes a ring device and a controller. The ring device includes an inertial measurement unit (IMU) sensor. The ring device is adapted to be worn on a finger of a user, and the IMU sensor is configured to obtain sensor data. The controller is configured to receive the sensor data from the ring device and generate detection data based on the sensor data. The detection data is configured to indicate whether the ring device is rotated and whether the ring device is tapped. The controller is configured to perform a control operation in a virtual world displayed by the controller based on the detection data.
G06F 1/16 - Constructional details or arrangements
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
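The detection data in the ring-device entry above flags two events: rotation and tap. A minimal threshold-based classifier over IMU readings is one way to sketch this; the sensor channels used and the threshold values are illustrative assumptions:

```python
def detect_ring_events(gyro_z, accel_mag, rot_thresh=1.0, tap_thresh=25.0):
    """Classify raw IMU readings from the ring into detection data.
    gyro_z: angular velocity around the finger axis (rad/s);
    accel_mag: magnitude of linear acceleration (m/s^2)."""
    return {
        "rotated": abs(gyro_z) > rot_thresh,  # ring twisted on the finger
        "tapped": accel_mag > tap_thresh,     # sharp acceleration spike
    }

# A slow twist without impact registers as a rotation only.
print(detect_ring_events(2.3, 5.0))  # {'rotated': True, 'tapped': False}
```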
The embodiments of the disclosure provide an image quality adjusting method and a host. The method includes: providing, by a host, a visual content, wherein the visual content comprises a pass-through image; obtaining, by the host, a frame rate of the visual content and a loading of a graphics processing unit of the host; and dynamically adjusting, by the host, an image quality of the pass-through image based on the frame rate and the loading of the graphics processing unit of the host.
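The dynamic adjustment in the abstract above can be sketched as a feedback rule that lowers pass-through quality under load and restores it when headroom returns. The target frame rate, load ceiling, step size, and hysteresis margin below are illustrative assumptions, not values from the disclosure:

```python
def adjust_quality(quality, frame_rate, gpu_load,
                   target_fps=72, max_load=0.85, step=0.1):
    """Scale the pass-through image quality (0.1-1.0) based on the
    current frame rate and GPU loading: lower it when the frame rate
    drops or the GPU saturates, raise it back when there is headroom."""
    if frame_rate < target_fps or gpu_load > max_load:
        quality -= step
    elif gpu_load < max_load - 0.15:  # hysteresis before raising again
        quality += step
    return min(max(quality, 0.1), 1.0)

# Frame rate below target with a saturated GPU: quality is stepped down.
print(adjust_quality(0.8, 60, 0.9))
```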
A tracking system is provided. The tracking system includes a first tracking device, a second tracking device, and a wearable tracking device. The first tracking device is disposed on a vehicle and is configured to obtain map information and first measurement information. The second tracking device is disposed on the vehicle and is configured to obtain second measurement information. The wearable tracking device is disposed on a user in the vehicle and is configured to obtain third measurement information. Further, the wearable tracking device is configured to obtain local position information of the user based on the map information, the first measurement information, the second measurement information, and the third measurement information. Furthermore, the local position information indicates a user position of the user within the vehicle.
H04W 64/00 - Locating users or terminals for network management purposes, e.g. mobility management
G01S 19/47 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
A communication device includes a signal source, a connection element, an antenna element, a piezoelectric element, a controller, and a reflector. The signal source generates an RF (Radio Frequency) signal. The antenna element is coupled through the connection element to the signal source. The antenna element generates a wireless signal according to the RF signal. The piezoelectric element adjusts the antenna element according to a control signal. The controller generates the control signal. The reflector is configured to reflect the wireless signal.
H01Q 3/01 - Arrangements for changing or varying the orientation or the shape of the directional pattern of the waves radiated from an antenna or antenna system varying the shape of the antenna or antenna system
H01Q 13/24 - Non-resonant leaky-waveguide or transmission-line antennas; Equivalent structures causing radiation along the transmission path of a guided wave constituted by a dielectric or ferromagnetic rod or pipe
H01Q 19/10 - Combinations of primary active antenna elements and units with secondary devices, e.g. with quasi-optical devices, for giving the antenna a desired directional characteristic using reflecting surfaces
83.
REAL-TIME RENDERING GENERATING APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM THEREOF
A real-time rendering generating apparatus, method, and non-transitory computer readable storage medium thereof are provided. The apparatus receives a plurality of character motion data of a plurality of virtual characters. The apparatus determines a rendering level corresponding to each of the virtual characters based on a classification rule related to a first virtual character and the character motion data, each of the rendering levels corresponds to one of a plurality of character levels of detail, and each of the plurality of character levels of detail corresponds to a range of a customized body part and a skeletal model. The apparatus generates a real-time rendering of each of the virtual characters based on the rendering level corresponding to each of the virtual characters.
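The rendering-level assignment in the entry above can be sketched as a classification rule that buckets each virtual character into a level of detail. Using distance to the first virtual character as the rule, and these particular thresholds, are assumptions for illustration only:

```python
def rendering_level(distance_to_first, thresholds=(2.0, 5.0, 10.0)):
    """Pick a rendering level (0 = most detailed) for a virtual character
    from its distance to the first virtual character; each level would
    map to one character level of detail."""
    for level, limit in enumerate(thresholds):
        if distance_to_first <= limit:
            return level
    return len(thresholds)  # coarsest level of detail

# Characters farther from the first character get coarser levels.
print([rendering_level(d) for d in (1.0, 4.0, 8.0, 20.0)])  # [0, 1, 2, 3]
```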
An image displaying method is disclosed. The image displaying method includes the following operations: capturing a first image of a real space by a camera based on a first viewing direction when the camera is located at a first camera position, wherein the first image includes a first text image; detecting a text region according to the first image by a processor, wherein the text region includes the first text image; recognizing the first text image to obtain a first text content by the processor; obtaining several first feature points of the text region according to the first image by the processor; creating a first virtual surface according to the several first feature points by the processor; and displaying a first virtual image with the first text content appending to the first virtual surface by a display circuit.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 3/18 - Image warping, e.g. rearranging pixels individually
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
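In the image displaying entry above, a virtual surface is created from the feature points of the text region. One minimal way to derive a planar surface is from three of those points via a cross product; a real system would likely fit all feature points robustly, so this is only a sketch with assumed names:

```python
def plane_from_points(p0, p1, p2):
    """Derive a planar virtual surface (anchor point + normal vector)
    from three detected feature points of the text region."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    normal = cross(sub(p1, p0), sub(p2, p0))  # perpendicular to the surface
    return p0, normal

# Three points in the z = 0 plane yield a normal along +z.
origin, n = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)  # (0, 0, 1)
```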
A ray casting system is provided. The ray casting system includes a display, a camera, an interactive sensor, and a processor. The display is configured to display a virtual environment. The camera is configured to obtain a hand image including a hand of a user. The interactive sensor is configured to obtain a user instruction from the user, wherein the interactive sensor is adapted to be mounted on the hand of the user. The processor is configured to generate a control ray in the virtual environment based on the hand image and apply a displacement to the control ray based on the user instruction.
A luminary measurement system is provided. The luminary measurement system includes a processor and a camera. The camera is configured to obtain an object image of an object. The object includes a first luminary and a second luminary. The processor is configured to determine a first position of the first luminary and a second position of the second luminary based on the object image. The processor is configured to determine whether the first position and the second position are correct or not based on standard alignment information.
A head-mounted display, unlocking method, and non-transitory computer readable storage medium thereof are provided. The head-mounted display generates a wearing position distribution based on a plurality of real-time images including a user wearing at least one wearable device on at least one finger position, wherein the wearing position distribution indicates a wearing position of the at least one wearable device worn by the user. The head-mounted display generates an unlocking signal to unlock the head-mounted display based on the wearing position distribution.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06T 7/70 - Determining position or orientation of objects or cameras
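The unlocking decision in the head-mounted display entry above compares the detected wearing position distribution against some enrolled pattern. A set-equality match is one minimal way to express this; the matching rule and the finger-position labels are illustrative assumptions:

```python
def unlock_signal(wearing_distribution, enrolled_pattern):
    """Emit an unlock decision by comparing the detected wearing-position
    distribution (which finger positions carry a wearable device) against
    the pattern enrolled by the user."""
    return set(wearing_distribution) == set(enrolled_pattern)

# e.g. rings detected on the left index and right ring fingers.
print(unlock_signal({"L-index", "R-ring"}, {"L-index", "R-ring"}))  # True
```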
88.
METHOD FOR IMPROVING VISUAL QUALITY OF REALITY SERVICE CONTENT, HOST, AND COMPUTER READABLE STORAGE MEDIUM
The embodiments of the disclosure provide a method for improving a visual quality of a reality service content, a host, and a computer readable storage medium. The method includes: generating a first virtual scene, an audio content, and a depth map based on a text, wherein the audio content includes an audio component and the depth map includes depth information; determining a sound attribute corresponding to an audio source based on the audio component and the depth information; adjusting the first virtual scene as a second virtual scene at least based on the sound attribute corresponding to the audio source; determining a 3D audio content at least based on the sound attribute and the audio content; and combining the 3D audio content with the second virtual scene into the reality service content.
A head-mounted display device and a zoom lens module are provided. The zoom lens module includes a first fixing frame, an arc zoom ring, a second fixing frame, and a first non-circular lens. The first fixing frame has an arc segment and a non-arc segment connected to each other. The arc segment has a slot. The arc zoom ring is disposed on an inner side of the arc segment and capable of sliding in a circumferential direction of the arc segment. An outer side of the arc zoom ring has a slide bar. An inner side of the arc zoom ring has a guide block. The slide bar passes through the slot and is adapted to slide along the slot. The second fixing frame is disposed on an inner side of the first fixing frame and the arc zoom ring and capable of sliding in an axial direction of the arc segment. An outer side of the second fixing frame has a guide rail. The guide block is embedded in the guide rail and adapted to slide along the guide rail. The first non-circular lens is disposed on an inner side of the second fixing frame. The guide block drives the second fixing frame to slide in the axial direction of the arc segment in response to the arc zoom ring sliding in the circumferential direction of the arc segment.
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
A head-mounted display device includes a body, a fan, a first valve, and a first shape memory alloy element. The body is configured to be worn on a face of a user. The body has an air channel, an air outlet, and a first air inlet. The first air inlet communicates with the air outlet through the air channel. The fan is disposed in the air channel and is configured to drive air inside the air channel to flow. The first valve is disposed in the air channel. The first shape memory alloy element is connected between the first valve and the body, and is configured to move the first valve to adjust airflow inside the air channel.
A head mounted display device and a strap module thereof are provided. The strap module includes a casing, two straps, two elastic elements, a coupling element, and a braking element. The ends of the two straps are respectively connected to opposite sides of a host. The two straps are at least partially overlapped and accommodated in the casing. Two ends of the first elastic element are respectively connected to the casing and the first strap, and two ends of the second elastic element are respectively connected to the casing and the second strap. The elastic recovery force of the two elastic elements drives the two straps to move relative to the casing to increase the overlapping degree of the two straps. The coupling element is rotatably disposed at the casing and simultaneously couples the two straps. The braking element is movably disposed at the casing. When the braking element is in a brake position, it brakes the coupling element to fix the overlapping degree of the two straps; when the braking element is in a movable position, it is separated from the coupling element.
A control device is provided. The control device is adapted to control an object in a virtual world. The control device includes a display and a controller. The display is configured to display the virtual world. The controller is coupled to the display. The controller is configured to perform the following functions. In the virtual world, a control surface is formed around a user. A first ray is emitted from the object. Based on the first ray, a first control point is formed on the control surface. According to the first control point, a first control is performed on the object.
The present disclosure provides an immersive content displaying method and a display device. The display device includes a front camera, a processor, and a display panel. The immersive content displaying method includes: by the processor, obtaining a first pose of the display device and a second pose of the display device, wherein the first pose corresponds to a first timestamp at which the front camera captures a frame image, and the second pose corresponds to a second timestamp at which the processor receives the frame image; by the processor, generating a predicted movement of the display device after the second timestamp according to the first pose and the second pose; by the processor, processing a partial area of the frame image according to the predicted movement to generate a base image; and by the display panel, displaying the base image to provide the immersive content.
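The predicted movement in the entry above is generated from two timestamped poses. A constant-velocity extrapolation over the position component is one minimal way to sketch this; the linear model and the function interface are assumptions, not the disclosed method:

```python
def predict_position(pose1, pose2, t1, t2, t_future):
    """Linearly extrapolate the display device's position after the second
    timestamp from the pose at the capture timestamp (pose1, t1) and the
    pose at the receive timestamp (pose2, t2)."""
    velocity = [(b - a) / (t2 - t1) for a, b in zip(pose1, pose2)]
    return [b + v * (t_future - t2) for b, v in zip(pose2, velocity)]

# Device moved along x between the capture and receive timestamps; the
# prediction continues that motion for one more interval.
print(predict_position([0.0, 0.0, 0.0], [0.01, 0.0, 0.0], 0.0, 0.01, 0.02))
```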
The present disclosure provides a control device, a control method and a virtual image display system. The control device controls a display. The control device includes an optical sensing component, a touch sensing component and a controller. The optical sensing component is configured to acquire optical sensing data of the control device when the control device moves in an environmental space. The touch sensing component is configured to acquire touch data of the control device when the control device moves on a plane. The controller is configured to generate a handwriting image according to the optical sensing data and the touch data, such that the display displays the handwriting image.
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
An antenna base and an antenna set are provided. The antenna set includes a host and the antenna base. The host is fixed with a plurality of first connectors. The host is detachably assembled to the antenna base. The antenna base includes a body, a plurality of antennas, and a plurality of second connectors. The antennas are installed at the body. The second connectors are electrically connected to the antennas. The assembling of the host and the body and the electrical and structural connection of the first connectors and the second connectors are completed at the same time.
A glasses type display device includes a front-end assembly, first and second temples, a pivot assembly, and a wire. The first temple is pivotally connected to the front-end assembly. The pivot assembly is between the front-end assembly and the second temple to pivotally connect the two. The pivot assembly includes first and second connecting portions respectively having first upper and lower pivot portions and second upper and lower pivot portions and connected to the front-end assembly and the second temple. The first and second upper pivot portions cooperate and the first and second lower pivot portions cooperate on a pivot axis, so that the second connecting portion is pivoted relative to the first connecting portion on the pivot axis. The wire is extended from the front-end assembly via a space between the first or second upper pivot portion and the first or second lower pivot portion to the second temple.
A hand pose construction method is disclosed. The hand pose construction method includes the following operations: capturing an image of a hand of a user from a viewing angle of a camera, wherein a hand image of the hand of the user is occluded within the image; obtaining a wrist position and a wrist direction of a wrist of the user according to movement data of a tracking device worn on the wrist of the user; obtaining several visible feature points of the hand of the user from the image; and constructing a hand pose of the hand of the user according to the several visible feature points, the wrist position, the wrist direction, and a hand pose model.
A head-mounted device includes a host, a first cradle, a second cradle, a speaker, and a shape memory alloy element. The first cradle and the second cradle are connected to two opposite sides of the host. The speaker is movably disposed on the first cradle. The shape memory alloy element is connected between the speaker and the first cradle and is configured to move the speaker after being powered on and heated and shrinking.