A device connection method includes acquiring connection information between an application and a device, where the connection information includes fastest connection mark information, and the fastest connection mark information marks the connection mode that establishes a connection fastest among concurrent connection modes; acquiring the connection mode with the largest proportion in the fastest connection mark information within a first predetermined period N and determining it as a first connection mode; and acquiring at least one top-ranked connection mode among connection modes sorted in descending order of proportion in the fastest connection mark information within a second predetermined period M and combining the at least one connection mode into a first set, where the number of top-ranked connection modes is a preset number, and M is greater than N.
The present application provides a video playback method and apparatus, and an electronic device. The method comprises: determining the size of a video playback window and the video resolutions respectively captured by at least two camera devices for playing back videos by means of the video playback window; on the basis of the video resolutions respectively captured by the at least two camera devices and the size of the video playback window, determining, from the video playback window, respective pane areas of the at least two camera devices; and playing back, in each pane area, a video filmed by a corresponding camera device.
The present disclosure relates to the technical field of image processing. Provided are an enhancement method and apparatus for an infrared image, and an electronic device and a storage medium. The method comprises: acquiring a source grayscale histogram of an infrared image to be processed, and performing iterative segmentation on the source grayscale histogram on the basis of distribution characteristic information of the source grayscale histogram, so as to obtain a target sub-histogram set; performing probability density correction on each target grayscale sub-histogram in the target sub-histogram set, so as to obtain a weighted grayscale histogram; fusing the source grayscale histogram and the weighted grayscale histogram, so as to obtain a target grayscale histogram; and determining a grayscale mapping curve of the target grayscale histogram, and performing grayscale mapping on said infrared image on the basis of the grayscale mapping curve, so as to obtain a contrast-enhanced image for said infrared image.
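As an illustration of the histogram-fusion idea in the preceding abstract, here is a minimal Python sketch. It is an assumption-laden simplification: the patent's iterative segmentation and per-sub-histogram probability-density correction are replaced by a single clip-based correction, the fusion weight is arbitrary, and the function name `enhance_infrared` is hypothetical.

```python
# Minimal sketch of histogram-fusion contrast enhancement for an infrared frame.
# Assumptions (not from the abstract): 8-bit grayscale input, a clip-based
# probability-density correction, and a fixed-weight fusion of the two histograms.
import numpy as np

def enhance_infrared(image: np.ndarray, clip_ratio: float = 0.01, fuse_weight: float = 0.5) -> np.ndarray:
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    source_pdf = hist / hist.sum()

    # "Probability-density correction": clip overly dominant bins and redistribute
    # the excess uniformly (a stand-in for the patent's per-sub-histogram weighting).
    clipped = np.minimum(source_pdf, clip_ratio)
    clipped += (source_pdf.sum() - clipped.sum()) / 256.0

    # Fuse source and corrected histograms, then derive the grayscale mapping curve
    # from the cumulative distribution of the fused histogram.
    fused_pdf = fuse_weight * source_pdf + (1.0 - fuse_weight) * clipped
    cdf = np.cumsum(fused_pdf / fused_pdf.sum())
    mapping_curve = np.round(cdf * 255.0).astype(np.uint8)

    return mapping_curve[image]  # grayscale mapping applied per pixel

# Example: enhanced = enhance_infrared(frame) for a uint8 numpy array `frame`.
```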
The present application relates to the technical field of video. Provided are a code rate adjustment method and apparatus, an electronic device, and a storage medium. The method comprises: in the case of a dynamic picture, acquiring a first data-sending rate sent by a receiving-end device; determining a dynamic average data-generation rate on the basis of the data volume of a current dynamic-picture I frame and the data volume of a first preset number of dynamic-picture P frames that follow the current dynamic-picture I frame; and when the dynamic average data-generation rate is less than or equal to a second preset multiple of the first data-sending rate, determining a first target code rate on the basis of a first cache duration, wherein the first cache duration is equal to the difference between a first current moment and the moment at which first current cache data is written into a cache region, the first current cache data is the live data remaining after first live data generated on the basis of the current code rate is sent to the receiving-end device, and the first target code rate is used for generating new live data.
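A hedged Python sketch of the rate check in the preceding abstract follows. The abstract does not give the formula that maps cache duration to the target code rate, so the scaling rule, the parameter names, and the functions `dynamic_avg_generation_rate` and `pick_target_bitrate` are illustrative assumptions.

```python
# Hedged sketch of the rate check described above. The exact target-rate formula is
# not given in the abstract, so the cache-duration scaling here is an assumption.
import time

def dynamic_avg_generation_rate(i_frame_bytes: int, p_frame_bytes: list[int], duration_s: float) -> float:
    """Average data-generation rate over one I frame plus the following P frames."""
    return (i_frame_bytes + sum(p_frame_bytes)) / duration_s

def pick_target_bitrate(avg_gen_rate: float, send_rate: float, current_bitrate: float,
                        cache_write_time: float, multiple: float = 1.5,
                        max_cache_s: float = 2.0) -> float:
    cache_duration = time.time() - cache_write_time  # age of the oldest unsent cache data
    if avg_gen_rate <= multiple * send_rate:
        # Hypothetical rule: shrink the bitrate in proportion to how full the cache is.
        backlog_factor = max(0.0, 1.0 - cache_duration / max_cache_s)
        return current_bitrate * max(backlog_factor, 0.1)
    return current_bitrate
```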
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 19/146 - Data rate or code amount at the encoder output
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
5.
MISPLUG DETECTION METHOD AND APPARATUS FOR HDMI DEVICE INTERFACE, AND DEVICE AND STORAGE MEDIUM
The present disclosure relates to the technical field of device detection. Provided are a misplug detection method and apparatus for an HDMI device interface, and a device and a storage medium. The method comprises: acquiring a cable plugging signal corresponding to a target interface in an HDMI device; on the basis of the cable plugging signal, determining a cable plugging result corresponding to the target interface; and on the basis of the cable plugging result corresponding to the target interface and a target pin signal, determining a misplug detection result corresponding to the target interface.
G01R 31/69 - Testing of releasable connections, e.g. of terminals mounted on a printed circuit board, of terminals at the end of a cable or a wire harness, of plugs, of sockets, e.g. wall sockets or power sockets in appliances
H04N 5/765 - Interface circuits between an apparatus for recording and another apparatus
H04N 17/04 - Diagnosis, testing or measuring for television systems or their details for receivers
6.
ALARM SERVICE CONTROL METHOD AND APPARATUS, AND ELECTRONIC DEVICE
The present disclosure relates to the technical field of monitoring, and provides an alarm service control method and apparatus, and an electronic device. The method comprises: when alarm instruction information is detected, obtaining a rated lamp control safety power consumption, wherein the alarm instruction information instructs the imaging picture of a monitoring apparatus to be switched from a first imaging color to a second imaging color and the light supplementing lamp to be switched from a first light supplementing lamp to a second light supplementing lamp, and one of the first light supplementing lamp and the second light supplementing lamp is a white light supplementing lamp; when it is determined that the lamp control safety power consumption is greater than the switching power consumption of a dual filter switcher, controlling the white light supplementing lamp to be in an ON state, and controlling the light intensity of the white light supplementing lamp in the ON state on the basis of the lamp control safety power consumption, the power consumption of the first light supplementing lamp, and the pre-starting power consumption of the second light supplementing lamp; and, with the white light supplementing lamp in the ON state, executing an alarm service process corresponding to the alarm instruction information.
Provided are an image data processing method and apparatus and a storage medium. The image data processing method includes acquiring an image; in the case where the image does not need to be segmented, caching the image frame by frame; and forming data of the image cached frame by frame into data packets frame by frame and sending the data packets.
G06F 12/0875 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
The present invention relates to the technical field of image processing, and provides a color cast correction method and apparatus for a video conferencing device, a device, and a storage medium. The method comprises: acquiring a lamp bead working current corresponding to a video conferencing device and the pixel size of at least one image (110); on the basis of each pixel size, determining a target distance between a target user and the video conferencing device (120); on the basis of the lamp bead working current, the target distance, and a color cast value under the target distance, determining a color deviation degree corresponding to the color cast value under the target distance (130), wherein the color cast value is an RGB value corresponding to a camera module on the video conferencing device; and on the basis of the color deviation degree and the color cast value, determining a target correction RGB value corresponding to the camera module (140), wherein the target correction RGB value is used for performing color compensation when the camera module collects statistics about automatic white balance information.
Provided are a target retrieval method and device, and a storage medium. The target retrieval method includes acquiring structured feature information that is input when information retrieval is performed on a preset information retrieval database; acquiring all semi-structured feature information within a first preset period and a first preset range from the information retrieval database according to time information and range information contained in the input structured feature information and using the semi-structured feature information as to-be-retrieved hot data; acquiring real-time video streams of all cameras within a second preset period and a second preset range; acquiring semi-structured feature information of a potential target in the real-time video streams; and comparing the semi-structured feature information of the potential target with the semi-structured feature information in the hot data and determining whether the potential target is a retrieval target according to the comparison result.
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Disclosed in the present application are a method and apparatus for performing people counting on the basis of a full-scene image, and a device and a medium. The method for performing people counting on the basis of a full-scene image comprises: performing detection on an image to be subjected to detection, so as to obtain a plurality of initial head-shoulder boxes and a plurality of full-body boxes in a full-scene image; on the basis of each full-body box, obtaining a head-shoulder box corresponding to each full-body box to serve as a full body-head-shoulder box; and on the basis of matching results of the plurality of initial head-shoulder boxes and a plurality of full body-head-shoulder boxes, selecting target head-shoulder boxes by means of screening, and determining a people counting result on the basis of initial head-shoulder boxes and full body-head-shoulder boxes after screening, wherein the target head-shoulder boxes comprise the initial head-shoulder boxes or the full body-head-shoulder boxes.
Provided are a method and apparatus for calibrating an installation error in a pitch angle of a traffic radar, and a storage medium. The method includes changing the frequency of a traffic radar according to a preset strategy within a preset frequency range; acquiring energy values of echo signals of a preset target at different frequencies in sequence; and in response to determining that the acquired energy values of the echo signals satisfy a preset condition, determining the frequency corresponding to the maximum echo signal energy value as the operating frequency of the traffic radar to calibrate the installation error in the pitch angle of the traffic radar. The preset target is disposed on the ground.
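The frequency sweep in the preceding abstract reduces to a simple argmax over echo energies. The following minimal sketch assumes a hypothetical measurement callback `measure_echo_energy`; the band limits and step size are parameters, not values from the patent.

```python
# Minimal sketch of the frequency sweep: step through the preset band, record the echo
# energy of the ground-mounted preset target at each frequency, and keep the frequency
# with the strongest echo as the operating frequency. `measure_echo_energy` is a
# hypothetical placeholder for the radar's measurement interface.
def calibrate_operating_frequency(freq_start_hz: float, freq_stop_hz: float, step_hz: float,
                                  measure_echo_energy) -> float:
    best_freq, best_energy = freq_start_hz, float("-inf")
    freq = freq_start_hz
    while freq <= freq_stop_hz:
        energy = measure_echo_energy(freq)   # echo energy of the preset target at this frequency
        if energy > best_energy:
            best_freq, best_energy = freq, energy
        freq += step_hz
    return best_freq
```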
Provided are a method and an apparatus for lens focusing, a computer device, and a storage medium. The method includes: acquiring a test image obtained by a lens shooting a reference image at a current focusing position, and determining a low-frequency modulation transfer function value of the test image; in response to determining that the low-frequency modulation transfer function value meets a preset value range condition, determining a high-frequency modulation transfer function value of the test image, determining a movement step according to the high-frequency modulation transfer function value, and controlling the lens to move according to the movement step; and using the next focusing position to which the lens moves according to the movement step as a new current focusing position for the next focusing iteration.
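A minimal sketch of the step-selection rule described above follows. The MTF thresholds, the step sizes, and the function name `next_focus_step` are illustrative assumptions; only the structure (low-frequency MTF gating, high-frequency MTF driving the step) comes from the abstract.

```python
# Hedged sketch of the coarse-to-fine step rule: the low-frequency MTF gates whether the
# high-frequency MTF is evaluated, and the high-frequency value sets the next step size.
# Thresholds and the step table are illustrative, not values from the patent.
def next_focus_step(low_mtf: float, high_mtf: float | None,
                    low_range: tuple[float, float] = (0.3, 0.9)) -> int:
    lo, hi = low_range
    if not (lo <= low_mtf <= hi):
        return 50          # far from focus: keep taking coarse steps
    # Inside the low-frequency window: fall back to high-frequency MTF for fine control.
    if high_mtf is None:
        return 20
    if high_mtf < 0.4:
        return 20          # medium step
    if high_mtf < 0.7:
        return 5           # fine step
    return 1               # nearly in focus: minimal step
```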
A CP signal measurement circuit and a charging pile. The CP signal measurement circuit comprises a CP signal generation module (1), a rectification module (2) electrically connected to an output end of the CP signal generation module (1), and a hysteresis comparison circuit (3) and a voltage follower circuit (4) which are electrically connected to an output end of the rectification module (2); the CP signal generation module (1) is used for processing an input PWM signal, so as to output a first CP signal; the rectification module (2) is used for rectifying the first CP signal, so as to convert same into a second CP signal; the hysteresis comparison circuit (3) is used for outputting, on the basis of the second CP signal, a waveform signal, so as to measure the duty ratio of the first CP signal; and the voltage follower circuit (4) is used for outputting, on the basis of the second CP signal, a voltage amplitude signal, so as to measure the amplitude of the first CP signal, thus improving the electrical safety of charging systems.
G01R 31/69 - Testing of releasable connections, e.g. of terminals mounted on a printed circuit board, of terminals at the end of a cable or a wire harness, of plugs, of sockets, e.g. wall sockets or power sockets in appliances
B60L 53/60 - Monitoring or controlling charging stations
B60L 53/31 - Charging columns specially adapted for electric vehicles
14.
FOCUSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure relates to the technical field of photographing, and provides a focusing method and apparatus, an electronic device, and a storage medium. The method comprises: performing focus searching on the basis of a target focusing strategy, and obtaining a first target image captured by an electronic device when a focusing motor is at each first focus position; for each first target image, determining FVs of at least two image blocks in the first target image; for each target image block at the same position of each first target image, on the basis of the FVs of the target image block in all the first target images, determining a first maximum FV corresponding to the target image block; clustering the focus positions corresponding to all the first maximum FVs to obtain a first clustering result; and performing focusing on the basis of the first clustering result.
An angle adjustment method and apparatus for a solar device, and an electronic device (10) and a medium. The angle adjustment method for a solar device comprises: in the current adjustment period, on the basis of position information sent by a master device in a device cluster, determining the current solar angle (S110), wherein devices in the device cluster are all solar devices, and comprise the master device and slave devices; on the basis of a reference solar angle and the current solar angle, determining a solar deflection angle (S120); and sending the solar deflection angle to the device cluster, such that the solar devices in the device cluster perform angle adjustment on the basis of the solar deflection angle (S130).
A fill light control method and apparatus, a camera (10), and a storage medium. The fill light control method is applied to the camera (10). The camera (10) comprises at least two camera lenses and at least two fill lights, the camera lenses having one-to-one correspondence to the fill lights. The control method comprises: when a first camera lens and a corresponding first fill light rotate synchronously to a preset first limit region, during synchronous rotation of a second camera lens and a corresponding second fill light, monitoring in real time position information of the second fill light, wherein the first camera lens is a camera lens that rotates first among the at least two camera lenses, and the second camera lens is another camera lens among the at least two camera lenses other than the first camera lens; acquiring a second limit region of the second fill light; and when determining on the basis of the position information of the second fill light that the second fill light rotates to the boundary position of the second limit region, controlling the second fill light to remain stationary at the boundary position of the second limit region, and controlling the second camera lens to continue to rotate towards the second limit region or remain stationary.
A method includes: configuring an entity relationship between any two entity images, which are suspected to have the same entity, in an entity image sequence to obtain an entity triple; topologically connecting any two entity triples, which share the same entity relationship, through the entity relationship between two entity images in each of the two entity triples to construct at least one entity relationship graph; and segmenting at least one relationship sub-graph from the at least one entity relationship graph according to an affinity between entity images in the at least one entity relationship graph.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
The present application provides a vehicle tracking method, a vehicle system, a storage medium and a server. The vehicle tracking method comprises: recording vehicle feature information of a target vehicle when the target vehicle drives in; performing multi-frame license plate recognition on the basis of the vehicle feature information; if a target license plate number that first meets a recognition threshold is present, using the target license plate number as the final license plate number of the target vehicle, and recording the final license plate number as the driving-in license plate number of the parking area corresponding to the target vehicle; and when a driving-out vehicle that cannot be recognized is present in the parking area, matching the current driving-out vehicle against the driving-in license plate number recorded for the parking area, so as to determine vehicle information of the driving-out vehicle.
G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G08G 1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
19.
AUDIO DATA STORAGE METHOD AND APPARATUS FOR MULTI-CAMERA DEVICE, AND ELECTRONIC DEVICE
The present disclosure relates to the technical field of photography, and provides an audio data storage method and apparatus for a multi-camera device, and an electronic device. The method comprises: obtaining first audio data acquired by camera devices located in the same region; determining a first similarity between every two pieces of first audio data; determining at least two pieces of reference audio data on the basis of the first similarities, and determining a camera device corresponding to each piece of reference audio data as a reference camera device; determining a first audio quality of each piece of reference audio data; and determining target audio data on the basis of each first audio quality, and correspondingly storing storage paths of the target audio data and identifiers of the reference camera devices.
Provided are a cruise method and apparatus for a heavy pan-tilt, a medium, and an electronic device. The method includes the following steps: a starting preset position at which the heavy pan-tilt performs a preset-position cruise in a current unit rotation stage of a current pan-tilt cruise cycle is determined, where one unit rotation stage corresponds to one rotation; with the starting preset position as a stop starting point, at least two stop preset positions of the heavy pan-tilt in the current unit rotation stage are determined from preset positions divided in advance, where two adjacent stop preset positions in the same unit rotation stage are spaced by a preset number of preset positions; and the heavy pan-tilt is controlled to rotate sequentially to the at least two stop preset positions of the current unit rotation stage.
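The stop-selection step above is essentially modular arithmetic over an ordered list of preset positions. The sketch below assumes the preset positions are stored as indices and that the helper `stops_for_stage` is hypothetical.

```python
# Minimal sketch of selecting stop preset positions for one unit rotation stage:
# starting from the stage's starting preset position, take every (spacing + 1)-th
# preset position so that adjacent stops are separated by `spacing` preset positions.
def stops_for_stage(preset_positions: list[int], start_index: int, spacing: int,
                    stops_per_stage: int) -> list[int]:
    n = len(preset_positions)
    stops = []
    idx = start_index
    for _ in range(stops_per_stage):
        stops.append(preset_positions[idx % n])  # wrap around within one full rotation
        idx += spacing + 1
    return stops

# Example: stops_for_stage(list(range(12)), start_index=0, spacing=2, stops_per_stage=4)
# -> [0, 3, 6, 9]
```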
A data storage method and apparatus, an electronic device, and a storage medium. The data storage method comprises: according to a preset number of storage paths, determining a time granularity and a full coverage threshold respectively matching a plurality of photographing devices (S110); determining the number of full coverage devices within each time granularity, wherein the full coverage devices are photographing devices having a remaining memory less than or equal to a full coverage threshold matching a full coverage device (S120); and if it is determined that the number of full coverage devices within the target time granularity is less than a preset full coverage device number threshold, performing data storage according to the full coverage thresholds respectively matching the plurality of photographing devices (S130).
A video stream synchronization method and apparatus, a platform, and a storage medium. The method comprises: an input node respectively transmits at least two video blocks generated by cutting a video frame and a message packet to output nodes corresponding to the at least two video blocks, and sends the message packet to a frame synchronization control module when the transmission of the at least two video blocks and the message packet is completed; the frame synchronization control module processes an initial frame synchronization signal according to a first video identifier and a first frame number identifier in the message packet to generate a target frame synchronization signal, and respectively sends the target frame synchronization signal to the output nodes corresponding to the at least two video blocks; each output node parses the target frame synchronization signal to determine a second video identifier and a second frame number identifier, and when the first video identifier is the same as the second video identifier, and the first frame number identifier is the same as the second frame number identifier, the output node sends the video block of the first frame number identifier to a display device.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
The present disclosure relates to the technical field of communications, and provided thereby are a communication connection method and apparatus, an electronic device, a system, and a readable storage medium. The method comprises: on the basis of a first communication connection request initiated by a first client terminal, dividing the quantity of first idle ports of a network connection device corresponding to a local device into a quantity of first selected ports and a quantity of first reserved ports; on the basis of the quantity of first selected ports, executing a port guessing operation, and when the port guessing is successful, building a first communication connection between the first client terminal and the local device; and when the port guessing fails, building a second communication connection between the first client terminal and the local device on the basis of the quantity of first reserved ports.
A video person re-identification method, comprising: acquiring a video to be detected containing a target person, and identifying different posture images of the target person in said video (S101); determining the image quality of each posture image, and determining at least one optimal posture image having the highest image quality under each posture (S102); respectively performing feature extraction and human body contour line feature extraction on the at least one optimal posture image in chronological order to respectively obtain a representation feature and a human body contour line feature under the same posture (S103); fusing the representation features and the human body contour line features under different postures in a channel dimension to obtain fusion features of the target person (S104); and performing similarity calculation according to the fusion features and a person feature of the target person, and identifying the target person according to the calculation results of the similarity from high to low (S105).
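The fusion and matching steps (S104 and S105) above can be illustrated with a short numpy sketch. Feature extraction is out of scope here; the dictionaries, the fixed posture ordering, and the function names are assumptions made for the example.

```python
# Minimal sketch of the fusion and matching steps: per-posture representation and
# contour features are concatenated along the channel dimension, and candidates are
# ranked by cosine similarity to the target person's fused feature.
import numpy as np

def fuse_posture_features(representation: dict[str, np.ndarray],
                          contour: dict[str, np.ndarray]) -> np.ndarray:
    parts = []
    for posture in sorted(representation):                     # fixed posture order
        parts.append(np.concatenate([representation[posture], contour[posture]], axis=0))
    return np.concatenate(parts, axis=0)                       # fused feature of the person

def rank_by_similarity(target_feature: np.ndarray,
                       candidates: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [(pid, cosine(target_feature, feat)) for pid, feat in candidates.items()]
    return sorted(scores, key=lambda kv: kv[1], reverse=True)  # high similarity first
```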
The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, pertaining to the technical field of image processing. The image processing method comprises: determining a source grayscale histogram of a to-be-processed infrared image; on the basis of the source grayscale histogram, segmenting an input grayscale range and an output grayscale range to obtain at least two input grayscale sub-ranges and an output grayscale sub-range corresponding to each input grayscale sub-range; performing histogram equalization processing on the source grayscale histogram to obtain a target grayscale histogram, and on the basis of the target grayscale histogram, determining a mapping relationship between each input grayscale sub-range and the corresponding output grayscale sub-range to obtain a segmented grayscale mapping curve; and on the basis of the segmented grayscale mapping curve, performing grayscale mapping on the to-be-processed infrared image to obtain a contrast-enhanced image of the to-be-processed infrared image.
Provided are a blind image denoising method, an electronic device, and a storage medium. The blind image denoising method includes the following: A target noise parameter of a to-be-denoised image is determined according to an image noise calibration result obtained by pre-performing an image noise calibration on an image acquisition device of the to-be-denoised image; a preliminary filtering process is performed on the to-be-denoised image so that a preliminary filtered image of the to-be-denoised image is obtained; a noise level estimation result of the to-be-denoised image is determined according to the target noise parameter and the preliminary filtered image; and a final denoising process is performed on the to-be-denoised image according to the noise level estimation result so that a final blind denoising result of the to-be-denoised image is obtained.
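The noise-level estimation step above can be sketched under a common Poisson-Gaussian calibration model; the abstract does not name the model, so the variance formula, the final blending step, and the function names are assumptions.

```python
# Hedged sketch of the noise-level estimation step, assuming a calibrated model
# noise_variance = a * intensity + b. The preliminary filtered image stands in for
# the noise-free signal estimate; the final blend is only a placeholder filter.
import numpy as np

def estimate_noise_level(prefiltered: np.ndarray, a: float, b: float) -> np.ndarray:
    """Per-pixel noise standard deviation from calibrated parameters (a, b)."""
    variance = np.clip(a * prefiltered.astype(np.float64) + b, 0.0, None)
    return np.sqrt(variance)

def blind_denoise(noisy: np.ndarray, prefiltered: np.ndarray, a: float, b: float) -> np.ndarray:
    sigma = estimate_noise_level(prefiltered, a, b)
    # Illustrative final step: blend the noisy frame toward the prefiltered frame more
    # strongly where the estimated noise level is higher (a stand-in for the real filter).
    weight = sigma / (sigma + sigma.mean() + 1e-12)
    return (1.0 - weight) * noisy + weight * prefiltered
```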
A method includes determining data storage path information used by a data object in a target query time period; determining, from the data storage path information, a target operation object through which the data object passes when written from a source end to a destination end; and predicting a storage situation of the data object in the target query time period by performing a storage-affecting event analysis on the target operation object.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
29.
CLOUD PLATFORM DOCKING DEBUGGING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
Provided are a cloud platform docking debugging method and apparatus, an electronic device, and a storage medium. The cloud platform docking debugging method includes steps below. According to debugging application information of a to-be-debugged application, a target Internet of Things device is determined from multiple candidate Internet of Things devices provided externally by the cloud platform, and the target Internet of Things device is allocated to the to-be-debugged application. Docking and debugging between the target Internet of Things device allocated to the to-be-debugged application and the to-be-debugged application on a development end side are controlled. The candidate Internet of Things devices include Internet of Things devices that are pre-built, are accessed through the cloud platform, and externally provide a debugging service as a public resource.
A multidirectional adjustment support includes a base, a first rotating member that is rotatably connected to the base, and a second rotating member that is rotatably connected to the first rotating member. A rotation plane formed by the rotation of the first rotating member intersects a rotation plane formed by the rotation of the second rotating member. The base is provided with a first limiting structure. The first rotating member is provided with a second limiting structure. The first limiting structure and the second limiting structure are configured to cooperate with and limit relative positions between each other. The second rotating member is provided with a third limiting structure. The first limiting structure and the third limiting structure are configured to cooperate with and limit relative positions between each other.
F16M 13/02 - Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles for supporting on, or attaching to, an object, e.g. tree, gate, window-frame, cycle
F16M 11/12 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting in more than one direction
Provided are a method, apparatus and device for realizing shutter synchronization of a camera. The camera may include a plurality of sensors; the shutter types of the plurality of sensors are different; the method may include: acquiring a periodic synchronization reference signal and a synchronization signal corresponding to each sensor; and adjusting, with the synchronization reference signal as a reference, synchronization signal delays and/or exposure delays of the plurality of sensors to align the exposure areas corresponding to the plurality of sensors.
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
32.
CAMERA LENS MODULE, CAMERA LENS OPTICAL AXIS ADJUSTING DEVICE, AND BINOCULAR CAMERA
A camera lens module includes a mount, a first lens, a second lens, and an adjusting mechanism. The first lens is fixedly mounted on the mount. The adjusting mechanism includes an adjusting substrate and a pitching adjusting assembly. A first support pillar is disposed on the adjusting substrate. The first support pillar is connected to the mount. The second lens is fixedly disposed on the front surface of the adjusting substrate. The axis of the first support pillar and the optical axis of the first lens are disposed on the same z-axis. The optical axis of the second lens is collinear with the axis of the first support pillar. The other end of the adjusting substrate is connected to the mount through the pitching adjusting assembly and is configured to adjust the pitch angle of the optical axis of the second lens around an x-axis and a y-axis.
09 - Scientific and electric apparatus and instruments
Goods & Services
Camcorders; Anti-intrusion alarms; Anti-theft alarms, other than for vehicles; Biometric locks; Call bells; Data processing equipment, namely, couplers; Downloadable computer application software for mobile phones, namely, software for use in electronic storage of data; Electric door bells; Electric sensors; Electrical and electronic burglar alarms; Electro-dynamic apparatus for the remote control of signals; Electronic locks; Lighting control apparatus; Magnifying peepholes for doors; Memory cards; Sound alarms
Disclosed in embodiments of the present application are a digital zoom method and apparatus, an electronic device and a storage medium. The method comprises: acquiring distortion correction parameters, rotation parameters and center deviation parameters obtained by means of pre-calibration; according to a digital zoom threshold and zoom time, determining digital zoom magnifications matching a plurality of image frames; according to a preset parameter determination rule and the distortion correction parameters, determining the distortion correction parameters matching the plurality of image frames, and according to the preset parameter determination rule and the rotation parameters, determining the rotation parameters matching the plurality of image frames; and sequentially performing, on the plurality of image frames, distortion correction according to the matched distortion correction parameters, translation according to the center deviation parameters, rotation according to the matched rotation parameters, and cropping and reconstruction according to the matched digital zoom magnifications so as to realize digital zoom.
H04N 5/208 - Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
37.
STRIPE NOISE IMAGE OPTIMIZATION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
The embodiments of the present application disclose a stripe noise image optimization method and apparatus, and a device and a medium. The method comprises: extracting candidate stripe noise in a stripe noise image, and determining a candidate stripe window according to adjacent candidate stripe noise (S110); determining a target stripe window according to a candidate center distance between the candidate stripe window and a region-of-interest window in the stripe noise image (S210); and adjusting a collection parameter of an image sensor according to a target center distance between the target stripe window and the region-of-interest window, such that the target center distance between the target stripe window and the region-of-interest window in a stripe noise image collected according to the adjusted collection parameter satisfies a preset distance condition (S310).
H04N 25/677 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction for reducing the column or line fixed pattern noise
38.
TILE MAP PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure relates to the technical field of maps, and provides a tile map processing method and apparatus, an electronic device and a storage medium. The method comprises: for each level of a tile map, acquiring marker clustering data and tile serial number distribution information of the level; determining the serial number of a map tile where the marker clustering data is located, and generating a clustering data distribution diagram according to the serial number and the tile serial number distribution information, the clustering data distribution diagram representing the distribution of the marker clustering data in the map tile of the level; determining a compression weight of each map tile in the level on the basis of the clustering data distribution diagram; and on the basis of the compression weight, compressing the map tile corresponding to the compression weight to obtain a tile compressed map after tile map compression.
G06F 16/587 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
Provided are an optical center determination method and apparatus, an electronic device, and a medium. The optical center determination method includes: obtaining at least one reference line by connecting the edge points with the largest distance between them in a target image; determining, on the at least one reference line, at least two reference points whose grayscale values are in a preset ratio to a grayscale peak value; and determining the position information of an optical center according to the at least two reference points and auxiliary points whose grayscale values are consistent with the grayscale value of a reference point.
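The reference-point step above can be illustrated on a single line profile. The sketch below simplifies away sub-pixel interpolation and the auxiliary points; the preset ratio, the symmetry assumption, and the function name `center_on_line` are assumptions.

```python
# Minimal sketch of the reference-point step on one reference line: find the positions
# whose grayscale reaches a preset ratio of the peak value and take the midpoint of the
# outermost such positions as the optical-center coordinate along that line.
import numpy as np

def center_on_line(profile: np.ndarray, ratio: float = 0.5) -> float:
    """`profile` is the grayscale profile sampled along one reference line."""
    threshold = ratio * profile.max()
    above = np.where(profile >= threshold)[0]
    if above.size == 0:
        raise ValueError("no point on the line reaches the requested ratio of the peak")
    left, right = above[0], above[-1]       # outermost crossings of the threshold
    return (left + right) / 2.0             # assumed symmetric falloff about the center

# Example: combining the result from a horizontal and a vertical reference line gives
# an (x, y) estimate of the optical center.
```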
Disclosed in the present application are an image processing method and apparatus, and a device and a storage medium. The image processing method comprises: if it is determined, according to historical brightness data of a historical image frame collected by an image collector, that the historical image frame is an overexposed image frame, continuing to perform image collection with a preset image collection parameter so as to obtain an underexposed image frame; processing a target area of the underexposed image frame on the basis of at least one preset image gain parameter, and determining statistical brightness data of the target area processed on the basis of the at least one preset image gain parameter; and determining a target image gain parameter according to the statistical brightness data, preset brightness data and the at least one preset image gain parameter, and determining a target image collection parameter according to the target image gain parameter and the preset image collection parameter, so as to continue performing image collection according to the target image collection parameter.
The present disclosure provides a mask wearing detection method and apparatus, and an electronic device, and relates to the technical field of image processing. The method comprises: acquiring a schlieren image of a subject under detection; and based on an airflow intensity corresponding to the schlieren image, determining a detection result corresponding to the subject under detection; wherein the detection result comprises not wearing a mask or wearing a mask.
A biometric identification method, an updating method, an electronic device (10) and a storage medium (18). The biometric identification method comprises: collecting current biological information, the current biological information comprising current human face information and/or current fingerprint information (S110); identifying the current biological information, so as to obtain a current identification result (S120); and determining an identification coefficient on the basis of the collection time of the current biological information, and adjusting the current identification result on the basis of the identification coefficient, so as to obtain a target identification result (S130).
G07C 9/00 - Individual registration on entry or exit
G07C 9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
43.
CAMERA RAPID EXPOSURE METHOD AND APPARATUS, ELECTRONIC DEVICE, AND MEDIUM
The present disclosure relates to the technical field of cameras, and provides a camera rapid exposure method and apparatus, an electronic device, and a medium. The method comprises: when a camera is in a dormant state, adjusting an image acquisition frequency and an image acquisition resolution on the basis of currently measured object distance and object body volume; controlling a sensing unit in the camera, and acquiring and caching a pre-alarm image by using the adjusted image acquisition frequency and image acquisition resolution; on the basis of the brightness of a plurality of frames of the pre-alarm image cached by the camera when in the dormant state, estimating the brightness and exposure factor of a first frame image after the camera has started up; and once the camera is controlled to start up, performing exposure according to the estimated brightness and exposure factor of the first frame image after the camera has started up.
Provided are a method and an apparatus for detecting fire spots, an electronic device, and a storage medium. In this embodiment of the present application, the target position of a focus lens group is determined based on the minimum object distance and the maximum object distance in a collected image, so that images collected at the target position of the focus lens group achieve maximum detection clarity. In the detection of a suspected fire spot region, moving the focus lens group from the closest position corresponding to the minimum object distance to the farthest position corresponding to the maximum object distance sweeps the lens focus group across the full object-distance range, which solves the problem of insufficient lens depth of field and the resulting inability to cover all fire spots in a multi-object-distance scene.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 10/12 - Details of acquisition arrangements; Constructional details thereof
H04N 23/67 - Focus control based on electronic image sensor signals
H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
H04N 23/23 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only, from thermal infrared radiation
45.
AUTO-FOCUSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND MEDIUM
Provided are an auto-focusing method and apparatus, an electronic device, and a medium. The method includes: determining image block change information of a current frame image relative to a reference frame image in a scene monitoring area, where the scene monitoring area includes a pre-divided image effective area in a photographing picture; determining, according to the image block change information, whether to trigger a focusing operation on the scene monitoring area; and in response to determining to trigger the focusing operation, determining a focusing area of interest triggered by the movement of an object relative to the scene monitoring area, and performing the focusing operation on the focusing area of interest.
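A minimal sketch of the trigger decision above follows. The block grid, the change metric (mean absolute difference), and the thresholds are assumptions; the abstract only specifies that block-level change information drives the refocus decision and the focusing area of interest.

```python
# Minimal sketch of the trigger decision: split the monitored area into blocks, measure
# the mean absolute change of each block against the reference frame, and treat blocks
# whose change exceeds a threshold as the focusing area of interest.
import numpy as np

def changed_blocks(current: np.ndarray, reference: np.ndarray,
                   grid: tuple[int, int] = (8, 8), threshold: float = 12.0) -> list[tuple[int, int]]:
    h, w = current.shape
    bh, bw = h // grid[0], w // grid[1]
    triggered = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cur = current[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float64)
            ref = reference[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float64)
            if np.abs(cur - ref).mean() > threshold:
                triggered.append((r, c))      # this block becomes part of the focusing ROI
    return triggered

def should_refocus(current: np.ndarray, reference: np.ndarray, min_blocks: int = 3) -> bool:
    return len(changed_blocks(current, reference)) >= min_blocks
```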
Disclosed in the embodiments of the present application are a color correction matrix optimization method and apparatus, and an electronic device and a medium. The method comprises: on the basis of a color correction matrix, performing correction processing on a color source matrix of a collected color card image, so as to obtain a color output matrix; determining a global hue error and a saturation constraint regular term according to Lab coordinates of the color output matrix and Lab coordinates of a color target matrix of a target color card image, and constructing a loss function; and updating the color correction matrix according to a loss function value until the loss function value converges, so as to obtain an optimal color correction matrix, such that on the basis of the optimal color correction matrix, color correction is performed on an image to be processed.
A multispectral multi-sensor synergistic processing method and apparatus, and a storage medium. The multispectral multi-sensor synergistic processing method comprises: acquiring images of a plurality of channels, an image of each channel being a monochrome image, and colors of the images of the different channels being different; according to the monochrome images, performing target detection, identification and tracking, generating a tracking box for a target, and storing information of the tracking box; and registering and fusing the images of the plurality of channels to generate a fused image, and overlaying the stored information of the tracking box to the fused image.
A radar-vision collaborative target tracking method and target tracking system. The radar-vision collaborative target tracking method comprises: obtaining a first image of a first vehicle at an entrance node to determine vehicle information, and sending, to an adjacent intermediate node, the vehicle information and a time point at which the first vehicle passes a recognition node corresponding to the entrance node; determining traveling information of the first vehicle at the intermediate node on the basis of radar tracking information, and sending, to a next node, the time point at which the first vehicle arrives at the recognition point and the vehicle information; determining the traveling information at an exit node on the basis of the radar tracking information, obtaining a second image of the first vehicle to determine vehicle information, and when the vehicle information determined according to the second image is consistent with the received vehicle information, determining that the tracking is correct. A vehicle is tracked by using a radar at the intermediate node, and when the present application is applied to a tunnel scene, the provision of cameras in a tunnel can be avoided, and high-precision tracking of vehicles in the tunnel is achieved.
The present application provides a device connection method and apparatus, an electronic device and a storage medium. The device connection method comprises: acquiring connection information of an application and a device, the connection information comprising fastest connection mark information, and the fastest connection mark information comprising the connection mode in which a connection is established fastest when a plurality of connection modes run in parallel; acquiring the connection mode having the maximum proportion in the fastest connection mark information within a first preset period N as a first connection mode; acquiring at least one connection mode whose proportion ranks within a preset number of top positions among the connection modes sorted in descending order of proportion in the fastest connection mark information within a second preset period M, to form a first set, M being greater than N; using the first connection mode as the preferred connection mode for the first connection between the device and the application; and using the first set as the parallel connection modes for the first connection between the device and the application.
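The two statistics in the preceding abstract reduce to frequency counts over logged "fastest connection" marks. In the sketch below, the record format, the windows expressed in seconds, and the function name `preferred_and_parallel_modes` are assumptions made for illustration.

```python
# Minimal sketch of the two statistics: the most frequent fastest mode inside window N
# becomes the preferred mode, and the top-k modes inside the longer window M (M > N)
# become the set of modes attempted in parallel on the first connection.
import time
from collections import Counter

def preferred_and_parallel_modes(history: list[tuple[float, str]], n_seconds: float,
                                 m_seconds: float, top_k: int) -> tuple[str, list[str]]:
    now = time.time()
    recent_n = Counter(mode for ts, mode in history if now - ts <= n_seconds)
    recent_m = Counter(mode for ts, mode in history if now - ts <= m_seconds)
    if not recent_n or not recent_m:
        raise ValueError("no fastest-connection marks inside the statistics windows")

    first_mode = recent_n.most_common(1)[0][0]                     # preferred mode
    first_set = [mode for mode, _ in recent_m.most_common(top_k)]  # parallel modes
    return first_mode, first_set
```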
The invention relates to an image transmission method, an electronic device, and a medium. The image transmission method comprises: according to the data volume of the to-be-transmitted image frame in each channel of a sending terminal and the transmission bandwidth between the sending terminal and a receiving terminal, determining the respective transmission time needed by the sending terminal to transmit the to-be-transmitted image frame in each channel; according to the respective transmission time needed by the sending terminal to transmit the to-be-transmitted image frame in each channel, segmenting the theoretical time needed by the sending terminal to transmit the to-be-transmitted image frames in multiple channels to obtain a respective time duration corresponding to each transmission time; and controlling the sending terminal to send the to-be-transmitted image frame corresponding to each transmission time to the receiving terminal within the respective time duration corresponding to each transmission time.
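The time-slicing step above can be illustrated with a short sketch. The channel keys, the shared-bandwidth assumption, and the function name `schedule_channel_slices` are illustrative; the abstract only specifies that the total theoretical time is segmented into per-channel durations.

```python
# Minimal sketch of the time-slicing step: compute the transmission time each channel's
# frame needs at the shared bandwidth, then segment the total theoretical time into
# consecutive slices, one per channel, and send each frame within its own slice.
def schedule_channel_slices(frame_bytes: dict[str, int],
                            bandwidth_bytes_per_s: float) -> dict[str, tuple[float, float]]:
    slices, offset = {}, 0.0
    for channel, size in frame_bytes.items():
        needed = size / bandwidth_bytes_per_s        # time this channel's frame needs
        slices[channel] = (offset, offset + needed)  # (start, end) within the total period
        offset += needed
    return slices

# Example: schedule_channel_slices({"ch0": 3_000_000, "ch1": 1_000_000}, 10_000_000.0)
# -> {"ch0": (0.0, 0.3), "ch1": (0.3, 0.4)}
```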
Provided are a white balance correction method and apparatus, a device, and a storage medium. The white balance correction method includes: inputting an image to be corrected into a pre-trained chromatic-aberration-free point model to obtain a chromatic-aberration-free point weight map of the image to be corrected; determining, according to the chromatic-aberration-free point weight map, an illumination color parameter of the image to be corrected; and performing, according to the illumination color parameter, white balance correction on the image to be corrected to obtain a corrected image.
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
G06T 7/90 - Determination of colour characteristics
52.
Device cross-area access method, electronic device, and non-transitory computer-readable storage medium
Provided are a device cross-region access method and apparatus, an electronic device, and a storage medium. The method includes: receiving a cross-region access request from a target device and determining IP address information in the cross-region access request; determining information of a cross-region proxy server matching the target device according to the IP address information, where the cross-region proxy server is preset in a proxy region different from a target region where a target server is located, and the cross-region proxy server and the target server are in a distributed deployment; and transmitting the information of the cross-region proxy server to the target device to enable the target device to establish a connection to the cross-region proxy server.
A vector data processing method includes that each computing node receives newly added vector data and places the data in a cache; when detecting that the amount of data in its cache meets a preset amount, a computing node sends the amount to a master node; the master node acquires the amounts of all computing nodes, and in the case where the average amount reaches a preset average amount, the master node instructs each computing node to extract a training sample; each computing node extracts a training sample from its cache and sends the training sample to a training node; the training node trains a classifier to obtain a target classifier; and each computing node classifies the newly added vector data according to the similarity of vector features, establishes a feature classification index according to the classification result, and performs vector data retrieval.
An image data processing method and apparatus, and a storage medium. The image data processing method comprises: acquiring an image; when the image does not need to be segmented, caching the image frame by frame; and forming data packets frame by frame from the data of the image cached frame by frame, and then sending the data packets.
A temperature detection method includes: acquiring real-time temperatures of a detected person, determining whether the temperature changing trend of the detected person is a rising trend or a declining trend, and acquiring a real-time temperature changing speed; determining an upper limit and a lower limit of the real-time temperature changing speed; determining a temperature difference value corresponding to the upper limit according to the upper limit, and determining a temperature difference value corresponding to the lower limit according to the lower limit; determining a heat balance temperature upper limit according to the real-time temperatures, the temperature difference value corresponding to the upper limit, and the temperature changing trend, and determining a heat balance temperature lower limit according to the real-time temperatures, the temperature difference value corresponding to the lower limit, and the temperature changing trend; and determining a temperature detection result.
A target retrieval method and device, and a storage medium. The target retrieval method may comprise: obtaining structured feature information inputted when information retrieval is performed on a preset information retrieval database; obtaining all semi-structured feature information in a first preset time period and a first preset range from the information retrieval database according to time information and range information comprised in the inputted structured feature information, and taking the semi-structured feature information as hot data to be retrieved; obtaining real-time video streams of all cameras in a second preset time period and a second preset range; obtaining semi-structured feature information of a possible target in the real-time video streams; and comparing the semi-structured feature information of the possible target with semi-structured feature information in the hot data, and according to a comparison result, determining whether the possible target is a retrieval target.
Provided are a wake-up method and device for a surveillance camera, a surveillance camera and a medium. The surveillance camera includes a detection sensor and an image sensor. The wake-up method for a surveillance camera includes: in response to the detection sensor detecting presence of an intrusion target in a surveilled region, controlling the image sensor to enter a first image acquisition mode; controlling the image sensor in the first image acquisition mode to acquire at least two frames of surveillance images; determining whether the intrusion target is a surveilled target according to the at least two frames of surveillance images; and in response to determining that the intrusion target is a surveilled target, switching the image sensor from the first image acquisition mode to a second image acquisition mode, and controlling the image sensor in the second image acquisition mode to acquire a surveillance image having the surveilled target.
H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 20/40 - ScenesScene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
H04N 23/65 - Control of camera operation in relation to power supply
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
58.
Code rate control method and apparatus, image acquisition device, and readable storage medium
A code rate control method and apparatus, an image acquisition device, and a readable storage medium are provided. The method includes: acquiring the gain and exposure time of an image to be encoded from an image processing module of an image acquisition device; obtaining a corresponding reference distortion degree according to the gain and exposure time of said image; calculating the difference between the distortion degree in a characteristic region of said image and the reference distortion degree; calculating a distortion tolerance degree of the macro blocks constituting said image according to that difference; performing macro block prediction on the respective macro blocks in said image to obtain an optimum macro block prediction mode; and encoding said image according to the optimum macro block prediction mode.
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
59.
ENTITY IMAGE CLUSTERING PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
Disclosed in embodiments of the present application are an entity image clustering processing method and apparatus, an electronic device and a storage medium. The method comprises: configuring an entity relationship between any two entity images suspected to have the same entity in an entity image sequence to obtain a corresponding entity triple; performing topological connection on any two entity triples sharing the same entity relationship by means of the entity relationship between the two entity images in the entity triple, to construct at least one entity relationship graph; and segmenting at least one relationship sub-graph from the at least one entity relationship graph according to the intimacy between the entity images in the at least one entity relationship graph.
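As an illustration of the graph construction and segmentation steps above, the following is a minimal Python sketch: it assumes the entity triples are available as (image_a, image_b, intimacy) tuples and that segmentation simply drops edges below an intimacy threshold before taking connected components. The data layout, the threshold value, and the use of plain BFS are assumptions for illustration, not details from the abstract.

```python
from collections import defaultdict, deque

def segment_relationship_graph(triples, intimacy_threshold=0.6):
    """Build an entity relationship graph from (image_a, image_b, intimacy)
    triples and split it into sub-graphs by dropping weak edges.
    `intimacy_threshold` is an illustrative parameter, not a value from the source."""
    adjacency = defaultdict(set)
    nodes = set()
    for image_a, image_b, intimacy in triples:
        nodes.update((image_a, image_b))
        if intimacy >= intimacy_threshold:      # keep only sufficiently "intimate" links
            adjacency[image_a].add(image_b)
            adjacency[image_b].add(image_a)

    # Each connected component of the pruned graph is one relationship sub-graph,
    # i.e. one cluster of entity images suspected to show the same entity.
    visited, clusters = set(), []
    for start in nodes:
        if start in visited:
            continue
        component, queue = [], deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            component.append(node)
            for neighbour in adjacency[node]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        clusters.append(component)
    return clusters

# Example: images "a"/"b"/"c" cluster together, "d"/"e" form a second cluster.
print(segment_relationship_graph(
    [("a", "b", 0.9), ("b", "c", 0.8), ("c", "d", 0.2), ("d", "e", 0.7)]))
```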
09 - Scientific and electric apparatus and instruments
Goods & Services
(1) Accumulators being electrical storage batteries; charging stations for electric vehicles; electric batteries for vehicles; electric vehicle chargers; external battery pack for use with smart phones; galvanic cells; general purpose batteries; induction chargers for electric vehicles; photovoltaic modules; power banks; rechargeable electric storage batteries; smart chargers for electric vehicles; solar batteries
09 - Scientific and electric apparatus and instruments
Goods & Services
Galvanic cells; accumulators, electric; batteries, electric; solar batteries; mobile power supply [rechargeable batteries]; battery chargers; photovoltaic power plants; rechargeable batteries; electric car charging piles; auxiliary battery packs; electrical storage batteries for household use.
62.
Video stream transmission control method and apparatus, device, and medium
A video stream transmission control method includes: in response to detecting that the receiving time interval of two adjacent received video frames among multiple received video frames is less than an interval threshold, determining the theoretical receiving time of the next video frame sent by the video stream sending device corresponding to each received video frame; sorting the multiple theoretical receiving times corresponding to the multiple video stream sending devices, and determining the expected receiving time of the next video frame sent by each of the multiple video stream sending devices according to the sorting result and an interval adjustment time; and adjusting the video frame sending time interval of each video stream sending device according to the expected receiving time and the theoretical receiving time corresponding to that device.
H04N 21/658 - Transmission by the client directed to the server
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
09 - Scientific and electric apparatus and instruments
Goods & Services
Accumulators, electric; Batteries, electric; Battery packs; Battery charge devices; Galvanic cells; Photovoltaic solar panels for the production of electricity; Portable power chargers; Rechargeable batteries; Electric car charging piles; Electrical storage batteries; Solar batteries
64.
VEHICLE WINDOW COLOR FRINGE PROCESSING METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
Disclosed in embodiments of the present application are a vehicle window color fringe processing method and apparatus, a storage medium, and an electronic device. The method comprises: extracting a first vehicle window image in an image to be processed, the first vehicle window image being an image comprising color fringes; and inputting the first vehicle window image into a pre-trained vehicle window color fringe processing model to obtain a second vehicle window image output by the vehicle window color fringe processing model, the second vehicle window image being an image having no color fringe, or, the intensity of color fringes in the second vehicle window image being less than that of the color fringes in the first vehicle window image.
Provided are a watermark adding method and apparatus, a storage medium and a device. The method includes steps described below. To-be-added watermark information is acquired, and a time offset is determined according to the watermark information; a video frame rate is acquired, and time information of a frame image in a video is determined according to the video frame rate; and offset processing is performed on the time information according to the time offset to add the watermark information to the video.
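Since the watermark above is carried purely in timing, the core operations are deriving a time offset from the watermark payload and shifting each frame's time information by it. A hedged Python sketch follows; the byte-sum mapping to a millisecond offset and the 40 ms cap are illustrative assumptions, not the source's actual encoding.

```python
def watermark_time_offset(watermark: bytes, max_offset_ms: int = 40) -> float:
    """Map watermark bytes to a small time offset in milliseconds (illustrative mapping only)."""
    return (sum(watermark) % max_offset_ms) + 1  # 1..max_offset_ms milliseconds

def frame_times_with_watermark(frame_count: int, frame_rate: float, watermark: bytes):
    """Nominal presentation time of each frame, shifted by the watermark offset."""
    offset_s = watermark_time_offset(watermark) / 1000.0
    frame_interval = 1.0 / frame_rate           # time information derived from the frame rate
    return [index * frame_interval + offset_s for index in range(frame_count)]

# 5 frames at 25 fps carrying the (hypothetical) watermark b"CAM-01".
print(frame_times_with_watermark(5, 25.0, b"CAM-01"))
```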
Provided are a vehicle monitoring method and a vehicle monitoring system. The vehicle monitoring method includes that: a polarization angle of polarized light in a sky image reflected by a vehicle window in a monitoring scenario is calculated, and a light-filtering polarization angle is calculated according to the polarization angle of the polarized light in the sky image reflected by the vehicle window, where the polarized light in the sky image is formed by scattered sunlight in a sky region corresponding to the sky image; the polarized light in the sky image reflected by the vehicle window in the monitoring scenario is filtered out according to the light-filtering polarization angle; and the monitoring scenario is imaged to form a monitoring image.
G01J 4/04 - Polarimeters using electric detection means
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/36 - Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
67.
METHOD AND APPARATUS FOR LENS FOCUSING, COMPUTER DEVICE AND STORAGE MEDIUM
Embodiments of the present application disclose a method and apparatus for lens focusing, a computer device and a storage medium. The method comprises: acquiring a test image obtained by a lens photographing a reference image at a current focus adjustment position, and determining a low-frequency modulation transfer function value of the test image; in response to determining that the low-frequency modulation transfer function value meets a preset numerical range condition, determining a high-frequency modulation transfer function value of the test image, determining a motion step length according to the high-frequency modulation transfer function value, and controlling the lens to move according to the motion step length; and using the next focus position to which the lens arrives as a new current focus adjustment position, so as to perform one-time focusing.
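The focusing loop above gates on the low-frequency modulation transfer function (MTF) value and then derives the motion step from the high-frequency MTF value. The Python sketch below shows one such decision with the MTF values taken as inputs; the numeric range, the linear step mapping, and the step limits are illustrative assumptions.

```python
def focusing_step(mtf_low: float, mtf_high: float,
                  low_range=(0.2, 0.8), max_step=200, min_step=5) -> int:
    """One focusing decision: the low-frequency MTF value gates whether the lens
    is close enough to focus for the fine rule, and the high-frequency MTF value
    then scales the motor step.  All thresholds and the linear mapping are
    illustrative assumptions, not values from the source."""
    if not (low_range[0] <= mtf_low <= low_range[1]):
        return max_step                              # far from focus: take the coarse step
    # A higher high-frequency MTF means a sharper image, so move less.
    step = int(round(max_step * (1.0 - min(max(mtf_high, 0.0), 1.0))))
    return max(min_step, step)

print(focusing_step(mtf_low=0.5, mtf_high=0.7))   # near focus, fairly sharp -> 60
print(focusing_step(mtf_low=0.9, mtf_high=0.7))   # low-frequency check fails -> 200
```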
Disclosed in embodiments of the present application are a cruise method and apparatus for a heavy gimbal, a medium, and an electronic device. The method comprises: determining an initial preset position of a heavy gimbal performing preset-position cruising in the current unit rotating stage of the current gimbal cruise period, where one unit rotating stage corresponds to one full circle of rotation; using the initial preset position as a staying starting point, and determining, from pre-divided preset positions, at least two staying preset positions of the heavy gimbal for the current unit rotating stage, where two adjacent staying preset positions in the same unit rotating stage are spaced apart by a preset number of preset positions; and controlling the heavy gimbal to rotate sequentially through the at least two staying preset positions of the current unit rotating stage and, after one round of sequential rotation, to enter the next unit rotating stage of the current gimbal cruise period and continue rotating.
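Choosing the staying presets for one unit rotating stage reduces to a stride-based pick over the pre-divided preset positions, starting from the stage's initial preset. A small Python sketch under assumed inputs (presets as a list of indices, the spacing as a fixed stride) follows.

```python
def staying_presets_for_stage(preset_positions, start_index, spacing):
    """Pick the staying presets of one unit rotating stage: begin at the stage's
    initial preset and keep every (spacing + 1)-th preset, wrapping once around
    the circle.  Representing the "preset number of preset positions" as a fixed
    integer stride is an assumption made for this illustration."""
    total = len(preset_positions)
    stride = spacing + 1
    stays = []
    for step in range(0, total, stride):
        stays.append(preset_positions[(start_index + step) % total])
    return stays

# 12 presets around the circle, stage starts at preset 2, skip 2 presets between stays.
print(staying_presets_for_stage(list(range(12)), start_index=2, spacing=2))
# -> [2, 5, 8, 11]
```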
Disclosed in the present application are a traffic radar pitch angle installation error calibration method and apparatus, and a storage medium. The method comprises: changing the frequency of a traffic radar within a preset frequency range according to a preset strategy; sequentially acquiring echo signal energy values of a preset target at different frequencies; and in response to determining that the acquired echo signal energy values meet a preset condition, determining the frequency corresponding to the maximum echo signal energy value as the working frequency of the traffic radar, so as to calibrate the traffic radar pitch angle installation error, wherein the preset target is arranged on the ground, and the horizontal distance between the preset target and the projection point of the traffic radar on the ground equals the length of the horizontal projection of the central beam of the traffic radar when the traffic radar is installed at the expected pitch angle.
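The calibration above amounts to a frequency sweep that keeps the frequency whose echo from the reference target is strongest. Below is a Python sketch with the radar measurement stubbed behind a callable; the callable, the sweep bounds, and the step size are assumptions for illustration.

```python
def calibrate_working_frequency(measure_echo_energy, freq_start_ghz, freq_stop_ghz, step_ghz):
    """Sweep the radar frequency over [freq_start_ghz, freq_stop_ghz] and return
    the frequency whose echo from the preset ground target is strongest.
    `measure_echo_energy(freq)` stands in for the real radar measurement."""
    best_freq, best_energy = None, float("-inf")
    freq = freq_start_ghz
    while freq <= freq_stop_ghz + 1e-9:
        energy = measure_echo_energy(freq)
        if energy > best_energy:
            best_freq, best_energy = freq, energy
        freq += step_ghz
    return best_freq, best_energy

# Toy echo model peaking at 24.15 GHz, standing in for the real measurement.
toy_echo = lambda f: -abs(f - 24.15)
print(calibrate_working_frequency(toy_echo, 24.0, 24.25, 0.01))
```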
A camera lens module, a camera lens optical axis adjusting device, and a binocular camera. The camera lens module comprises a fixed seat (10), a first lens (1), a second lens (2) and an adjusting mechanism, wherein the first lens (1) is fixedly installed on the fixed seat (10); the adjusting mechanism comprises an adjusting substrate (31) and a pitching adjusting assembly; the adjusting substrate (31) is provided with a first supporting column (311); the first supporting column (311) is connected to the fixed seat (10); the second lens (2) is fixedly arranged on a front surface of the adjusting substrate (31); an axis of the first supporting column (311) and an optical axis of the first lens (1) are arranged to be collinear with a Z axis; an optical axis of the second lens (2) and an axis of the first supporting column (311) are collinear; and the other end of the adjusting substrate (31) is in floating connection with the fixed seat (10) by means of the pitching adjusting assembly, so as to adjust a pitching angle of the optical axis of the second lens (2) around an X axis and a Y axis. The camera lens optical axis adjusting device comprises the camera lens module and an adjusting reference piece (6). The binocular camera comprises the camera lens module.
Provided are an image exposure adjustment method and apparatus, a device, and a medium. The image exposure adjustment method includes performing human body detection on a collected image; in a case where a human body is detected, segmenting the image to determine a foreground region and a background region in the image; determining a mask image according to the foreground region and the background region; and determining an exposure weight table according to the mask image, and performing exposure value adjustment on the image according to the exposure weight table.
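Once the foreground/background mask is available, the exposure weight table is essentially a block-wise weighting that favours the human-body blocks. A Python sketch with an assumed 16x16 block grid and a 4:1 foreground-to-background weighting follows; both values are illustrative, not taken from the source.

```python
import numpy as np

def exposure_weight_table(mask, grid=(16, 16), fg_weight=4, bg_weight=1):
    """Turn a binary foreground mask (1 = human body) into a block exposure
    weight table: blocks dominated by foreground pixels get a higher weight.
    Grid size and the 4:1 weighting are illustrative assumptions."""
    mask = np.asarray(mask, dtype=float)
    rows, cols = grid
    h_step, w_step = mask.shape[0] // rows, mask.shape[1] // cols
    table = np.full(grid, bg_weight, dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = mask[r * h_step:(r + 1) * h_step, c * w_step:(c + 1) * w_step]
            if block.mean() > 0.5:            # block is mostly foreground
                table[r, c] = fg_weight
    return table

mask = np.zeros((480, 640))
mask[120:360, 200:440] = 1                     # assumed human-body region
print(exposure_weight_table(mask).sum())       # heavier total weight where the body is
```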
Disclosed in the embodiments of the present application are a cloud platform docking debugging method and apparatus, and an electronic device and a storage medium. The cloud platform docking debugging method comprises: according to debugging application information of an application to be debugged, determining a target Internet-of-Things device from among a plurality of candidate Internet-of-Things devices externally provided by a cloud platform, and allocating the target Internet-of-Things device to the application to be debugged; and controlling the target Internet-of-Things device allocated to the application to be debugged to perform docking debugging with the application to be debugged on the development end side, wherein the candidate Internet-of-Things devices comprise an Internet-of-Things device which is pre-constructed, is accessed by means of the cloud platform, and externally provides a debugging service as a public resource.
The present application provides an optical center determination method and apparatus, an electronic device, and a medium. The optical center determination method comprises: connecting the edge points having the largest distance in a target image to obtain at least one reference line; determining, on the at least one reference line, at least two reference points whose grayscale values are in a preset ratio to a grayscale peak value; and determining position information of an optical center according to the at least two reference points and auxiliary points whose grayscale values are consistent with those of the reference points.
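A simplified Python sketch of the reference-line step: sample the line joining the two farthest edge points, locate the points whose grayscale value is closest to a preset ratio of the line's peak value, and take their midpoint as the optical-centre estimate. The 0.5 ratio, the synthetic vignetting image, and the midpoint rule are assumptions for illustration, not the source's exact procedure.

```python
import numpy as np

def optical_center_on_line(gray, point_a, point_b, ratio=0.5, samples=400):
    """Sample the reference line between the two farthest edge points and locate
    the points whose grey value is closest to `ratio` times the peak grey value
    on that line; their midpoint serves here as a simple optical-centre estimate."""
    ts = np.linspace(0.0, 1.0, samples)
    ys = np.round(point_a[0] + ts * (point_b[0] - point_a[0])).astype(int)
    xs = np.round(point_a[1] + ts * (point_b[1] - point_a[1])).astype(int)
    profile = gray[ys, xs].astype(float)
    target = ratio * profile.max()
    peak_index = int(profile.argmax())
    # Reference points: closest match to the target value on each side of the peak.
    left = int(np.abs(profile[:peak_index + 1] - target).argmin())
    right = peak_index + int(np.abs(profile[peak_index:] - target).argmin())
    return (ys[left] + ys[right]) / 2.0, (xs[left] + xs[right]) / 2.0

# Synthetic vignetting-like image whose brightness peaks near (60, 80).
yy, xx = np.mgrid[0:120, 0:160]
gray = np.exp(-((yy - 60) ** 2 + (xx - 80) ** 2) / (2 * 40.0 ** 2))
print(optical_center_on_line(gray, (0, 0), (119, 159)))
```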
Embodiments of the present application disclose a blind image denoising method and apparatus, an electronic device, and a storage medium. The blind image denoising method comprises: according to an image noise calibration result obtained by performing image noise calibration on an image acquisition device of an image to be denoised in advance, determining a target noise parameter of the image to be denoised; performing preliminary filtering on the image to be denoised, so as to obtain a preliminarily filtered image of the image to be denoised; determining, according to the target noise parameter and the preliminarily filtered image, a noise level estimation result of the image to be denoised; and according to the noise level estimation result, finally denoising the image to be denoised, so as to obtain a final blind denoising result of the image to be denoised.
An image capturing method and apparatus, an electronic photography device and a computer-readable storage medium are provided. The image capturing method includes: in a snapshot mode, performing image acquisition at a first frame rate and buffering an acquired first captured image; in a live mode, acquiring a second captured image at a second frame rate, where the second frame rate is less than the first frame rate; and processing the buffered first captured image and the second captured image at the second frame rate.
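The two modes above differ in acquisition rate and in whether frames are buffered for later processing alongside the slower live stream. The Python sketch below captures just that buffering logic; the frame rates and buffer depth are illustrative assumptions.

```python
from collections import deque

class DualRateCapture:
    """Sketch of the snapshot/live capture flow: snapshot-mode frames are acquired
    at a high rate and buffered, live-mode frames are acquired at a lower rate and
    processed together with the buffered frames.  Rates and buffer depth are
    illustrative; timing itself is not simulated here."""

    def __init__(self, snapshot_fps=120, live_fps=30, buffer_depth=8):
        self.snapshot_interval = 1.0 / snapshot_fps   # kept for reference only
        self.live_interval = 1.0 / live_fps           # kept for reference only
        self.buffer = deque(maxlen=buffer_depth)

    def on_snapshot_frame(self, frame):
        self.buffer.append(frame)              # first captured images are buffered

    def on_live_frame(self, frame):
        # Process the buffered high-rate frames together with the live frame.
        buffered = list(self.buffer)
        self.buffer.clear()
        return {"live": frame, "buffered": buffered}

capture = DualRateCapture()
for i in range(4):
    capture.on_snapshot_frame(f"snap-{i}")
print(capture.on_live_frame("live-0"))
```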
Provided is an object recognition method which includes: obtaining a first visible-light image acquired by a first camera device and a second visible-light image acquired by a second camera device; performing exposure processing on the first visible-light image according to the luminance information of the bright area image of the first visible-light image, and performing exposure processing on the second visible-light image according to the luminance information of the dark area image of the first visible-light image and/or the second visible-light image, where the dark area image is an area image having a luminance value less than or equal to a preset value; and performing target object detection on the first visible-light image and the second visible-light image obtained after exposure processing, and recognizing and verifying a target object according to the detection result.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/58 - Extraction of image or video features relating to hyperspectral data
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/40 - Spoof detection, e.g. liveness detection
77.
Image capturing method and device, apparatus, and storage medium
Provided are an image capturing method and apparatus, a device and a storage medium. The method includes: at a new acquisition moment, predicting a predicted projection area position of a target object in a current captured image on an image sensor and estimated exposure brightness information of the target object in the predicted projection area position; adjusting, according to a type of the target object and the estimated exposure brightness information, an exposure parameter of the target object in the predicted projection area position when the new acquisition moment arrives; and acquiring a new captured image at the new acquisition moment according to the adjusted exposure parameter, where both the new captured image and the current captured image include the target object.
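The adjustment step maps the target type and the estimated brightness of the predicted projection area to a new exposure parameter. A simple proportional Python sketch follows; the per-type brightness set-points, the proportional rule, and the exposure limits are assumptions, not the source's actual control law.

```python
def adjust_exposure(current_exposure_us, predicted_brightness, target_type,
                    setpoints={"face": 140, "plate": 110, "default": 120},
                    max_exposure_us=33000, min_exposure_us=50):
    """Scale the exposure time so the predicted projection-area brightness moves
    toward a per-type set-point.  Set-points and the proportional rule are
    illustrative assumptions."""
    target = setpoints.get(target_type, setpoints["default"])
    gain = target / max(predicted_brightness, 1.0)
    new_exposure = current_exposure_us * gain
    return int(min(max_exposure_us, max(min_exposure_us, new_exposure)))

# Predicted face region looks too dark (brightness 70), so exposure is raised.
print(adjust_exposure(current_exposure_us=8000, predicted_brightness=70, target_type="face"))
# -> 16000
```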
Disclosed in embodiments of the present application are a data retrieval prediction method, an apparatus, an electronic device, and a readable medium. The method comprises: determining data storage path information used by a data object during a target query time period; determining, from the data storage path information, a target operation object to which the data object is subjected when written from a source end to a destination end; and predicting the storage situation of the data object during the target query time period by performing storage impact event analysis on the target operation object.
The present application provides a multidirectional adjustment support and a camera device. The multidirectional adjustment support comprises: a base, a first rotating member which is connected to the base and rotatable, and a second rotating member which is connected to the first rotating member and rotatable. A rotating plane formed by the rotation of the first rotating member intersects with a rotating plane formed by the rotation of the second rotating member. The base is provided with a first limiting structure. The first rotating member is provided with a second limiting structure for limiting in cooperation with the first limiting structure. The second rotating member is provided with a third limiting structure for limiting in cooperation with the first limiting structure.
F16M 13/02 - Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles for supporting on, or attaching to, an object, e.g. tree, gate, window-frame, cycle
80.
METHOD, APPARATUS AND DEVICE FOR REALIZING SHUTTER SYNCHRONIZATION OF CAMERA
Provided in the present application are a method, apparatus and device for realizing shutter synchronization of a camera. The camera may comprise a plurality of sensors, wherein shutter types of the plurality of sensors are different. The method may comprise: acquiring a periodic synchronization reference signal and a synchronization signal corresponding to each sensor; and adjusting, by taking the synchronization reference signal as a reference, synchronization signal delays and/or exposure delays of the plurality of sensors, so as to align exposure areas corresponding to the plurality of sensors.
Provided are an image fusion method, a storage medium and an electronic device. A visible light image and an infrared image to be fused are acquired. Luminance and chrominance separation is performed on the visible light image to extract a luminance component and a chrominance component. Luminance fusion is performed on the luminance component of the visible light image and the infrared image to obtain a luminance fusion result. Image reconstruction is performed according to the luminance fusion result and the chrominance component of the visible light image to obtain a fused image.
G06T 5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
H04N 9/78 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase; for separating the brightness signal or the chrominance signal from the colour television signal, e.g. using comb filter
82.
White balance abnormality determination method and apparatus, storage medium, and electronic device
Provided are a white balance abnormality determination method and apparatus, a storage medium, and an electronic device. The method includes: acquiring at least one target block in a first channel image, and acquiring at least one reference block in a second channel image for each target block, where the first channel image is adjacent to the second channel image; determining a color feature representative value of each target block and a color feature representative value of each reference block associated with a position of a respective target block; determining a color feature difference value between the first channel image and the second channel image; determining whether a white balance abnormality image exists in the first channel image and the second channel image; and in a case where a white balance abnormality image exists in the first channel image and the second channel image, adjusting the white balance abnormality image.
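A hedged Python sketch of the comparison step above: block-wise colour-feature representatives (here mean R/G and B/G ratios, an assumed choice) are computed for both channel images, and their average difference is thresholded to flag an abnormality. The block size and threshold are illustrative.

```python
import numpy as np

def white_balance_mismatch(image_a, image_b, block=64, threshold=0.08):
    """Compare block-wise colour-feature representatives (mean R/G and B/G) of two
    adjacent channel images and flag a white-balance abnormality when the average
    difference exceeds a threshold.  Block size, features and threshold are
    illustrative assumptions."""
    def block_features(img):
        img = np.asarray(img, dtype=float) + 1e-6
        h, w, _ = img.shape
        feats = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = img[y:y + block, x:x + block]
                r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
                feats.append((r / g, b / g))
        return np.array(feats)

    diff = np.abs(block_features(image_a) - block_features(image_b)).mean()
    return diff > threshold, float(diff)

rng = np.random.default_rng(1)
img_a = rng.integers(80, 120, size=(256, 256, 3))
img_b = img_a * np.array([1.3, 1.0, 0.8])      # simulated white-balance drift in one image
print(white_balance_mismatch(img_a, img_b))
```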
A method for storing video data includes: when receiving I-frame data to be stored, detecting whether written data exists in a video cache space; when detecting that written data exists in the video cache space, reading a target writing position of the I-frame data to be stored and determining whether the target writing position is located within a position range corresponding to the written data in a first cache space; when determining that the target writing position is located within the position range, writing, based on the target writing position, the I-frame data to be stored to the first cache space for caching, and detecting whether the first cache space is full; and when detecting that the first cache space is full, writing all the video data in the video cache space to a memory space of a terminal device for storage and emptying the video cache space.
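A minimal Python sketch of the caching rule above: an I-frame is written into the first cache space only when its target writing position falls within the range already covered by written data, and a full first cache space triggers a flush of everything to storage. The capacity, the position bookkeeping, and the handling of out-of-range positions are assumptions; the abstract specifies only the in-range write and the flush-on-full behaviour.

```python
class VideoCache:
    """Illustrative model of the first cache space and its flush-on-full rule."""

    def __init__(self, capacity=4):
        self.first_cache = {}                 # target writing position -> I-frame data
        self.capacity = capacity

    def store_i_frame(self, target_position, i_frame, flush_to_storage):
        written_positions = set(self.first_cache)
        in_range = (not written_positions or
                    target_position <= max(written_positions) + 1)
        if not in_range:
            return False                      # outside the written range: not cached here
        self.first_cache[target_position] = i_frame
        if len(self.first_cache) >= self.capacity:      # first cache space is full
            flush_to_storage(dict(self.first_cache))    # write everything to storage
            self.first_cache.clear()                    # and empty the cache space
        return True

cache = VideoCache()
for pos in (1, 2, 3, 4):
    cache.store_i_frame(pos, f"I-frame@{pos}", flush_to_storage=print)
```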
Provided are a privacy protection method for a transmitting end and a receiving end, an electronic device, and a computer-readable storage medium. The method includes acquiring an initial image, acquiring an area to be scrambled in the initial image, performing backup and scrambling for the initial image based on the area to be scrambled to obtain a scrambled image and backup information, encrypting the backup information to obtain an encrypted result, and transmitting the scrambled image and the encrypted result.
Disclosed in embodiments of the present application are a method and apparatus for detecting fire spots, an electronic device, and a storage medium. In the embodiments of the present application, a target position of a focusing lens group is determined according to the minimum object distance and the maximum object distance in an acquired image picture, so that image acquisition performed on the basis of the target position of the focusing lens group can achieve the greatest detection clarity. During the detection of a suspected fire spot area, the focusing lens group moves from the nearest position corresponding to the minimum object distance to the farthest position corresponding to the maximum object distance so as to implement movement in the lens dimension of the focusing lens group, thereby solving the problem that, in a multi-object-distance scene, the lens cannot cover all fire spots because of its insufficient depth of field.
Disclosed in embodiments of the present application are an auto-focusing method and apparatus, an electronic device, and a medium. The method comprises: determining image block change information of a current frame image relative to a reference frame image in a scene monitoring area, the scene monitoring area comprising a pre-divided image effective area in a photographing image; according to the image block change information, determining whether to trigger a focusing operation on the scene monitoring area; and in response to determining to trigger the focusing operation, determining a focusing region of interest triggered by the movement of a target relative to the scene monitoring area, and performing the focusing operation on the focusing region of interest.
Provided are a method and apparatus for processing map point location information and a server. The method includes the steps described below. A total number of point locations within a to-be-marked region in an electronic map is acquired, and the to-be-marked region is divided into multiple subregions according to the total number of point locations, where the multiple subregions have the same length in a longitude direction and the same length in a latitude direction. The multiple point locations are assigned to the multiple subregions respectively according to location information of the multiple point locations within the to-be-marked region. The numbers of point locations within the multiple subregions are acquired respectively, and the lengths of the multiple subregions in the longitude direction and in the latitude direction are adjusted according to the numbers of point locations within the multiple subregions.
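The initial division can be pictured as choosing a grid resolution from the total point count and then bucketing each point by longitude and latitude; the per-subregion counts then drive the subsequent size adjustment. A Python sketch follows, with the square-root rule for the grid resolution as an illustrative assumption.

```python
import math
from collections import Counter

def divide_region(points, lon_range, lat_range):
    """Split the to-be-marked region into an n-by-n grid of equal-size subregions,
    where n grows with the total number of point locations, and count the points
    falling into each subregion.  The sqrt-based choice of n is illustrative,
    not the source's exact formula."""
    n = max(1, math.isqrt(len(points)))            # grid resolution from total count
    lon_step = (lon_range[1] - lon_range[0]) / n
    lat_step = (lat_range[1] - lat_range[0]) / n
    counts = Counter()
    for lon, lat in points:
        col = min(n - 1, int((lon - lon_range[0]) / lon_step))
        row = min(n - 1, int((lat - lat_range[0]) / lat_step))
        counts[(row, col)] += 1
    return n, counts       # the per-subregion counts feed the size-adjustment step

points = [(120.1, 30.2), (120.15, 30.21), (120.4, 30.5), (120.45, 30.55)]
print(divide_region(points, lon_range=(120.0, 120.5), lat_range=(30.0, 30.6)))
```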
Provided are a camera, a method, apparatus and device for switching between a day mode and a night mode, and a medium. The method includes the steps described below. In response to the current camera mode being the night mode, a color temperature value of the current imaging picture is determined. Visible light illuminance of the current imaging picture is determined, as first visible light illuminance, by utilizing the color temperature value and an infrared light contribution ratio to picture brightness obtained based on white balance statistical information of the current imaging picture. Whether to switch from the current night mode to the day mode is determined by utilizing the magnitude relationship between the first visible light illuminance and a first preset illuminance threshold. According to the present application, the accuracy of determining the visible light illuminance in the night mode is improved, and thus the problem of repeated switching between the day mode and the night mode is effectively alleviated.
H04N 23/71 - Circuitry for evaluating the brightness variation
G06T 7/90 - Determination of colour characteristics
H04N 9/77 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
H04N 23/72 - Combination of two or more compensation controls
H04N 23/88 - Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
89.
IMAGE TRANSMISSION METHOD, APPARATUS AND DEVICE, AND MEDIUM
Disclosed herein are an image transmission method, apparatus and device, and a medium. The image transmission method comprises: according to the data amount of an image frame to be transmitted on each channel of a sending end and the transmission bandwidth between the sending end and a receiving end, determining the transmission time required by the sending end for transmitting said image frame on each channel; according to the transmission time required by the sending end for transmitting said image frame on each channel, segmenting the theoretical time corresponding to transmitting said image frame, so as to obtain a time period corresponding to each transmission time; and controlling the sending end to send, within the time period corresponding to each transmission time, the image frame corresponding to that transmission time to the receiving end.
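Per channel, the required transmission time is the frame's data amount divided by the available bandwidth, and the theoretical frame time is then segmented into consecutive slots of those lengths. A Python sketch under a shared-bandwidth assumption:

```python
def channel_send_slots(channel_data_bytes, bandwidth_bytes_per_s, frame_start_s=0.0):
    """Derive per-channel send windows: each channel's required transmission time
    is its frame data amount divided by the link bandwidth, and the frame's
    theoretical time is segmented into consecutive slots of those lengths.
    Treating the bandwidth as shared equally is an illustrative assumption."""
    slots, cursor = {}, frame_start_s
    for channel, data_bytes in channel_data_bytes.items():
        duration = data_bytes / bandwidth_bytes_per_s
        slots[channel] = (cursor, cursor + duration)
        cursor += duration
    return slots

# Three channels sharing a 10 MB/s link for one image frame.
print(channel_send_slots({"ch0": 200_000, "ch1": 50_000, "ch2": 150_000},
                         bandwidth_bytes_per_s=10_000_000))
```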
Disclosed in embodiments of the present application are a device cross-area access method and apparatus, an electronic device, and a storage medium. The method comprises: receiving a cross-area access request of a target device, and determining IP address information in the cross-area access request; according to the IP address information, determining information about a cross-area proxy server that matches the target device, the cross-area proxy server being configured in advance in a proxy area that is different from the target area in which the target server is located, and the cross-area proxy server being in distributed deployment with the target server; and sending the information about the cross-area proxy server to the target device so that the target device establishes a connection with the cross-area proxy server.
A vector data processing method and system, a computing node, a master node, a training node and a storage medium. The method comprises: each computing node of a plurality of computing nodes receiving newly added vector data and placing the newly added vector data in a cache (S110); when a computing node detects that the newly added vector data in its cache reaches a pre-set amount, the computing node sending amount information of the newly added vector data to a master node (S120); the master node acquiring the amount information of the newly added vector data of the plurality of computing nodes, and, when the average amount of the newly added vector data of the plurality of computing nodes reaches a pre-set average amount, notifying each computing node to extract a training sample from the newly added vector data (S130); each computing node extracting a training sample from the newly added vector data and sending the training sample to a training node (S140); the training node training a classifier according to the training samples, so as to obtain a target classifier (S150); on the basis of the target classifier, each computing node classifying the newly added vector data according to the similarity of newly added vector features (S160); and each computing node establishing a feature classification index according to the classification result, so as to perform vector data retrieval according to the feature classification index (S170).
Disclosed in embodiments of the present invention are a wake-up method for a surveillance camera, a device, a surveillance camera, and a medium. The surveillance camera comprises a detection sensor and an image sensor. The wake-up method for a surveillance camera comprises: in response to the detection sensor detecting an intrusion object in a monitored region, controlling the image sensor to enter a first image acquisition mode; controlling the image sensor in the first image acquisition mode to acquire at least two surveillance images; determining, according to the at least two surveillance images, whether the intrusion object is an object to be monitored; and in response to determining that the intrusion object is an object to be monitored, switching the image sensor from the first image acquisition mode to a second image acquisition mode, and controlling the image sensor in the second image acquisition mode to acquire surveillance images containing the object to be monitored.
A white balance correction method and apparatus, a device, and a storage medium. The white balance correction method comprises: inputting an image to be corrected into a pre-trained non-chromatic aberration point model to obtain a non-chromatic aberration point weight map of the image to be corrected (101); determining, according to the non-chromatic aberration point weight map, an illumination color parameter of the image to be corrected (102); and performing, according to the illumination color parameter, white balance correction on the image to be corrected to obtain a corrected image (103).
A temperature detection method and apparatus, a medium, and an electronic device (500). The temperature detection method comprises: acquiring the real-time temperature of a detected person, determining, on the basis of the real-time temperature, whether the temperature change trend of the detected person is an increasing trend or a decreasing trend, and acquiring, on the basis of the real-time temperature, a real-time temperature change speed (S110); on the basis of the real-time temperature change speed, determining a speed change upper limit and a speed change lower limit (S120); on the basis of the speed change upper limit and the speed change lower limit, respectively determining a temperature difference value corresponding to the speed change upper limit and a temperature difference value corresponding to the speed change lower limit (S130); on the basis of the real-time temperature, the temperature difference values corresponding to the speed change upper limit and the speed change lower limit, and the temperature change trend, determining a thermal equilibrium temperature upper limit value and a thermal equilibrium temperature lower limit value (S140); and, on the basis of the relationship between the thermal equilibrium temperature upper and lower limit values and a warning temperature threshold, determining a temperature detection result of the detected person (S150).
Provided are a color adjustment method, a color adjustment device, an electronic device, and a computer-readable storage medium. The method includes: determining an adjustment area corresponding to an edge pixel, where the adjustment area includes multiple pixels; acquiring color information of the edge pixel and color information of each similar pixel in the adjustment area, where a pixel type of the similar pixel is consistent with a pixel type of the edge pixel; and adjusting a parameter value of the edge pixel according to the color information of each similar pixel, the color information of the edge pixel, and a brightness parameter of the edge pixel.
H04N 9/77 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals
H04N 25/702 - SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
96.
Video encoding method and apparatus, electronic device, and computer-readable storage medium
Provided are a video encoding method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring status information of each macroblock in an image to be encoded; dividing the image to be encoded into a plurality of status regions according to the status information of each macroblock; determining a quantizer parameter adjustment value of each of the plurality of status regions in the image to be encoded according to a preset quantizer parameter value table; acquiring a quantizer parameter encoding value of each macroblock in a reference frame image of the image to be encoded; determining a quantizer parameter encoding value of each macroblock in the image to be encoded; and compressing and encoding the image to be encoded.
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding; the unit being an image region, e.g. an object; the region being a block, e.g. a macroblock
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
97.
Method and apparatus for automatically detecting and suppressing fringes, electronic device and computer-readable storage medium
Disclosed are a method and apparatus for automatically detecting and suppressing fringes, an electronic device, and a computer-readable storage medium. The method includes the following steps: an image shot by a camera is acquired, and a fringe of the image is recognized; at least one fringe action parameter is acquired among shooting parameters of the camera based on a recognition result obtained by recognizing the fringe of the image; and a parameter adjustment is performed on the acquired fringe action parameter by adopting a parameter adjustment strategy matched with the acquired fringe action parameter, to perform fringe suppression on the image shot by the camera.
Provided are a facial recognition method, an electronic device, a computer-readable storage medium, and a facial recognition system. The facial recognition method includes: obtaining a first required comparison value and a second required comparison value based on a facial feature; if the first required comparison value is less than a preset comparison threshold and the second required comparison value is also less than the preset comparison threshold, re-extracting a facial feature, and matching the re-extracted facial feature against a plurality of original images separately to obtain a third required comparison value; and if the third required comparison value is greater than or equal to the preset comparison threshold, determining that the recognition is successful.
Provided are a dual-spectrum image automatic exposure method and apparatus, and a dual-spectrum image camera. The method includes: acquiring an original image collected by an image sensor; performing logical light splitting processing on the original image to obtain an infrared image and a visible light image; determining whether to use an infrared cutoff filter; if it is determined to use the infrared cutoff filter, performing exposure processing on the visible light image by using a single-spectrum exposure algorithm to obtain a visible light image that conforms to a target exposure effect; and if it is determined not to use the infrared cutoff filter, performing exposure processing on the infrared image and the visible light image by using a dual-spectrum exposure algorithm to obtain a visible light image that conforms to a first exposure effect and an infrared image that conforms to a second exposure effect.
H04N 23/72 - Combination of two or more compensation controls
H04N 23/12 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
100.
VIDEO STREAM TRANSMISSION CONTROL METHOD AND APPARATUS, DEVICE, AND MEDIUM
Disclosed are a video stream transmission control method and apparatus, a device, and a medium. The video stream transmission control method comprises: when it is detected that the reception time interval of two adjacent received video frames among a plurality of received video frames is less than an interval threshold, determining, according to the interval between the reception time of received video frames and the sending time of video frames of a video stream sending device corresponding to the received video frames among a plurality of video stream sending devices, the theoretical reception time of the next video frame sent by the video stream sending device; sorting a plurality of theoretical reception times corresponding to the plurality of video stream sending devices, and determining, according to the sorting result and the interval adjustment time, the expected reception time of the next video frame sent by each video stream sending device; and adjusting the sending time interval of the video frames of the video stream sending devices according to the expected reception time and the theoretical reception time corresponding to each video stream sending device.
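Items 62 and 100 describe the same rescheduling idea: sort the senders' theoretical receiving times, spread the expected receiving times apart by the interval adjustment time, and shift each sender by the difference. The Python sketch below illustrates that step; spacing the expected times from the earliest theoretical time is an assumption, as is the 10 ms adjustment interval.

```python
def reschedule_senders(theoretical_times, interval_adjustment_s=0.010):
    """Sort the senders' theoretical receiving times, space the expected receiving
    times by the interval adjustment time starting from the earliest one, and
    return the per-sender shift to apply to its frame-sending interval.
    The spacing rule and the 10 ms interval are illustrative assumptions."""
    ordered = sorted(theoretical_times.items(), key=lambda item: item[1])
    base = ordered[0][1]
    shifts = {}
    for rank, (sender, theoretical) in enumerate(ordered):
        expected = base + rank * interval_adjustment_s
        shifts[sender] = expected - theoretical       # positive: delay the next frame
    return shifts

# Three cameras whose next frames would otherwise arrive almost simultaneously.
print(reschedule_senders({"cam_a": 1.000, "cam_b": 1.001, "cam_c": 1.002}))
# -> cam_a unchanged, cam_b delayed ~9 ms, cam_c delayed ~18 ms
```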