A video may be captured by an image capture device. A stabilized view of the video may be generated by using a punchout of the video. The shape and/or size of the punchout may be dynamically changed based on stabilization performance of the video. The shape and/or size of the punchout may be changed when the punchout is moving within the video. The shape and/or size of the punchout may not be changed when the punchout is not moving within the video.
A constraint model that includes a representation of feasible viewing window placement within a source field of view of visual content may be generated by using a roll-pitch-yaw axes representation of viewing window placement and having a diagonal dimension of the viewing window that fits within the vertical and horizontal dimensions of the source field of view. The constraint model may enable full horizon leveling of the visual content.
A computer accesses a feed of image frames from a camera. The computer determines, based on at least a first image frame from the feed, electronic image stabilization (EIS) data, and inertial measurement unit (IMU) data, a future image height. The computer enables or disables, for at least a second image frame from the feed, image lines on a sensor of the camera to obtain the determined future image height.
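The row-enable step above can be sketched as follows. This is an illustrative helper, not the disclosed implementation: it assumes the predicted EIS output height is kept centered on the sensor, and that rows outside that range can be disabled to save readout bandwidth and power.

```python
def rows_to_enable(sensor_height, future_image_height):
    """Return the (start, stop) row range to keep active on the sensor.

    Hypothetical helper: rows needed for the predicted EIS output height
    are kept centered; rows outside the range may be disabled.
    """
    if future_image_height > sensor_height:
        raise ValueError("requested height exceeds sensor height")
    margin = (sensor_height - future_image_height) // 2
    return margin, margin + future_image_height

# e.g. a 3000-row sensor with a predicted 2160-row EIS output
start, stop = rows_to_enable(3000, 2160)
```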
Salient regions within video frames of a video may be identified. Sizes of salient regions within video frames may be determined and used to identify saliency frames in the video. Salient segments of the video may be identified using the saliency frames in the video. The salient segments of the video may be used to generate a video edit.
A coupler (mount) is disclosed that is configured to releasably connect an image capture device and an accessory. The coupler includes a shaft and a lever that is operatively connected to the shaft, and is repositionable between unlocked and locked positions as well as between disengaged and engaged positions. When the coupler is in the unlocked position, the image capture device and the accessory are movable in relation to each other, and when the coupler is in the locked position, the image capture device and the accessory are fixed in relation to each other. When the coupler is in the disengaged position, the shaft is removable from the image capture device and the accessory, and when the coupler is in the engaged position, the shaft is non-removable from the image capture device and the accessory.
F16B 21/16 - Means without screw-thread for preventing relative axial movement of a pin, spigot, shaft, or the like and a member surrounding it; Stud-and-socket releasable fastenings without screw-thread by separate parts with grooves or notches in the pin or shaft
An image capture device includes a lens barrel disposed in a body of the image capture device, a bayonet, a replaceable lens module that is configured to releasably couple to the bayonet, and a biasing element coupled to the housing or coupled to the bayonet that is configured to bias the housing against the lens barrel. The replaceable lens module includes a housing and a lens positioned in a lens recess of the housing. Additionally, when an impact force that meets a predetermined threshold is applied to an exterior surface of the replaceable lens module, the housing is configured to compress the biasing element and disengage the lens barrel.
An image capture apparatus includes a heat sensitive assembly configured to support a battery. The image capture apparatus includes a heatsink spaced a distance from the heat sensitive assembly and a heat generating component that is spaced a distance from the heat sensitive assembly and the heatsink. The image capture apparatus includes a heat conductor that extends from the heat generating component to the heat sensitive assembly or the heatsink, and the heat conductor moves heat from the heat generating component to the heat sensitive assembly or the heatsink. The image capture apparatus includes an actuation mechanism that moves the heat conductor between the heat sensitive assembly and the heatsink.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
G03B 17/55 - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
H04N 23/62 - Control of parameters via user interfaces
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
11.
METHODS AND APPARATUS FOR AUGMENTING DENSE DEPTH MAPS USING SPARSE DATA
Systems, apparatus, and methods for augmenting dense depth maps using sparse data. Various embodiments combine single-image depth estimation (SIDE) techniques with structure-from-motion techniques for improved depth accuracy. In some examples, a machine learning (ML) model is used to generate a dense depth map based on one or more frames/images of a video. Structure-from-motion (SfM) analysis is performed on the video to determine depth information from camera movement in the video. The structure-from-motion techniques may generate more accurate data than the ML model; however, the data from the ML model may be denser compared with the SfM depth data. The dense ML model depth map may be augmented by the SfM depth data. Augmentation may include fitting the relative depths determined by the ML model depth map to the absolute depth in the SfM data, resulting in more accurate dense depth information.
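The fitting step described above can be sketched as a least-squares scale-and-shift fit: relative ML depths are mapped onto absolute SfM depths at the sparse sample points, and the fitted mapping is then applied to every pixel of the dense map. The data layout and field names here are illustrative assumptions, not the disclosed format.

```python
def fit_scale_shift(rel_depths, sparse_points):
    """Fit absolute = a*relative + b by least squares over sparse SfM samples.

    rel_depths: dict pixel -> relative depth from the ML model (hypothetical format)
    sparse_points: dict pixel -> absolute depth from SfM
    """
    xs = [rel_depths[p] for p in sparse_points]
    ys = [sparse_points[p] for p in sparse_points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Dense relative depths from the ML model; two sparse absolute SfM samples.
rel = {(0, 0): 1.0, (1, 0): 2.0, (2, 0): 3.0}
sfm = {(0, 0): 2.5, (2, 0): 6.5}
a, b = fit_scale_shift(rel, sfm)
dense_abs = {p: a * d + b for p, d in rel.items()}  # augmented dense depths
```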
In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is output.
A camera expansion device includes a housing and a power supply within the housing. The power supply is configured to power an imaging device. The camera expansion device includes a securing structure extending from the housing for securing the housing to a separate mounting device and an interface configured to facilitate power transfer between the camera expansion device and the imaging device.
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/663 - Remote control of cameras or camera parts, e.g. by remote control devices for controlling interchangeable camera parts based on electronic image sensor signals
15.
SYSTEMS AND METHODS FOR SPATIALLY SELECTIVE VIDEO CODING
A panoramic video frame is partitioned into a plurality of tiles. A viewport corresponding to a field of view within the panoramic video frame is identified. First tiles of the plurality of tiles corresponding to the viewport are encoded at a first bitrate to obtain first encoded tiles. Second tiles of the plurality of tiles outside the viewport are encoded at a second bitrate lower than the first bitrate to obtain second encoded tiles. The first encoded tiles and the second encoded tiles are transmitted to a user device for rendering.
H04N 19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
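The viewport-dependent tile encoding described above can be sketched as a per-tile bitrate assignment. Tile indexing and the two bitrate values below are illustrative assumptions, not the disclosed parameters.

```python
def assign_tile_bitrates(num_tiles_x, num_tiles_y, viewport_tiles,
                         high_bitrate, low_bitrate):
    """Map each (tx, ty) tile index to an encode bitrate:
    tiles inside the viewport get the high bitrate, the rest the low one."""
    rates = {}
    for ty in range(num_tiles_y):
        for tx in range(num_tiles_x):
            rates[(tx, ty)] = (high_bitrate if (tx, ty) in viewport_tiles
                               else low_bitrate)
    return rates

# A 4x2 tiling with a 2x2 viewport in the middle of the top rows.
viewport = {(1, 0), (2, 0), (1, 1), (2, 1)}
rates = assign_tile_bitrates(4, 2, viewport, 8_000_000, 1_000_000)
```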
A video edit may include one or more segments of a video. A graphical user interface may convey information that indicates video editing decisions made to generate the video edit. The graphical user interface may include a timeline element to represent the length of the video and one or more inclusion elements to visually indicate the segment(s) of the video included in the video edit. The graphical user interface may convey information on the segment(s) of the video that have been automatically included in the video edit and the segment(s) of the video that have been manually included in the video edit.
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
The present teachings provide an image capture device that includes a lens barrel disposed in a body of the image capture device. A bayonet is coupled to the lens barrel and includes one or more fingers that project outward from the bayonet. The image capture device also includes a replaceable lens module configured to releasably couple to the bayonet. The replaceable lens module includes a retaining ring, a lens positioned in an opening of the retaining ring, and a spring plate coupled to an interior surface of the retaining ring. The spring plate is configured to engage the one or more fingers of the bayonet to releasably couple the replaceable lens module to the bayonet. Additionally, the spring plate is configured to elastically deform when the spring plate engages the one or more fingers of the bayonet to compress the replaceable lens module towards the lens barrel.
Visual content captured by an image capture device may be stabilized. A target depicted within visual content of a video may be identified manually by a user or automatically by a computing device. Dolly zoom visual content may be generated by using a viewing window to crop the visual content. The size and/or the position of the viewing window within the visual content may change based on the size and/or the position of the target within the visual content. The combination of video stabilization, target tracking, and cropping of the visual content may produce the dolly zoom effect. The dolly zoom effect may compensate for the movement of the image capture device and work on video captured by an image capture device without optical zoom.
An image capture system that includes an image capture apparatus and at least one optical accessory. The image capture apparatus includes a body and at least one lens that is supported by the body so as to define a field-of-view for the image capture apparatus. The at least one optical accessory is configured to overlie the at least one lens and thereby shift the field-of-view outwardly away from the image capture apparatus so as to define at least one blind area that is configured such that the field-of-view is spaced from the body of the image capture apparatus.
A video may include visual content having a progress length. A user may interact with a mobile device to set framings of the visual content at moments within the progress length. The framings of the visual content may be provided to a video editing application. The video editing application may utilize the framings set via the mobile device to provide preliminary framings of the visual content at the moments within the progress length.
G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
A video including visual content may be captured by an image capture device in motion. Stabilization performance information for the visual content may be determined. The stabilization performance information may characterize an extent to which desired stabilization is able to be performed using the visual content. The stabilization for the visual content may be changed based on the stabilization performance information.
An image capture system for enhanced electronic image stabilization (EIS) includes an image capture device and an adapter lens. The image capture device includes an image sensor, a lens housing, a processor, and a lens assembly that includes a first group of optical elements disposed within the lens housing. The first group of optical elements is used to project an image onto the image sensor. The processor performs EIS. The adapter lens is used to enhance EIS of the image capture device. The adapter lens has an adapter lens housing that interfaces with the lens housing. The lens housing automatically detects the adapter lens.
Positions of an image capture device may be used to estimate a time-lapse video frame rate with which time-lapse video frames are generated. The time-lapse video frame rate may be adjusted based on apparent motion between pairs of generated time-lapse video frames. The adjusted time-lapse video frame rate may be used to generate additional time-lapse video frames.
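The rate adjustment described above can be sketched as a proportional update: if apparent motion between consecutive time-lapse frames exceeds a target, capture faster; if it falls short, capture slower. The target motion, clamping bounds, and the purely proportional rule are illustrative assumptions.

```python
def adjust_timelapse_rate(rate, apparent_motion_px, target_motion_px,
                          min_rate=0.1, max_rate=30.0):
    """Scale the capture rate (frames/sec) so per-frame apparent motion
    approaches a target pixel displacement; clamp to a sane range."""
    if apparent_motion_px <= 0:
        return rate  # no measurable motion: keep the current rate
    scaled = rate * (apparent_motion_px / target_motion_px)
    return max(min_rate, min(max_rate, scaled))

# Motion is twice the target, so the rate doubles.
new_rate = adjust_timelapse_rate(rate=1.0, apparent_motion_px=40.0,
                                 target_motion_px=20.0)
```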
An image capture device that includes a housing; a heat generating component enclosed by internal surfaces of the housing; and a first heatsink and a second heatsink that are each simultaneously thermally connected with the heat generating component. One or both of the first and second heatsinks is positioned on an external surface of the housing.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
The present disclosure relates to methods and systems for providing alerts to a helmet user. A method includes determining a current location of the user, determining a transit area and a transit direction of the user based on the current location, retrieving records of transit alerts based on the transit area of the user, selecting relevant transit alerts from the records of transit alerts based on the transit direction of the user, and alerting the user of the relevant transit alerts.
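The selection steps above can be sketched as two filters over alert records: first by transit area, then by transit direction. The record keys and the `"any"` direction wildcard are illustrative assumptions about the data model.

```python
def select_relevant_alerts(alerts, transit_area, transit_direction):
    """Filter alert records by the user's area, then by direction of travel.

    `alerts` is a list of dicts with hypothetical keys
    'area', 'direction', and 'message'.
    """
    in_area = [a for a in alerts if a["area"] == transit_area]
    return [a for a in in_area
            if a["direction"] in (transit_direction, "any")]

records = [
    {"area": "A1", "direction": "north", "message": "road closure ahead"},
    {"area": "A1", "direction": "south", "message": "congestion"},
    {"area": "A2", "direction": "any", "message": "weather warning"},
]
relevant = select_relevant_alerts(records, "A1", "north")
```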
A video may be captured by an image capture device in motion. A stabilized view of the video may be generated by providing a punchout of the video. The punchout of the video may compensate for rotation of the image capture device during capture of the video. Different field of view punchouts, such as wide field of view punchout and linear field of view punchout, may be used to stabilize the video. Different field of view punchouts may provide different stabilization margins to stabilize the video. The video may be stabilized by switching between different field of view punchouts based on the amount of stabilization margin needed to stabilize the video.
An image capture apparatus including a monolithic front heatsink, a connection mechanism, and an image sensor and lens assembly (ISLA). The monolithic front heatsink includes a planar surface and a connection mechanism located adjacent to the planar surface. The connection mechanism includes a protrusion and connection projections extending from the protrusion. The ISLA is coupled to the monolithic front heatsink and a portion of the ISLA extends through the protrusion of the monolithic front heatsink.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
Multiple punchouts of a video may be presented based on multiple viewing windows. The video may include visual content having a field of view. Multiple viewing windows may be determined for the video, with each viewing window defining a set of extents of the visual content. Different punchouts of the visual content may be presented based on the different viewing windows. Each punchout of the visual content may include the set of extents of the visual content defined by the corresponding viewing window.
An image capture apparatus including a forward housing, a rear housing, and a front heatsink. The forward housing is a forward heatsink. The rear housing is a rear heatsink. The front heatsink houses all or a portion of an integrated sensor and lens assembly (ISLA).
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
34.
HEATSINKS AND THERMAL ARCHITECTURE OF AN IMAGE CAPTURE DEVICE
An image capture device including: heat generating devices, one or more batteries, and a sensor heat spreader. The heat generating devices generate a thermal load. The sensor heat spreader is in thermal communication with one or more of the heat generating devices and extends from the one or more of the heat generating devices to the one or more batteries so that all or a portion of the thermal load is transferred to the one or more batteries.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
A constraint model that includes a representation of feasible viewing window placement within a source field of view of visual content may be generated by using a roll-pitch-yaw axes representation of viewing window placement and having a diagonal dimension of the viewing window that fits within the vertical and horizontal dimensions of the source field of view. The constraint model may enable full horizon leveling of the visual content.
Devices and methods for determining a direction of audio arrival from Ambisonics channels using azimuth and elevation segments are described herein. A method includes generating multiple blocks of samples from Ambisonics signals for a time interval, determining an azimuth angle estimate and an elevation angle estimate for the time interval when a defined number of blocks in the multiple blocks of samples are valid, generating the azimuth angle estimate based on the maximum number of azimuth angle estimates present in an azimuth segment amongst a defined number of azimuth segments, and generating the elevation angle estimate based on the maximum number of elevation angle estimates present in an elevation segment amongst a defined number of elevation segments, where the direction of arrival of the Ambisonics signals is based on the azimuth angle estimate and the elevation angle estimate.
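The segment-voting step above can be sketched as a histogram over angular segments: per-block angle estimates are binned, the most populated segment wins, and the final estimate summarizes that segment. Using the mean of the winning segment, and the segment counts below, are illustrative assumptions.

```python
def segment_mode_estimate(angle_estimates_deg, num_segments, span_deg):
    """Bin per-block angle estimates into equal-width segments, pick the
    segment holding the most estimates, and return the mean of the
    estimates in it (degrees). span_deg is e.g. 360 for azimuth, 180 for
    elevation; angles are assumed in [0, span_deg)."""
    width = span_deg / num_segments
    buckets = {}
    for a in angle_estimates_deg:
        idx = min(int(a // width), num_segments - 1)
        buckets.setdefault(idx, []).append(a)
    best = max(buckets.values(), key=len)
    return sum(best) / len(best)

# Three blocks agree near 45 degrees; one outlier block points elsewhere.
azimuth = segment_mode_estimate([44.0, 46.0, 45.0, 300.0],
                                num_segments=36, span_deg=360)
```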
High dynamic range (HDR) image processing for low light conditions is performed by obtaining three images. The three images include a first long exposure image and a pair of digitally overlapped (DOL) multi-exposure images. The DOL multi-exposure images include a second long exposure image and a short exposure image. Respective RGB images are obtained from the first long exposure image and the pair of DOL multi-exposure images. The respective RGB images are fused to generate a low light HDR image.
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
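The fusion step described above can be sketched per pixel: where the long exposures clip, fall back to the short exposure; elsewhere, combine the long exposures. The flat pixel-list layout, the saturation threshold, and the simple averaging rule are illustrative assumptions, not the disclosed pipeline (which fuses full RGB images).

```python
def fuse_low_light_hdr(long1, dol_long, dol_short, sat=0.95):
    """Per-pixel fusion sketch (intensities normalized to [0, 1]):
    average the two long exposures where neither clips; use the short
    exposure where the long exposures saturate."""
    out = []
    for a, b, s in zip(long1, dol_long, dol_short):
        if a >= sat or b >= sat:
            out.append(s)            # highlights: trust the short exposure
        else:
            out.append((a + b) / 2)  # shadows/midtones: average long exposures
    return out

# First pixel is dark (use long exposures); second clips (use short).
fused = fuse_low_light_hdr([0.2, 0.98], [0.3, 1.0], [0.1, 0.6])
```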
38.
SYSTEMS AND METHODS FOR PROVIDING FLIGHT CONTROL FOR AN UNMANNED AERIAL VEHICLE BASED ON OPPOSING FIELDS OF VIEW WITH OVERLAP
This disclosure relates to providing flight control for an unmanned aerial vehicle based on opposing fields of view with overlap. The UAV may include a housing, a motor, a first image sensor, a second image sensor, a first optical element having a first field of view greater than 180 degrees, a second optical element having a second field of view greater than 180 degrees, and one or more processors. The first optical element and the second optical element may be carried by the housing such that a centerline of the second field of view is substantially opposite from a centerline of the first field of view, and a peripheral portion of the first field of view and a peripheral portion of the second field of view overlap. Flight control for the UAV may be provided based on parallax disparity of an object within the overlapping fields of view.
Systems and methods are disclosed for image signal processing. For example, methods may include receiving an image from an image sensor, detecting, in a linear domain, color fringing areas in the image, correcting detected color fringing areas to obtain a corrected image, performing tone mapping to the corrected image to obtain a tone mapped image and storing, displaying, or transmitting an output image based on at least the tone mapped image.
Systems and methods are disclosed for image signal processing. For example, methods may include receiving an image from an image sensor; storing a sequence of images captured after the image in a buffer; determining an orientation error between the orientation of the image sensor and an orientation setpoint during capture of the image; determining a rotation corresponding to the orientation error based on a sequence of orientation estimates corresponding to the sequence of images; and invoking an electronic image stabilization module to correct the image to obtain a stabilized image, in which the electronic image stabilization module corrects the image for the rotation corresponding to the orientation error.
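The error-to-correction step above can be sketched for a single axis: the orientation error is wrapped to the shortest arc and negated to give the rotation EIS should apply. The single-axis simplification (the disclosure concerns full 3-axis orientation) and degree units are illustrative assumptions.

```python
def correction_rotation(orientation_deg, setpoint_deg):
    """Return the rotation (degrees) EIS should apply to cancel the
    orientation error, with the error wrapped to (-180, 180].
    Single-axis sketch of what would be a 3-axis correction."""
    error = orientation_deg - setpoint_deg
    wrapped = (error + 180.0) % 360.0 - 180.0  # shortest arc
    return -wrapped

# A frame captured 5 degrees past the setpoint needs -5 degrees of correction.
corr = correction_rotation(95.0, 90.0)
```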
Multiple sets of framing for a video may define different positioning of multiple viewing windows for a video. The multiple viewing windows may be used to provide different punchouts of the video within a graphical user interface. The graphical user interface may enable creation/change in the sets of framing for the video. The graphical user interface for the punchouts may include a single timeline representation for the video. Framing indicators that represent different sets of framing for the video may be presented along the single timeline representation at different times.
Microphones are disposed on different surfaces of an image capture device to generate different microphone capture patterns. A microphone with three microphone elements is disposed on the top surface of the image capture device. The three microphone elements are arranged in an equilateral triangular configuration. A second microphone with at least two microphone elements is disposed on the front surface of the image capture device. The at least two microphone elements are disposed on the front surface of the image capture device in a vertical configuration. A third microphone with at least one microphone element is disposed on a back surface of the image capture device.
Video frames are captured by an image capture device and stabilized to generate stabilized video frames. Multiple stabilized video frames are combined into single motion blurred video frames. Combination of multiple stabilized video frames into single motion blurred video frames produces motion blur within the single motion blurred video frames that is both physical and real.
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
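The frame-combination step described above can be sketched as an average over aligned frames: because the frames are stabilized before combining, the accumulated blur comes only from scene motion, not camera shake. The flat pixel-list layout and plain averaging are illustrative assumptions.

```python
def motion_blur(stabilized_frames):
    """Average N stabilized frames (lists of pixel intensities) into one
    motion-blurred frame; static pixels stay sharp, moving pixels blur."""
    n = len(stabilized_frames)
    return [sum(px) / n for px in zip(*stabilized_frames)]

# A pixel ramping 0.2 -> 0.6 blurs to its mean; static pixels are unchanged.
blurred = motion_blur([[0.0, 0.2, 1.0],
                       [0.0, 0.4, 1.0],
                       [0.0, 0.6, 1.0]])
```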
An image capture apparatus includes first and second image sensors that generate heat and are spaced apart from each other and a housing assembly that encloses the first and second image sensors. The image capture apparatus includes first and second circuit boards that are connected respectively and separately with the first and second image sensors, and the first and second circuit boards include a peripheral edge that extends from the first and second image sensors to an outside of the housing assembly. The image capture apparatus includes a heatsink assembly positioned on the outside of the housing assembly and a heat conductor assembly that extends between the heatsink assembly and the first and second circuit boards.
An image capture device with a series of elements, a forward element, and a sensor assembly. The series of elements are aligned along an optical axis and have a field of view of about 180 degrees or more. The forward element removably covers the series of elements, wherein the forward element is curved and configured to capture the field of view of about 180 degrees or more and provide optical power to the series of elements. A sensor assembly is located on the optical axis. The forward element is made of glass.
An image capture device includes a first housing, a second housing, a first integrated sensor-lens assembly (ISLA), and a second ISLA. The second housing is coupled to the first housing to form an internal compartment. The first ISLA includes a first image sensor coupled to a first lens in fixed alignment. The second ISLA includes a second image sensor coupled to a second lens in fixed alignment. The first ISLA is positively statically connected to the first housing, and the second ISLA is coupled to the first housing indirectly via the first ISLA.
Multiple images that include a first image, a second image, and a third image are received. The multiple images are such that the second image is temporally between the first image and the third image. The first image, the second image, and the third image are combined to obtain a long-exposure image. High dynamic range processing is applied to the second image and the long-exposure image to obtain an output image with a larger dynamic range than a dynamic range of the second image. The high dynamic range processing uses the second image as a short-exposure image. An image based on the output image is transmitted, stored, or displayed.
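The combination described above can be sketched per pixel: the three frames are summed (and clipped) into a synthetic long exposure, which is then blended with the middle frame serving as the short exposure. The flat pixel-list layout, clipping at 1.0, and the fixed blend weight are illustrative assumptions, not the disclosed HDR processing.

```python
def synthesize_hdr(first, second, third, weight_long=0.6):
    """Sum three frames into a synthetic long exposure, then blend it with
    the middle frame (treated as the short exposure) to extend dynamic
    range. Pixels are intensities in [0, 1]."""
    long_exp = [min(1.0, a + b + c)
                for a, b, c in zip(first, second, third)]
    return [weight_long * l + (1.0 - weight_long) * s
            for l, s in zip(long_exp, second)]

out = synthesize_hdr([0.1, 0.3], [0.1, 0.4], [0.1, 0.3])
```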
Positions of an image capture device during capture of a video may be transferred to a computing device before the video is transferred to the computing device. The positions of the image capture device may be used to determine a viewing window for the video before the video is obtained. The viewing window may be used to present a stabilized view of the video when the video is obtained. For example, a stabilized view of the video may be presented as the video is streamed to the computing device.
An image capture apparatus including an audio component and a housing that encloses the audio component. The housing includes a pattern of apertures and at least one audio aperture disposed at a location of the pattern of indents and extended through the housing. The image capture apparatus includes a membrane assembly that defines a channel intersected by a membrane, and the channel is aligned with the at least one audio aperture.
H04R 1/04 - Structural association of microphone with electric circuitry therefor
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
An image capture device is disclosed that includes a body defining a peripheral cavity and a door assembly that is movable between an open position and a closed position to close and seal the peripheral cavity. The door assembly includes a door body; a slider that is supported by the door body for axial movement between a first position and a second position; a biasing member that is configured for engagement (contact) with the slider; a door lock including a stop that is configured for engagement (contact) with the biasing member; and a sealing member that is fixedly connected to the door lock.
Systems, apparatus, and methods for selectively parsing a live stream for a connected device. GoPro® cameras capture 2 video streams for every recording: a main resolution video (MRV) and a frame aligned low-resolution video (LRV). Currently, the LRV is streamed to other devices for playback in real-time. Unfortunately, not all devices in the mobile ecosystem can keep up with these transfer and playback speeds of the LRV; thus, exemplary embodiments of the present disclosure pick frames from the LRV to provide an IDR-only version that can be replayed with minimal processing burden. As a related issue, operating codecs in parallel at different resolutions can result in misaligned frame structures since each codec independently determines its “group of pictures” (GOP), etc. The described techniques define a frame correspondence that can also be used to improve post-processing at best effort.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
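The frame-picking step described above can be sketched as a filter that keeps only independently decodable IDR frames from the LRV, so a constrained device can display each retained frame without decoding its neighbors. The `(frame_type, payload)` tuple layout is an illustrative assumption about the stream representation.

```python
def idr_only_stream(lrv_frames):
    """Keep only IDR frames from a low-resolution video stream; each
    retained frame can be decoded on its own, minimizing the playback
    device's processing burden."""
    return [f for f in lrv_frames if f[0] == "IDR"]

stream = [("IDR", b"k0"), ("P", b"d0"), ("P", b"d1"),
          ("IDR", b"k1"), ("B", b"d2")]
playable = idr_only_stream(stream)
```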
An image signal processor accesses raw images from an image sensor. The image signal processor obtains adaptive acquisition control data for the raw images. The adaptive acquisition control data comprises at least one of a luminance value, a contrast value, a gain value, an exposure value, or a white balance value. The image signal processor obtains, in accordance with the adaptive acquisition control data, an indication of whether to use a star trails scene classification for the raw images. The image signal processor transmits, to buffers for storing data in accordance with the raw images, the indication of whether to use the star trails scene classification.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A system including an image capture module and a handheld module. The image capture module includes a body; an image sensor; and a mechanical stabilization system comprising a first gimbal, a second gimbal, and a third gimbal connected to the body and configured to control an orientation of the image sensor of the image capture module relative to the body. The handheld module defines a slot that is keyed to the body of the image capture module. The image capture module, when located within the handheld module, has a low profile so that the third gimbal is protected from damage by the handheld module.
B64U 10/14 - Flying platforms with four distinct rotor axes, e.g. quadcopters
B64U 20/87 - Mounting of imaging devices, e.g. mounting of gimbals
B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
G03B 15/00 - Special procedures for taking photographs; Apparatus therefor
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptics or speakers
G05D 1/686 - Maintaining a relative position with respect to moving targets, e.g. following animals or humans
G05D 1/689 - Pointing payloads towards fixed or moving targets
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
An image capture device with beamforming for wind noise optimized microphone placements is described. The image capture device includes a front facing microphone configured to capture an audio signal. The front facing microphone is co-located with at least one optical component. The image capture device further includes at least one non-front facing microphone configured to capture an audio signal. The image capture device further includes a processor configured to generate a forward-facing beam using the audio signal captured by the front facing microphone and the audio signal captured by the at least one non-front facing microphone, generate an omni beam using the audio signal captured by the at least one non-front facing microphone, and output an audio signal based on the forward-facing beam and the omni beam.
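The beam combination described above can be illustrated with a short sketch. This is not the disclosed implementation: the delay-and-sum forward beam, the microphone spacing, the sample rate, and the blend weight are assumptions chosen for illustration.

```python
import numpy as np

def forward_beam(front, rear, mic_spacing_m=0.02, fs=48000, c=343.0):
    """Delay-and-sum sketch: delay the non-front mic by the acoustic
    travel time across the mic spacing, then average with the front
    mic so sound arriving from the front adds coherently."""
    delay = int(round(mic_spacing_m / c * fs))  # delay in samples
    rear_delayed = np.concatenate([np.zeros(delay), rear[:len(rear) - delay]])
    return 0.5 * (front + rear_delayed)

def omni_beam(rear):
    """The non-front mic used directly as an omnidirectional beam."""
    return np.asarray(rear, dtype=float)

def output_signal(fwd, omni, omni_weight=0.0):
    """Blend the two beams; a wind detector could raise omni_weight
    to favor the wind-sheltered omni beam."""
    return (1.0 - omni_weight) * fwd + omni_weight * omni
```

In this sketch the output falls back entirely to the omni beam at `omni_weight=1.0`, which is one plausible behavior under heavy wind.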
Methods and apparatus for blending unknown pixels in overlapping images. In one embodiment, an action camera captures two hyper-hemispherical fisheye images that are stitched into a 360° panorama. In order to remove exposure differences between the two cameras, the images are pre-processed prior to multiband blending. The pre-processing leverages image information from pixels to make informed guesses about pixels that were not captured. In particular, various pixels with different knowability (e.g., known, unknown, consistent, and conflicting) may be handled differently so as to emphasize/de-emphasize their importance in pre-processing.
An image capture device with dynamic wind noise compression tuning techniques is described. A technique includes detecting the presence of wind noise by measuring coherence between at least two microphones. For a compressor, a default compression threshold and default compression parameters are adjusted based on the coherence measurements. For each microphone, the compressor applies the adjusted compression parameters when an audio signal is above the adjusted compression threshold and applies the default compression parameters when the audio signal is below the adjusted compression threshold.
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
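The coherence-driven tuning in the wind noise compression abstract above can be sketched as follows. The coherence estimator is a standard Welch-style magnitude-squared coherence; the 0.5 wind threshold, the specific threshold/ratio values, and the parameter names are invented for illustration and are not taken from the disclosure.

```python
import numpy as np

def mean_coherence(x, y, nfft=256):
    """Magnitude-squared coherence averaged over segments and bins.
    Wind noise is largely uncorrelated between mics, so it drags the
    mean inter-mic coherence down."""
    nseg = len(x) // nfft
    X = np.fft.rfft(np.reshape(x[:nseg * nfft], (nseg, nfft)), axis=1)
    Y = np.fft.rfft(np.reshape(y[:nseg * nfft], (nseg, nfft)), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    msc = np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)
    return float(msc.mean())

DEFAULTS = {"threshold_db": -20.0, "ratio": 2.0}

def tune_compressor(coh, wind_coh=0.5):
    """Lower the threshold and raise the ratio when wind is detected."""
    if coh < wind_coh:  # low inter-mic coherence -> wind present
        return {"threshold_db": -30.0, "ratio": 4.0}
    return dict(DEFAULTS)

def params_for_level(level_db, adjusted):
    """Apply the adjusted parameters above the adjusted threshold and
    the default parameters below it, per microphone."""
    return adjusted if level_db > adjusted["threshold_db"] else dict(DEFAULTS)
```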
An image capture device may capture multiple audio content during capture of visual content. The field of view of the visual content may be used to generate modified audio content from the multiple audio content. The modified audio content may provide sound for playback of the visual content with the field of view.
An optical module for an image capture system is disclosed. The optical module includes a first integrated sensor-lens assembly (ISLA) that is oriented in a first direction and which defines a first optical axis and first mounting surfaces; a second ISLA that is oriented in a second, opposite direction and which defines a second optical axis coincident with the first optical axis and second mounting surfaces; and an adhesive that is located between the first mounting surfaces and the second mounting surfaces such that the first ISLA and the second ISLA are directly connected together.
A video may be captured by an image capture device in motion. A horizon-leveled view of the video may be generated by providing a punchout of the video. The punchout of the video may compensate for rotation of the image capture device during capture of the video. The placement of the punchout of the video may be changed based on different rotational positions of the image capture device to provide a view in which a horizon depicted within the video is leveled.
The present disclosure relates to a helmet including a shell, a housing comprising a front panel and a rear panel that form a chin portion of the shell, an electronic board disposed within the housing, a battery disposed within the housing and coupled to the electronic board, and a camera disposed within the housing and coupled to the electronic board and the battery. The camera comprises a lens that extends through the front panel of the housing. The housing is embedded within an external profile shape of the helmet.
A42B 3/30 - Mounting radio sets or communication systems
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G08B 5/36 - Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electromagnetic transmission using visible light sources
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G08G 1/09 - Arrangements for giving variable traffic instructions
An image capture device includes a first lens assembly, a first image sensor, a second lens assembly, and a second image sensor. The first image sensor is in communication with the first lens assembly, forming a first integrated sensor and lens assembly (a first ISLA) that faces in a first direction. The second image sensor is in communication with the second lens assembly, forming a second integrated sensor and lens assembly (a second ISLA) that faces in a second direction that is opposite the first direction. A lens mount connects the first ISLA and the second ISLA together, forming one or more stress-free zones. A first retention system connects a first connector and a first flexible connector in a first connection zone. A second retention system connects a second connector and a second flexible connector in a second connection zone.
An image capture system includes an image capture device and an integrated sensor-optical component accessory. The integrated sensor-optical component accessory includes at least one of a microphone, a processor, a motion sensor, or an audio sensor. The image capture device is configured to control operation of the integrated sensor-optical component accessory when the integrated sensor-optical component accessory is releasably attached to the image capture device. The image capture device may include a user interface configurable for operation with the integrated sensor-optical component accessory and the image capture device. The integrated sensor-optical component accessory may draw power from the image capture device. The integrated sensor-optical component accessory may include a power supply. The image capture system may include another integrated sensor-optical component accessory including at least one of a microphone, a processor, a motion sensor, or an audio sensor which is controllable by the image capture device.
An optical assembly for an image capture device that defines an optical axis and includes an optical module having: a lens holder; a first optical group supported by the lens holder; a lens barrel axially movable in relation to the lens holder along the optical axis; a second optical group supported by the lens barrel such that the second optical group is axially movable in relation to the first optical group along the optical axis to thereby adjust focus of the image capture device; and an adjustment member in engagement with the lens barrel such that rotation of the adjustment member causes axial movement of the lens barrel and the second optical group along the optical axis. In various embodiments, the adjustment member may be configured for rotation about an axis of rotation that extends in (generally) parallel relation or in (generally) orthogonal relation to the optical axis.
G02B 7/10 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification by relative axial movement of several lenses, e.g. of varifocal objective lens
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
A camera system having a camera housing, a loudspeaker grille, a housing lip, and a loudspeaker system. The camera housing has an exterior that forms sides of the camera housing. A loudspeaker grille is disposed within the camera housing. The loudspeaker grille has an exterior surface that is substantially flush with the exterior of the camera housing. A housing lip is located on an interior of the camera housing. A loudspeaker system is integrated into the camera housing. A waterproof membrane is configured to prevent moisture in the interior of the camera housing, and a support structure is located between the waterproof membrane and the loudspeaker system. The support structure has a rectangular outer perimeter. The housing lip is configured to mechanically support the loudspeaker system, the loudspeaker grille, the waterproof membrane, and the support structure such that a loudspeaker cavity exists between the loudspeaker grille and the waterproof membrane.
An image capture device is disclosed that includes a body defining a peripheral cavity and a removable door assembly that is configured to close and seal the peripheral cavity. The door assembly includes a door body; a locking mechanism; and a biasing member that is configured for engagement with the locking mechanism to resist unlocking of the locking mechanism until a threshold force is applied, at which time, the biasing member is moved from a normal position, in which the biasing member extends at a first angle in relation to the locking mechanism, to a deflected position, in which the biasing member extends at a second angle in relation to the locking mechanism. When locked, the door assembly is rotationally fixed in relation to the body of the image capture device, and when unlocked, the door assembly is rotatable in relation to the body of the image capture device.
Systems, apparatus, and methods adding post-processing motion blur to video and/or frame interpolation with occluded motion. Conventional post-processing techniques relied on the filmmaker to select and stage their shots. Different motion blur techniques were designed to fix certain types of footage. Vector blur is one technique that “smears” pixel information in the direction of movement. Frame interpolation and stacking attempts to create motion blur by stacking interpolated frames together. Each technique has its own set of limitations. Various embodiments use a combination of motion blur techniques in post-processing for better, more realistic outcomes with faster/more efficient rendering times. In some cases, this may enable adaptive quality post-processing that may be performed in mobile/embedded ecosystems. Various embodiments use a combination of video frame interpolation techniques for better interpolated frames with faster/more efficient rendering times.
An apparatus including a microphone, a speaker, and a processor. The speaker is configured to produce a sound that audibly communicates information to a user. The microphone is configured to receive the sound. The processor is configured to produce the sound through the speaker to audibly communicate information to the user; capture a video that includes audio; initiate a sound removal process to remove the sound from the audio; and after the sound is produced, stop the sound removal process.
Systems and methods are disclosed for replaceable outer lenses. For example, an image capture device may include a lens barrel in a body of the image capture device, the lens barrel including multiple inner lenses; a replaceable lens structure that is mountable on the body of the image capture device, the replaceable lens structure including an outer lens and a retaining ring configured to fasten the outer lens in a position covering a first end of the lens barrel in a first arrangement and configured to disconnect the outer lens from the body of the image capture device in a second arrangement; and an image sensor mounted within the body at a second end of the lens barrel, the image sensor configured to capture images based on light incident on the image sensor through the outer lens and the multiple inner lenses when the retaining ring is in the first arrangement.
Systems, apparatus, and methods for adding post-processing motion blur to video. Conventional post-processing techniques relied on the filmmaker to select and stage their shots. Different motion blur techniques were designed to fix certain types of footage. Vector blur is one technique that “smears” pixel information in the direction of movement. Frame interpolation and stacking attempts to create motion blur by stacking interpolated frames together. Each technique has its own set of limitations. Various embodiments use a combination of motion blur techniques in post-processing for better, more realistic outcomes with faster/more efficient rendering times. In some cases, this may enable adaptive quality post-processing that may be performed in mobile/embedded ecosystems.
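The two families of techniques named in the abstract above, vector blur and frame-interpolation stacking, can be sketched in a few lines. This is an illustrative toy, not the disclosed combination: the wrap-around shifts, the linear cross-fade standing in for real frame interpolation, and the tap counts are all assumptions.

```python
import numpy as np

def vector_blur(frame, vx, vy, taps=5):
    """'Smear' pixel information in the direction of movement by
    averaging copies of the frame shifted along the motion vector
    (np.roll wrap-around keeps the sketch simple)."""
    acc = np.zeros_like(frame, dtype=float)
    for t in range(taps):
        f = t / max(taps - 1, 1)
        acc += np.roll(frame, (int(round(f * vy)), int(round(f * vx))), axis=(0, 1))
    return acc / taps

def interpolate(frame_a, frame_b, alpha):
    """Placeholder frame interpolation: a linear cross-fade. A real
    interpolator would warp along estimated optical flow."""
    return (1.0 - alpha) * frame_a + alpha * frame_b

def stack_blur(frame_a, frame_b, n=8):
    """Create motion blur by stacking interpolated frames together."""
    stack = [interpolate(frame_a, frame_b, (i + 0.5) / n) for i in range(n)]
    return np.mean(stack, axis=0)
```

A combined pipeline could apply `vector_blur` only where flow is reliable and fall back to `stack_blur` elsewhere, which is the spirit of mixing the techniques for more realistic outcomes.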
A method of assembling an image capture device that includes: positioning a first sealing member between a front housing portion and a mounting structure; connecting the mounting structure to the front housing portion such that the first sealing member forms a watertight seal therebetween; positioning a second sealing member between the mounting structure and an integrated sensor-lens assembly (ISLA); orienting the second sealing member such that a locating feature extending rearwardly from the mounting structure is aligned with a notch defined by the second sealing member to facilitate proper relative orientation of the mounting structure and the second sealing member; connecting the ISLA to the mounting structure such that the locating feature is positioned within the notch and the second sealing member forms a watertight seal between the ISLA and the mounting structure; and connecting a rear housing portion to the front housing portion.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
Methods and apparatus for post-processing in-camera stabilized video. Embodiments of the present disclosure reconstruct and re-stabilize an in-camera stabilized video to provide for improved stabilization (e.g., a wider crop, etc.). In-camera sensor data may be stored and used to re-calculate orientation metadata in post-production. In-camera stabilization provides several benefits (e.g., the ability to share stabilized videos from the camera without additional post-processing as well as reduced file sizes of the shared videos). Camera-aware post-processing can reuse portions of the in-camera stabilized videos while providing additional benefits (e.g., the ability to regenerate the original captured videos in post-production and re-stabilize the videos). Camera-aware post-processing can also improve orientation metadata and remove sensor error. The disclosed techniques also enable assisted optical flow-based stabilization using the refined metadata.
An image capture device may analyze visual content to determine a smile aggregation value. The smile aggregation value satisfying a smile aggregation criterion may indicate that people depicted in the visual content are smiling. When the smile aggregation value satisfies the smile aggregation criterion, the capture of visual content may be started. When the smile aggregation value fails to satisfy the smile aggregation criterion, the capture of visual content may be stopped.
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
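The smile-triggered capture behavior described in the abstract above can be sketched as a small controller. The mean as the aggregation function and the 0.7 criterion are assumptions for illustration; the disclosure does not specify them.

```python
import numpy as np

def smile_aggregation(smile_scores):
    """Aggregate per-face smile scores into a single value; a simple
    mean over detected faces is assumed here."""
    scores = np.asarray(smile_scores, dtype=float)
    return float(scores.mean()) if scores.size else 0.0

class CaptureController:
    """Start capture when the smile aggregation value satisfies the
    criterion; stop capture when it fails to satisfy it."""
    def __init__(self, criterion=0.7):
        self.criterion = criterion
        self.capturing = False

    def update(self, smile_scores):
        value = smile_aggregation(smile_scores)
        self.capturing = value >= self.criterion
        return self.capturing
```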
75.
Systems and methods for capturing visual content using celestial pole
An image capture device may capture visual content with an optical element having a field of view. The location of the image capture device, the direction of north with respect to the image capture device, and rotation of the image capture device may be used to determine the location of the celestial pole with respect to the field of view. A graphical user interface may be presented on an electronic display. The graphical user interface may indicate the location of the celestial pole within the field of view.
A video edit may include two videos arranged in a sequence. Motion within one or both of the videos may be assessed. A transition effect may be selected based on the motion assessed within the video(s), and the video edit may be modified to include the transition effect between the videos. The transition effect may emphasize the motion assessed within the video(s) and/or create continuity of motion during transition between the two videos within the video edit.
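The motion-based transition selection described above can be sketched simply. The frame-difference motion proxy, the threshold, and the transition names are illustrative assumptions, not terms from the disclosure.

```python
import numpy as np

def motion_magnitude(frames):
    """Mean absolute frame-to-frame difference as a crude proxy for
    the motion assessed within a video."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs)) if diffs else 0.0

def select_transition(motion_a, motion_b, high=0.2):
    """Pick a transition effect based on motion at the cut point:
    emphasize motion when either clip moves, favor continuity when
    neither does (effect names are illustrative)."""
    if max(motion_a, motion_b) >= high:
        return "speed_ramp"   # emphasizes the assessed motion
    return "crossfade"        # smooth continuity for low-motion clips
```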
An image capture device that includes a body and an interconnect mechanism that is connected to the body. The interconnect mechanism includes: a base plate defining a receptacle that is configured to threadably engage an accessory such that the image capture device is directly connectable to the accessory via the interconnect mechanism; a cover that is removably connected to the base plate and which is configured to thermally insulate the interconnect mechanism; and first and second fingers that are pivotably connected to the base plate about first and second pivot axes such that the interconnect mechanism is reconfigurable between a collapsed configuration, in which the first and second fingers are nested within the body of the image capture device, and an extended configuration, in which the first and second fingers extend outwardly from the body of the image capture device.
F16M 11/04 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand
F16M 11/10 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting around a horizontal axis
78.
Systems and methods for presenting multiple views of videos
Multiple sets of framing for a video may define different positioning of multiple viewing windows within the video. The multiple viewing windows may be used to provide different punchouts of the video within a graphical user interface. The graphical user interface may enable creation/change in the sets of framing for the video. The graphical user interface for the punchouts may include a single timeline representation for the video. Framing indicators that represent different sets of framing for the video may be presented along the single timeline representation at different times.
An image capture device detects a wind whistle using two or more microphones. The image capture device includes a processor that obtains microphone signals from the two or more microphones and determines coherence values between the microphone signals across a frequency band. The processor determines a coherence value for each frequency bin of the frequency band. Based on a detection of an elevated coherence value in a frequency bin, the processor determines the presence of a whistle. The processor attenuates the frequency bin based on a determination that the elevated coherence value is above a threshold.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effectsMasking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
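The per-bin coherence test in the wind whistle abstract above can be sketched as follows. A tonal whistle is coherent between microphones while broadband wind is not, so an elevated coherence value in a bin flags the whistle. The segment length, the 0.9 threshold, and the attenuation depth are assumptions for illustration.

```python
import numpy as np

def bin_coherence(x, y, nfft=256):
    """Magnitude-squared coherence per frequency bin, averaged over
    segments of the two microphone signals."""
    nseg = len(x) // nfft
    X = np.fft.rfft(np.reshape(x[:nseg * nfft], (nseg, nfft)), axis=1)
    Y = np.fft.rfft(np.reshape(y[:nseg * nfft], (nseg, nfft)), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)

def whistle_gain_mask(coh, threshold=0.9, attenuation=0.1):
    """Attenuate frequency bins whose coherence is elevated above the
    threshold; all other bins pass unchanged."""
    return np.where(coh > threshold, attenuation, 1.0)
```

The mask would then be applied per bin in the STFT domain before resynthesis.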
Systems, apparatus, and methods for “fast” wake-up using virtualized addresses. Action cameras need to conserve power most of the time, but also be immediately responsive to catch action when it happens. Unfortunately, most general-purpose operating systems (e.g., Linux-based) lock up the processor during boot-up. Empirically, an OS boot process might take between 6-7 seconds from start to finish, even in highly streamlined boot sequences. This is undesirable, especially where one device (e.g., a smart phone) triggers an action camera to capture an image. Various embodiments of the present disclosure create “virtual action addresses” that directly expose interrupts as addressable space (via a Bluetooth Low Energy (BLE) network). In one such example, the interrupts trigger a capture or other action. The action camera can immediately service the interrupts with its real-time operating system (RTOS) while the general-purpose OS is booting up.
A device includes a mechanical stabilization system that is used in image signal processing. The mechanical image stabilization system has an operating bandwidth and includes a motor to control an orientation of an image sensor. A processing apparatus of the device determines a temperature of the motor and adjusts a cutoff frequency of the operating bandwidth based on the temperature of the motor.
Apparatus and methods for the stitch zone calculation of a generated projection of a spherical image. In one embodiment, a non-transitory computer-readable apparatus comprising a storage apparatus, the storage apparatus comprising instructions configured to, when executed by a processor apparatus, cause a computerized apparatus to identify a stitch line associated with an equatorial area of a plurality of spherical images; re-orient the plurality of spherical images in accordance with the stitch line; and project the re-oriented plurality of spherical images to a selected image projection type.
G06T 3/12 - Panospheric to cylindrical image transformations
G06T 3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
An image capture device with a bayonet, an integrated sensor and lens assembly (ISLA), and fasteners. The bayonet includes an axis, a mounting flange, fastener recesses, and an inner flange. The axis extends through the bayonet. The mounting flange extends outward from the bayonet relative to the axis. The fastener recesses extend through the mounting flange. The inner flange extends inward from the bayonet toward the axis, wherein the inner flange is located between the mounting flange and the axis. The ISLA includes a forward end and internal lenses. The forward end aligns with the inner flange to connect the ISLA to the bayonet. The internal lenses are located within the ISLA and aligned along an optical axis and the axis of the bayonet. The fasteners extend through the fastener recesses in the mounting flange to connect the bayonet to a first surface of the image capture device.
Apparatus and methods for enabling indexing and playback of media content before the end of a content capture. In one aspect, a method for enabling indexing of media data obtained as part of a content capture is disclosed. In one embodiment, the indexing enables playback of the media data during the capture and before cessation thereof. In one variant, the method includes generating an “SOS track” for one or more images. The SOS track does not contain the same information as a full index, but provides sufficient information to allow an index to be subsequently constructed. In one implementation, the provided information includes identifiable markers relating to video data, audio data, or white space, but it does not provide an enumerated or complete “table of contents” as in a traditional index.
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
87.
IMAGE STITCHING WITH ELECTRONIC ROLLING SHUTTER CORRECTION
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a first image from a first image sensor, receiving a second image from a second image sensor; obtaining corrected images based on the first image and the second image; obtaining stabilized images based on the corrected images; applying a parallax correction to the stabilized images to obtain a composite image; obtaining a transformed image from the composite image; and encoding an output image based on the transformed image.
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
H04N 25/531 - Control of the integration time by controlling rolling shutters in CMOS SSIS
88.
Display screen of a computing device with a graphical user interface
Music may be selected to provide accompaniment for a video edit of a video. Characteristics of the music may be determined and used to select the types of visual effects that are applied in the video edit. The characteristics of the music may be extracted from MIDI file/metadata track containing MIDI information for the music.
A digital image capturing device (DICD) that includes: a first integrated sensor-lens assembly (ISLA) facing in a first direction; a second ISLA facing in a second direction generally opposite to the first direction; and a bridge member that is positioned between the first ISLA and the second ISLA, wherein the bridge member is configured as a discrete structure that is separate from the first ISLA and the second ISLA.
09 - Scientific and electric apparatus and instruments
Goods & Services
(1) Protective helmets; visors for helmets; bags specially adapted for protective helmets and sports helmets; helmet safety lights; protective helmets incorporating electronics; protective helmets incorporating electronic devices; Protective helmets for sports and motorcycle helmets featuring in-built communications, audio-visual and global positioning system software and apparatus; bicycle helmets; Helmets for motorcyclists; Motorcycle helmets; Motorcycle goggles; Protective helmets for cyclists; Protective helmets for sports; Protective sports helmets; Crash helmets; Crash helmets for cyclists; Helmets for use in sports; Protective helmets for motor cyclists; Sports helmets; Snowboarding helmets; Ski helmets; Riding helmets; Head guards for sports; Articles of protective clothing for wear by motorcyclists for protection against accident or injury; Helmets incorporating video cameras, namely, motorcycle helmets incorporating video cameras, Helmets incorporating Global Positioning Systems, namely, Motorcycle helmets incorporating GPS transceivers, Protective sports helmets incorporating GPS transceivers; Helmets incorporating audio and video equipment, namely, Motorcycle helmets incorporating apparatus for broadcasting, recording, transmission or reproduction of sound or images, Protective sports helmets incorporating apparatus for broadcasting, recording, transmission or reproduction of sound or images; Helmets incorporating barometers, altimeters, and gyroscopes, namely, Motorcycle helmets incorporating barometers, altimeters, and gyroscopes; Helmets incorporating tracking equipment and software, namely, motorcycle helmets incorporating GPS tracking devices and software for operating and controlling GPS tracking devices, Protective sports helmets incorporating GPS tracking devices and software for operating and controlling GPS tracking devices; Motion sensors and tracking sensors for use with helmets, namely, GPS tracking devices for use with protective helmets, 
motion detectors for use with protective helmets, and sensors for determining position, velocity and acceleration for use with protective helmets; Cameras for use with protective helmets; imaging equipment for use with helmets, namely, Video displays mounted in protective helmets, Eye pieces for helmet mounted displays used in protective helmets, Fixed and helmet mounted transparent electronic displays for providing aircraft crew members with navigational and operational information; Helmets with wireless and Internet connectivity, namely, Motorcycle helmets incorporating wireless transmitters and receivers, Protective sports helmets incorporating wireless transmitters and receivers; downloadable computer software for use in determining position, velocity, acceleration, motion, atmospheric pressure, distance, direction, altitude, and temperature of a helmet, and for wireless communication of audio and video, in the field of sports; downloadable software for controlling and operating electronic devices embedded in or attached to protective helmets.
09 - Scientific and electric apparatus and instruments
Goods & Services
Protective helmets; visors for helmets; bags specially adapted for protective helmets and sports helmets; helmet safety lights; protective helmets incorporating electronics; protective helmets incorporating electronic devices; protective helmets for sports and motorcycle helmets featuring in-built communications, audio-visual and global positioning system software and apparatus; bicycle helmets; helmets for motorcyclists; motorcycle helmets; motorcycle goggles; protective helmets for cyclists; protective helmets for sports; protective sports helmets; crash helmets; crash helmets for cyclists; helmets for use in sports; protective helmets for motor cyclists; sports helmets; snowboarding helmets; ski helmets; riding helmets; head guards for sports; articles of protective clothing for wear by motorcyclists for protection against accident or injury; helmets incorporating video cameras, namely, motorcycle helmets incorporating video cameras; helmets incorporating Global Positioning Systems, namely, motorcycle helmets incorporating GPS transceivers; protective sports helmets incorporating GPS transceivers; helmets incorporating audio and video equipment, namely, motorcycle helmets incorporating apparatus for broadcasting, recording, transmission or reproduction of sound or images; protective sports helmets incorporating apparatus for broadcasting, recording, transmission or reproduction of sound or images; helmets incorporating barometers, altimeters, and gyroscopes, namely, motorcycle helmets incorporating barometers, altimeters, and gyroscopes; helmets incorporating tracking equipment and software, namely, motorcycle helmets incorporating GPS tracking devices and software for operating and controlling GPS tracking devices; protective sports helmets incorporating GPS tracking devices and software for operating and controlling GPS tracking devices; motion sensors and tracking sensors for use with helmets, namely, GPS tracking devices for use with protective helmets; 
motion detectors for use with protective helmets, and sensors for determining position, velocity and acceleration for use with protective helmets; cameras for use with protective helmets; imaging equipment for use with helmets, namely, video displays mounted in protective helmets, eye pieces for helmet mounted displays used in protective helmets, fixed and helmet mounted transparent electronic displays for providing aircraft crew members with navigational and operational information; helmets with wireless and Internet connectivity, namely, motorcycle helmets incorporating wireless transmitters and receivers, protective sports helmets incorporating wireless transmitters and receivers; downloadable computer software for use in determining position, velocity, acceleration, motion, atmospheric pressure, distance, direction, altitude, and temperature of a helmet, and for wireless communication of audio and video, in the field of sports; downloadable software for controlling and operating electronic devices embedded in or attached to protective helmets.
95.
Methods and Apparatus for Metadata-Based Processing of Media Content
Methods and apparatus for metadata-based cinematography, production effects, shot selection, and/or other content augmentation. Effective cinematography conveys storyline, emotion, excitement, etc. Unfortunately, most amateur filmmakers lack the knowledge and ability to create cinema quality media. Various aspects of the present disclosure are directed to, among other things, rendering media based on instantaneous metadata. Unlike traditional post-processing techniques that rely on human subjectivity, some of the various techniques described herein leverage the camera's actual experiential data to enable cinema-quality post-processing for the general consuming public. Instantaneous metadata-based cinematography and shot selection advisories and architectures are also described.
An image capture device may capture media items (e.g., images, videos, sound clips). An identifier of a user's mobile device, transmitted to the image capture device when the mobile device is in proximity of the image capture device, may be used to identify media items that may include and/or be of interest to the user of the mobile device. Likewise, an identifier of the image capture device and a time, transmitted to a user's mobile device when the image capture device is in proximity of the mobile device, may be used to identify media items that may include and/or be of interest to the user of the mobile device.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
G06F 16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
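The proximity-based identification described in the entry above can be sketched in a few lines: the camera logs (device identifier, timestamp) pairs whenever a mobile device is detected nearby, and media items are later matched to that user when their capture times fall within a window around a logged proximity event. This is a minimal illustrative sketch, not the claimed method; all names (`MediaItem`, `items_of_interest`, the 30-second window) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    path: str
    captured_at: float  # capture time, seconds since epoch

def items_of_interest(media, proximity_log, device_id, window=30.0):
    """Return media items captured within `window` seconds of any
    proximity event logged for the given mobile-device identifier.

    proximity_log is a list of (device_id, timestamp) pairs recorded
    by the image capture device when a mobile device was nearby.
    """
    times = [t for d, t in proximity_log if d == device_id]
    return [m for m in media
            if any(abs(m.captured_at - t) <= window for t in times)]
```

For example, a frame captured at t=110 matches a proximity event for "phone-1" logged at t=100, while one captured at t=500 does not.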
A mounting system for an image capture device that includes three pivotably connected arms is disclosed. The first arm includes a first fastener that is lockable and unlockable to allow for connection and disconnection of the image capture device, and a second fastener that is lockable and unlockable to regulate the position of the image capture device relative to the first arm. The mounting system further includes a third fastener that is lockable and unlockable to regulate the relative positioning between the first arm and the second arm, and a fourth fastener that is lockable and unlockable to regulate the relative positioning between the second arm and the third arm. Whereas the first fastener is removable, each of the second, third, and fourth fasteners is captive to (nonremovable from) the mounting system.
Apparatus and methods for the pre-processing of image data so as to enhance quality of subsequent encoding and rendering. In one embodiment, a capture device is disclosed that includes a processing apparatus and a non-transitory computer readable apparatus comprising a storage medium having one or more instructions stored thereon. The one or more instructions, when executed by the processing apparatus, are configured to: receive captured image data (such as that sourced from two or more separate image sensors) and pre-process the data to enable stabilization of the corresponding images prior to encoding. In some implementations, the pre-processing includes combination (e.g., stitching) of the captured image data associated with the two or more sensors to facilitate the stabilization. Advantageously, undesirable artifacts such as object “jitter” can be reduced or eliminated. Methods and non-transitory computer readable apparatus are also disclosed.
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
H04N 19/15 - Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
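The ordering described in the entry above, combine the frames from multiple sensors first, then stabilize the combined frame, and only then pass it to the encoder, can be illustrated with a toy model. This is a hedged sketch, not the disclosed implementation: frames are modeled as lists of pixel rows, the "stitch" is a naive horizontal concatenation, and "stabilization" is a simple cyclic row shift standing in for motion compensation.

```python
def stitch(left, right):
    """Naively combine two same-height frames side by side."""
    return [l + r for l, r in zip(left, right)]

def stabilize(frame, vertical_offset):
    """Shift rows cyclically to counteract measured camera motion
    (a toy stand-in for real electronic image stabilization)."""
    n = len(frame)
    return [frame[(i + vertical_offset) % n] for i in range(n)]

def preprocess_for_encoding(left, right, vertical_offset):
    """Pre-processing pipeline: stitch first, stabilize second,
    so the encoder receives an already-stabilized combined frame."""
    return stabilize(stitch(left, right), vertical_offset)
```

The key design point the entry makes is the order of operations: stabilizing the already-combined frame, rather than each sensor's output separately, helps avoid residual jitter at the seam between the two views.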