2024

Invention: System and method for maintaining dependencies in a parallel process. A method includes: dequeui...

Invention: Method for automatic hybrid quantization of deep artificial neural networks. A method includes, ...

Invention: Method and tensor traversal engine for strided memory access during execution of neural networks....

Invention: Processor system and method for increasing data-transfer bandwidth during execution of a schedule...

Invention: System and method for maintaining dependencies in a parallel process. A method includes: dequeuin...
2023

Invention: Method for automatic hybrid quantization of deep artificial neural networks. A method includes, f...

Invention: Deep vision processor. Disclosed herein is a processor for deep learning. In one embodiment, the ...
2022

Invention: System and method for profiling on-chip performance of neural network execution. A method include...

Invention: System and method for profiling on-chip performance of neural network execution. A method includ...

Invention: System and method for queuing commands in a deep learning processor. A method includes: dequeuing...

P/S: Microprocessors; integrated circuit modules; multichip modules; downloadable computer software d...
2021

Invention: A processor system and method for increasing data-transfer bandwidth during execution of a schedu...
2020

Invention: Method for static scheduling of artificial neural networks for a processor. A method for schedul...
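
The 2020 entry names static scheduling of artificial neural networks for a processor. As a generic illustration only, and not the patented method, a static schedule can be produced ahead of time by topologically ordering the network's operator graph so that each operator is placed after all of its producers; the function and operator names below are hypothetical.

    # Minimal sketch: compile-time (static) ordering of a neural-network
    # operator graph. Not the patented method; names are illustrative only.
    from collections import deque

    def static_schedule(ops, deps):
        """ops: operator names; deps: map from op to the ops it consumes."""
        indegree = {op: 0 for op in ops}
        consumers = {op: [] for op in ops}
        for op, inputs in deps.items():
            for producer in inputs:
                indegree[op] += 1
                consumers[producer].append(op)

        ready = deque(op for op in ops if indegree[op] == 0)
        schedule = []
        while ready:
            op = ready.popleft()
            schedule.append(op)  # execution slot fixed before runtime
            for consumer in consumers[op]:
                indegree[consumer] -= 1
                if indegree[consumer] == 0:
                    ready.append(consumer)
        return schedule

    # Example: conv -> relu -> pool, with a residual add of conv and pool outputs.
    print(static_schedule(
        ops=["conv", "relu", "pool", "add"],
        deps={"conv": [], "relu": ["conv"], "pool": ["relu"], "add": ["conv", "pool"]},
    ))  # ['conv', 'relu', 'pool', 'add']

A production scheduler would additionally order the ready queue by memory and compute-resource constraints; the sketch only shows the dependency-respecting ordering decided at compile time.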