chihcheng
Newbie
The data width (256 bytes) of GPU/AI L1/L2 caches is very large, which made my invention not directly applicable to them.
Now I have invented a new SRAM macro partitioning scheme that reduces the power consumption of GPU/AI L1/L2 caches to 15%~30% of the original.