LOS ALTOS, Calif., May 31, 2022 (GLOBE NEWSWIRE) — NEUCHIPS is excited to announce its first ASIC, the RecAccel™ N3000, built on TSMC's 7 nm process and designed specifically to accelerate deep learning recommendation models (DLRM). NEUCHIPS has partnered with industry leaders in Taiwan's semiconductor and cloud server ecosystem and plans to deliver its RecAccel™ N3000 AI inference platform on dual M.2 modules for Open Compute Platform-compliant servers, as well as on PCIe Gen 5 cards for standard data center servers, during the second half of 2022.

"In 2019, when Facebook open-sourced its Deep Learning Recommendation Model and challenged the industry to deliver a balanced AI inference chip platform, we decided to pursue the challenge," said Dr. Lin, NEUCHIPS CEO, co-founder of Global Unichip Corp, a subsidiary of TSMC, and professor at National Tsing Hua University, Taiwan. "Our continued improvements in MLPerf DLRM benchmarking and whole-chip emulation give us confidence that our RecAccel™ AI hardware architecture, co-designed with our software, will scale to deliver industry leadership and exceed our target of 20 million inferences per second at 20 watts."
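The stated targets are internally consistent: 20 watts spread across 20 million inferences per second works out to 1 microjoule per inference, the same SoC-level figure quoted for the MLP engines below. A quick back-of-the-envelope check, using only numbers from the release:

```python
# Sanity-check NEUCHIPS' stated target: 20M inferences/s within a 20 W budget.
# Energy per inference = power budget / throughput.
power_watts = 20.0        # joules per second (stated target)
throughput = 20_000_000   # inferences per second (stated target)

energy_per_inference = power_watts / throughput
print(f"{energy_per_inference:.2e} J per inference")  # → 1.00e-06 (1 microjoule)
```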
The NEUCHIPS RecAccel™ N3000 inference platform includes sophisticated hardwired accelerators, patented query scheduling, and a comprehensive software stack optimized to provide high accuracy and hardware utilization while maintaining the energy efficiency required in data centers. Other key features include the following:
- Proprietary 8-bit coefficient quantization, calibration, and hardware support that deliver 99.95% of FP32 accuracy.
- Patented embedding engine with a novel cache design and DRAM traffic optimization that reduces LPDDR5 accesses by 50% and increases bandwidth utilization by 30%.
- Dedicated MLP compute engines that deliver state-of-the-art energy efficiency at the engine level and 1 microjoule per inference at the SoC level.
- Proven software stack that delivers very high scalability across multiple cards.
- Support for leading recommender AI models, including DLRM, WND, DCN, and NCF.
- Robust security based on a hardware root of trust.
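NEUCHIPS has not published the details of its proprietary quantization scheme, but post-training 8-bit quantization with calibration generally works along these lines (a minimal sketch of the generic technique; all function names here are hypothetical, and this is not NEUCHIPS' actual method):

```python
# Sketch of post-training 8-bit (int8) weight quantization with calibration.
# Illustrative only: NEUCHIPS' scheme achieving 99.95% of FP32 accuracy is
# proprietary; this shows the common symmetric per-tensor approach.

def calibrate_scale(values):
    """Calibration pass: pick a scale so the largest magnitude maps to int8 range."""
    max_abs = max(abs(v) for v in values)
    return max_abs / 127.0 if max_abs else 1.0

def quantize(values, scale):
    """Map FP32 values to int8 codes, clamped to [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(codes, scale):
    """Recover approximate FP32 values from int8 codes."""
    return [c * scale for c in codes]

weights = [0.02, -1.27, 0.63, 0.9]
scale = calibrate_scale(weights)
codes = quantize(weights, scale)
approx = dequantize(codes, scale)
# Each recovered weight lies within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

In practice the calibration pass runs over representative activation data rather than a fixed list, and per-channel scales are common; hardware support like the N3000's keeps such quantized inference fast without a dequantize-to-FP32 round trip.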
About NEUCHIPS:

NEUCHIPS develops purpose-built AI inference chip platforms from the ground up, co-developing hardware and software to meet customers' requirements for performance, accuracy, power, and cost efficiency. NEUCHIPS is a founding member of MLCommons™. For more information, please visit https://www.neuchips.ai or contact contact@neuchips.ai.

Related Images

Image 1: NEUCHIPS INC. Logo
This content was issued through the press release distribution service at Newswire.com.