EnCharge AI, a startup building hardware to accelerate AI processing, today raised $21.7 million in Series A funding led by Anzu Partners, with participation from AlleyCorp, Scout Ventures, Silicon Catalyst Angels, Shams Ventures, E14 Fund and Alumni Ventures. Co-founder and CEO Naveen Verma told TechCrunch in an email that the proceeds will be used to fund hardware and software development as well as new customer engagements.
“Now was the right time to scale up because the technology has been extensively validated through previous R&D, up through the compute stack,” Verma said. “[It] lays the groundwork for both a clear path to productization (without new technology development) and a value proposition in AI-facing customer applications for market impact… Many edge applications are at a developing stage, with great opportunities where the value of AI is still being defined.”
EnCharge AI was conceived by Verma, Echere Iroaga and Kailash Gopalakrishnan. Verma is the director of Princeton’s Keller Center for Innovation in Engineering Education, while Gopalakrishnan was (until recently) a fellow at IBM, where he worked for 18 years. Iroaga, for his part, previously led semiconductor company Macom’s connectivity business unit as VP and GM.
EnCharge stems from federal grants Verma received in 2017 alongside partners at the University of Illinois at Urbana-Champaign. Through its ongoing Electronics Resurgence Initiative, which aims to broadly advance computer chip technology, DARPA invested $8.3 million in Verma’s research into new non-volatile memory devices.
Unlike the “volatile” memory in today’s computers, non-volatile memory can store data without a continuous power supply, making it theoretically more energy efficient. Flash memory and most magnetic storage devices, including hard disks and floppy disks, are examples of non-volatile memory.
DARPA also funded Verma’s research into performing machine learning calculations “in memory,” here referring to running calculations in RAM to reduce the latency introduced by storage devices.
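The latency argument can be illustrated with a toy benchmark (this is a generic sketch of the idea, not anything specific to EnCharge's hardware): the same reduction is far cheaper when its operands already live in RAM than when every pass has to re-read them from a storage device.

```python
import array
import os
import tempfile
import time

# One million doubles, resident in memory.
values = array.array("d", range(1_000_000))

# Persist the same data once so we can compare against re-reading it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    values.tofile(f)
    path = f.name

def sum_in_memory(data):
    # Operands are already in RAM; no storage round-trip.
    return sum(data)

def sum_from_disk(path):
    # Operands must first be fetched from the storage device.
    data = array.array("d")
    with open(path, "rb") as f:
        data.fromfile(f, 1_000_000)
    return sum(data)

t0 = time.perf_counter()
in_mem = sum_in_memory(values)
t1 = time.perf_counter()
from_disk = sum_from_disk(path)
t2 = time.perf_counter()

print(f"in-memory: {t1 - t0:.4f}s, via storage: {t2 - t1:.4f}s")
os.remove(path)
```

Both paths compute the same result; the difference is purely where the data starts out, which is the bottleneck in-memory computing targets.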
EnCharge was launched to commercialize Verma’s research with hardware built on the standard PCIe form factor. EnCharge’s custom plug-in hardware can accelerate AI applications in servers and “network edge” machines, Verma explained.
While iterating on the hardware, the EnCharge team had to overcome a number of engineering challenges. In-memory computing is sensitive to voltage fluctuations and rising temperatures, so EnCharge designed its chips to use capacitors instead of transistors; capacitors, which store electrical energy, can be manufactured with high precision and are less affected by voltage fluctuations.
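The general idea behind capacitor-based compute can be sketched with a simplified charge-sharing multiply-accumulate: weights are encoded as capacitances, inputs as voltages, and when the capacitors are shorted together, charge conservation yields a capacitance-weighted average of the inputs. This is an illustrative model of the switched-capacitor technique in general, not EnCharge's actual circuit.

```python
def charge_sharing_mac(capacitances, voltages):
    """Model a charge-sharing multiply-accumulate.

    Each capacitor C_i is charged to voltage V_i, then all plates are
    connected. Conservation of charge gives
        V_out = sum(C_i * V_i) / sum(C_i),
    i.e. a capacitance-weighted average of the input voltages.
    """
    total_charge = sum(c * v for c, v in zip(capacitances, voltages))
    total_capacitance = sum(capacitances)
    return total_charge / total_capacitance

# Weights encoded as capacitances (arbitrary units), inputs as voltages.
caps = [1.0, 2.0, 4.0, 1.0]
volts = [0.5, 0.25, 1.0, 0.0]
v_out = charge_sharing_mac(caps, volts)
print(f"output voltage: {v_out:.4f}")
```

Because the weighted sum falls out of device physics rather than digital logic, the precision of the result depends on how precisely the capacitances can be manufactured, which is the property the article highlights.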
EnCharge also needed to create software that would allow customers to adapt their AI systems to its custom hardware. Verma says that, once complete, the software will allow EnCharge’s hardware to work with different neural networks (i.e., AI algorithms) at scale.
“EnCharge’s products offer order-of-magnitude gains in energy efficiency and performance,” said Verma. “This is enabled by highly robust and scalable next-generation technology, proven in generations of test chips and extended to advanced nodes and architectures. It differentiates EnCharge both from digital accelerators, which face fundamental memory and compute efficiency bottlenecks, and from other in-memory computing technologies, which face fundamental technology barriers and limited validation across the compute stack.”
Those are lofty claims, and it’s worth noting that EnCharge hasn’t mass-produced its hardware, nor does it have customers. (Verma says the company is pre-revenue.) A further challenge: EnCharge faces well-funded competition in an already saturated AI accelerator hardware market. Axelera and GigaSpaces are both developing in-memory hardware to accelerate AI workloads. NeuroBlade raised $83 million last October for its in-memory chips for data centers and edge devices. Not to be outdone, Syntiant offers edge AI chips that process speech in memory.
But the funding it’s received so far suggests investors at least have faith in EnCharge’s roadmap.
“As edge AI continues to drive business automation, there is strong demand for sustainable technologies that can deliver dramatic improvements in end-to-end AI inference capabilities along with cost and power efficiency,” Anzu Partners’ Jimmy Kan said in a press release. “EnCharge’s technology addresses these challenges and has been successfully proven in silicon, fully compatible with volume production.”
EnCharge has approximately 25 employees and is based in Santa Clara.