EnCharge AI, a chip startup that has raised $144 million to date, has announced the launch of the EN100, an AI accelerator built on precise and scalable analog in-memory computing.
Designed to bring advanced AI capabilities to laptops, workstations, and edge devices, the EN100 delivers more than 200 TOPS (tera operations per second) of total compute power within the power constraints of edge and client platforms.
The company, a Princeton University spin-out, aims to slash the energy and cost of AI processing through analog in-memory computing.
“The EN100 represents a fundamental shift in AI computing, built on hardware and software innovations that have been de-risked across multiple generations of silicon development,” said Naveen Verma, CEO of EnCharge AI. “These innovations are being made available as products that developers can use today, moving beyond today’s digital solutions to a scalable, programmable AI platform.”
Until now, the models driving the next generation of the AI economy, multimodal and reasoning systems, have demanded massive data-center processing power. The resulting cloud dependency, with its cost, latency, and security shortcomings, has put countless AI applications out of reach.
The EN100 removes these restrictions. With on-device AI inference as the foundation, developers can now deploy sophisticated, secure, personalized applications locally.
This advance lets organizations bring powerful AI technologies to market quickly, democratizing access to high-performance AI.
The EN100, the first product in EnCharge’s EN series, features an optimized architecture that processes AI tasks efficiently while minimizing energy use. Available in two form factors, M.2 for laptops and PCIe for workstations, it is engineered to transform device capabilities:
● M.2 for laptops: Delivering 200+ TOPS within an 8.25W power envelope, the EN100 M.2 enables sophisticated AI applications on laptops without compromising battery life or portability.
● PCIe for workstations: Reaching approximately 1 PetaOPS with four NPUs, the EN100 PCIe card delivers GPU-class compute capability for professional AI applications that rely on complex models and large datasets.
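As a quick sanity check on the figures above, the implied efficiency can be worked out directly. This is a back-of-the-envelope sketch, and the 8.25W laptop power envelope is an assumed reading of the spec, not a figure confirmed here:

```python
# Rough efficiency arithmetic from the article's performance figures.
M2_TOPS = 200             # EN100 M.2: "200+ TOPS" claimed
M2_POWER_W = 8.25         # assumed power envelope for the M.2 form factor
PCIE_OPS = 1e15           # EN100 PCIe: ~1 PetaOPS across four NPUs
NUM_NPUS = 4

tops_per_watt = M2_TOPS / M2_POWER_W            # ~24 TOPS/W for the M.2
pcie_tops_per_npu = PCIE_OPS / 1e12 / NUM_NPUS  # 250 TOPS per NPU

print(f"{tops_per_watt:.1f} TOPS/W, {pcie_tops_per_npu:.0f} TOPS per NPU")
```

Under these assumptions the M.2 card lands in the mid-20s of TOPS per watt, which is consistent with the order-of-magnitude efficiency claims made elsewhere in the article.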
EnCharge AI’s comprehensive software suite offers full platform support along with model tooling that maximizes efficiency. The ecosystem combines specialized optimization tools, high-performance compilation, and extensive development resources, with support for popular frameworks such as PyTorch and TensorFlow.
Compared with competing solutions, the EN100 demonstrates up to ~20x better performance per watt across a variety of AI workloads. With 58GB of high-density LPDDR memory and bandwidth reaching 272 GB/s, it handles sophisticated AI tasks, from generative language models to real-time computer vision, that would ordinarily require specialized data-center hardware. The EN100’s software is optimized for today’s AI models and built to adapt to tomorrow’s.
“The real magic of the EN100 is that it lets our partners put its performance to work easily in their own products,” said Ram Rangarajan, Senior Vice President at EnCharge AI. “On client platforms, the EN100 can make AI experiences not only faster and more reliable, but also more private and personalized, enabling a new generation of sophisticated applications by bringing complex AI capabilities onto the device.”
Early adoption partners have already begun working with EnCharge to map out how the EN100 will deliver transformative AI experiences, such as multimodal AI agents and real-time inference applications.
The first round of the EN100 Early Access Program is now fully subscribed. Interested developers and OEMs can register for the upcoming Round 2 Early Access Program, a unique opportunity to gain a competitive advantage, at www.encharge.ai/en100.
EnCharge does not compete head-on with many of the biggest players because it has a somewhat different focus and strategy: it leads with energy efficiency, the area where its advantage is most compelling.
He said that EnCharge stands apart from the rest of the chip scene in a few ways. For one, EnCharge’s chip is far more energy efficient, roughly 20 times more so, than those of the leading players. Because the chip can still run the most advanced AI models, that makes it an extremely competitive offering for any use case outside a data center.
Second, the chips’ analog in-memory computing approach delivers roughly 30 times the compute density of conventional digital architectures, which is at a premium in laptops, smartphones, and other portable devices where space is scarce. OEMs can integrate powerful AI capabilities without compromising on size, weight, or form factor, letting them build sleeker, more compact products while still delivering performance.
In March 2024, EnCharge AI and Princeton University were selected for DARPA’s Optimum Processing Technology Inside Memory Arrays (OPTIMA) program, a $78 million effort to develop fast, power-efficient, and scalable compute accelerators.
EnCharge’s inspiration came from a critical problem in AI: the inability of traditional computing architectures to meet AI’s needs. The company was founded to address the fact that as AI models balloon in size and complexity, traditional chip architectures (such as GPUs) struggle to keep up and impose punishing energy requirements. (For example, a large language model can consume as much electricity in a year as 130 US homes.)
The specific technical inspiration was Naveen Verma’s research at Princeton University on next-generation computing architectures. Over more than seven years, he and his collaborators examined a range of innovative computing architectures, culminating in a breakthrough in analog in-memory computing.
The approach promises dramatic gains in energy efficiency while overcoming the noise and other difficulties that doomed past attempts at analog computation. That technical breakthrough, de-risked across multiple generations of silicon, was the key to founding EnCharge AI to commercialize analog in-memory computing for AI inference.
EnCharge AI launched in 2022, led by a team with deep experience in semiconductors and AI systems. The team focuses on the AI inference chip and its accompanying software, building on the robust, scalable analog in-memory technology developed at Princeton University.
The company made its analog in-memory chip architecture robust by using precise metal-wire switch capacitors instead of noise-prone transistors. The result is a full-stack architecture that is up to 20 times more energy efficient than the leading digital AI chip solutions available or coming to market.
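Switched-capacitor in-memory computing can be illustrated with a toy model. This is a generic sketch of the technique, not EnCharge’s actual circuit, and the names and the `noise_std` knob are illustrative: each 1-bit weight gates whether a unit capacitor samples its input voltage, and charge sharing on a common output line averages the stored charges, producing a scaled dot product.

```python
import random

def analog_mac(inputs, weights, noise_std=0.0, seed=0):
    """Toy model of a switched-capacitor in-memory multiply-accumulate.

    Each 1-bit weight gates whether a unit capacitor samples its input
    voltage; charge sharing on the common output line then averages the
    stored charges, yielding dot(inputs, weights) / n on the line.
    """
    rng = random.Random(seed)
    n = len(weights)
    charges = [v * w for v, w in zip(inputs, weights)]  # per-capacitor charge
    line = sum(charges) / n                             # charge sharing = average
    line += rng.gauss(0.0, noise_std)                   # mismatch/thermal noise
    return line

x = [0.5, 0.25, 1.0, 0.0]
w = [1, 0, 1, 1]
digital = sum(v * b for v, b in zip(x, w))  # exact dot product: 1.5
analog = analog_mac(x, w) * len(w)          # noise-free analog readout: 1.5
```

Real designs also quantize the line voltage with ADCs and must contend with capacitor mismatch; the `noise_std` parameter crudely stands in for those effects, and the appeal of metal-wire capacitors is that they keep such errors small and predictable.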
With this technology, EnCharge changes how, and where, AI computation happens. By dramatically reducing the energy required for AI computation, it moves leading AI workloads out of the data center and onto laptops, workstations, and edge devices. On-device inference enables a new generation of applications that energy, weight, or size restrictions previously made impossible, while keeping computation close to where data is created, with benefits for latency, security, and cost.
As AI models grow in size and complexity, their compute and energy requirements have skyrocketed. Today, the vast majority of AI inference runs in cloud data centers, on massive clusters of energy-hungry chips housed in warehouses. This creates cost, latency, and security barriers to deploying AI in settings that demand on-device computation.
What could transformative gains in compute efficiency unlock if leading AI could leave the data center for local devices with tight size, weight, and power constraints, or strict privacy requirements? Dramatically lowering the cost of, and broadening access to, leading AI would have downstream effects across industries, from consumer electronics to aerospace and defense.
Reliance on data centers also carries supply-chain bottleneck risks. Demand for high-end graphics processing units (GPUs) alone could increase total demand for certain components by 30% or more by 2026. Yet a demand increase of even about 20% is likely enough to upset the supply balance and cause a chip shortage. Companies anticipating massive AI expenditures are already buying up all available stock of the latest GPUs and joining long waiting lists.
The environmental and energy demands of these data centers are likewise unsustainable with existing technology. The power usage of a single Google search, for instance, is estimated to rise to roughly 7.9 watt-hours once AI is incorporated. In aggregate, the International Energy Agency (IEA) projects that data centers’ electricity consumption will double by 2026, roughly matching the current total consumption of Japan.
Investors include Tiger Global Management, Samsung Ventures, IQT, RTX Ventures, VentureTech Alliance, Anzu Partners, AlleyCorp, and ACVC Partners. The company has 66 employees.