Describing itself as "a stealth startup .... focused on architecting low power, scalable and easily programmable hardware accelerators for AI/ML applications", Areanna is designing hardware accelerators for AI/ML. The goal is a fast, scalable, area- and power-efficient matrix multiplier for machine learning workloads: matrix multiplication lies at the heart of virtually all machine learning algorithms and is the most computationally expensive task in these applications.

Most hardware accelerator solutions store inputs, weights and partial sums in memory and retrieve them sequentially in order to perform matrix multiplication. The data movement between memory and computational units dominates the overall power consumption and latency of the system. "By performing computations in memory, significant power and area savings can be achieved," says Behdad Youssefi, CEO of Areanna.

Areanna's architecture does just that: all computation is performed in a modified SRAM, where multiplication is performed digitally and summation via analog processing. The architecture is capable of better than 100 TOPS/W and is general purpose, so it can serve a number of different use cases, ranging from voice and image recognition on edge devices, to digital signal processing for IoT applications, to processing in the cloud.
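To see why data movement dominates in the conventional approach, consider a naive matrix multiply. The sketch below (illustrative only, not Areanna's implementation) counts the memory reads and writes implied by fetching inputs, weights and partial sums for every multiply-accumulate:

```python
def matmul_access_count(n):
    """Count memory reads/writes in a naive n x n matrix multiply.

    Models the conventional accelerator pattern: every multiply-accumulate
    fetches one input and one weight from memory, and each result is
    written back once. This is the traffic in-memory compute avoids.
    """
    reads = writes = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                reads += 2   # fetch A[i][k] and B[k][j] for one MAC
            writes += 1      # store the finished C[i][j]
    return reads, writes

reads, writes = matmul_access_count(64)
print(reads, writes)  # 524288 reads vs 4096 writes
```

Memory traffic thus scales as O(n^3) for only O(n^2) results, which is why computing inside the SRAM array, rather than shuttling operands to a separate datapath, can cut both power and latency.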