Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and spent 18 years at Xilinx, where he held multiple senior leadership roles, including Senior Vice President and GM of the general business and Executive Vice President of worldwide sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while laying the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the boards of directors of both public and private corporations.
What initially attracted you to machine learning?
I’ve been a student of the embedded edge and cloud markets for the past 20 years. I’ve seen tons of innovation in the cloud, but little or none toward enabling machine learning at the edge. It’s a massively underserved $40B+ market that has been surviving on old technology for a long time.
So, we launched into something nobody had done before: enabling Effortless ML for the embedded edge.
Could you share the genesis story behind SiMa?
In my 20+ year career, I had yet to witness architecture innovation happening in the embedded edge market. Yet the need for ML on the embedded edge kept increasing, driven by the cloud and parts of IoT. While companies are demanding ML at the edge, the technology available to make it a reality is simply too stodgy to actually work.
So, before SiMa.ai even began on our design, it was vital to understand our customers’ biggest challenges. Getting them to spend time with an early-stage startup and draw out meaningful, candid feedback was its own challenge, though. Luckily, the team and I were able to leverage our network of past relationships to validate SiMa.ai’s vision with the right target companies.
We met with over 30 customers and asked two basic questions: “What are the biggest challenges scaling ML to the embedded edge?” and “How can we help?” After many discussions about how they wanted to reshape the industry, and after listening to the challenges standing in their way, we gained a deep understanding of their pain points and developed ideas on how to solve them. These include:
- Getting the benefits of ML without a steep learning curve.
- Preserving legacy applications while future-proofing ML implementations.
- Working with a high-performance, low-power solution in a user-friendly environment.
We quickly realized that we needed to deliver a risk-mitigated, phased approach to help our customers. As a startup, we had to bring something compelling and differentiated from everyone else. No other company was addressing this clear need, so this was the path we chose to take.
SiMa.ai achieved this rare feat by architecting, from the ground up, the industry’s first software-centric, purpose-built Machine Learning System-on-Chip (MLSoC) platform. With its combination of silicon and software, machine learning can now be added to embedded edge applications with the push of a button.
Could you share your vision of how machine learning will reshape everything at the edge?
Most ML companies focus on high-growth markets such as cloud and autonomous driving. Yet it’s robotics, drones, frictionless retail, smart cities, and industrial automation that demand the newest ML technology to improve efficiency and reduce costs.
These growing sectors, coupled with current frustrations deploying ML on the embedded edge, are why we believe the time is ripe with opportunity. SiMa.ai is approaching this problem in a very different way; we intend to make widespread adoption a reality.
What has so far prevented scaling machine learning at the edge?
Machine learning must integrate easily with legacy systems. Fortune 500 companies and startups alike have invested heavily in their current technology platforms, and most of them will not rewrite all their code or completely overhaul their underlying infrastructure to integrate ML. To mitigate risk while reaping the benefits of ML, there must be technology that allows seamless integration of legacy code alongside ML. This creates a straightforward path to develop and deploy these systems to handle application needs while providing the intelligence that machine learning brings.
There are no big sockets here; no single large customer is going to move the needle. So we had no choice but to be able to support a thousand-plus customers to truly scale machine learning and bring the experience to them. We discovered that these customers want ML but lack the internal knowledge base and the capacity to acquire it. They want the ML experience without the embedded edge learning curve, and what it quickly came down to is that we have to make this ML experience effortless for customers.
How is SiMa able to decrease power consumption so dramatically compared to competitors?
Our MLSoC is the underlying engine that enables everything; it is important to clarify that we are not building an ML accelerator. For the two billion dollars invested in edge ML SoC startups, the industry’s response to innovation has been an ML accelerator block as a core or a chip. What people are not recognizing is that to migrate customers from a classic SoC to an ML environment, you need an MLSoC environment, so customers can run legacy code from day one and gradually, in a phased, risk-mitigated way, move capability into an ML component. One day they may do semantic segmentation using a classic computer vision approach; the next day they may do it using an ML approach. We give our customers the freedom to deploy and partition their problem as they see fit, using classic computer vision, classic ARM processing, or heterogeneous ML compute.

To us, ML is not an end product, and therefore an ML accelerator is not going to be successful by itself. ML is a capability, a toolkit alongside the other tools we give our customers. Using a push-button methodology, they can iterate their design of pre-processing, post-processing, analytics, and ML acceleration all on a single platform while delivering the best system-wide application performance at the lowest power.
What are some of the primary market priorities for SiMa?
We have identified several key markets, some of which are quicker to revenue than others. The quickest time to revenue is in smart vision, robotics, Industry 4.0, and drones. The markets that take a bit more time, due to qualifications and standards requirements, are automotive and healthcare applications. We have broken ground in all of the above, working with the top players in each category.
Image capture has generally been done at the edge, with analytics in the cloud. What are the benefits of shifting this deployment strategy?
Edge applications need the processing to be done locally; for many applications there simply is not enough time for the data to go to the cloud and back. ML capability is fundamental in edge applications because decisions must be made in real time, for example in automotive applications and robotics, where decisions must be processed quickly and efficiently.
Why should enterprises consider SiMa solutions versus your competitors?
Our unique methodology is a software-centric approach packaged with a complete hardware solution. We have focused on a complete solution that addresses what we like to call Any, 10x, and Pushbutton as the core of customer issues. The original thesis for the company is: you push a button and you get a WOW! The experience really must be abstracted to the point where thousands of developers can use it. You don’t want to require them all to be ML geniuses, hand-tuning layer by layer to get the desired performance; you want them to stay at the highest level of abstraction and quickly deploy effortless ML. The reason we latched onto this thesis is its strong correlation with scaling: it has to be a simple ML experience, without the heavy hand-holding and services engagements that get in the way of scale.
We spent the first year visiting 50-plus customers globally, trying to understand one thing: you all want ML, but you’re not deploying it. Why? What gets in the way of meaningfully deploying ML, and what is required to push ML into scale deployment? It comes down to three key pillars of understanding, the first being ANY. As a company we have to solve problems across the breadth of customers and use models, with all the disparity between ML networks, sensors, frame rates, and resolutions. It’s a very disparate world where each market has completely different front-end designs, and if we take only a narrow slice of it, we cannot economically build a company. We have to create a funnel capable of taking in a very wide range of application spaces; think of the funnel as the Ellis Island of everything computer vision. People could be in TensorFlow, they could be using Python, they could be using a camera sensor at 1080p or 4K resolution. It doesn’t matter, as long as we can homogenize and bring them all in. If you don’t have a front end like this, you don’t have a scalable company.
The second pillar is 10x. Part of the reason customers are unable to deploy and create derivative platforms is that everything is a return to scratch to adopt a new model or pipeline. And there is no doubt that as a startup we need to bring something exciting and compelling enough that anybody and everybody is willing to take the risk on a startup, based on a 10x performance metric. The one key technical merit we focus on for computer vision problems is frames per second per watt. We must be dramatically better than anybody else so we can stay a generation or two ahead, and we took this on as part of our software-centric approach. That approach created a heterogeneous compute platform so people can solve the complete computer vision pipeline on a single chip and deliver 10x compared to any other solution.

The third pillar, Pushbutton, is driven by the need to scale ML on the embedded edge in a meaningful way. ML tool chains are nascent and frequently broken; no single company has built a world-class ML software experience. We further recognized that for the embedded market, it is vital to mask the complexity of the embedded code while giving customers an iterative process to quickly come back, update, and optimize their platforms. Customers want a pushbutton experience that gives them a response or a solution in minutes rather than months, achieving effortless ML. Any, 10x, and Pushbutton are the key value propositions; it became very clear to us that if we do a bang-up job on these three things, we will absolutely move the needle on effortless ML and on scaling ML at the embedded edge.
Is there anything else that you would like to share about SiMa?
In the early development of the MLSoC platform, we were pushing the boundaries of technology and architecture. We went all-in on a software-centric platform, a completely new approach that went against the grain of all conventional wisdom. The journey of figuring it out and then implementing it was hard.
A recent monumental win validates the strength and uniqueness of the technology we’ve built. SiMa.ai achieved a major milestone in April 2023 by outperforming the incumbent leader in our debut MLPerf benchmark submission, in the Closed Edge Power category. We’re proud to be the first startup to participate, and to achieve winning results for performance and power, in the industry’s most popular and well-recognized MLPerf benchmark, ResNet-50.
We began with lofty aspirations, and to this day, I’m proud to say that vision has remained unchanged. Our MLSoC was purpose-built to go against industry norms and deliver a revolutionary ML solution to the embedded edge market.