Razi Raziuddin, Co-Founder & CEO of FeatureByte – Interview Series


Razi Raziuddin is the Co-Founder & CEO of FeatureByte; his vision is to unlock the last major hurdle to scaling AI within the enterprise. Razi's analytics and growth experience spans the leadership teams of two unicorn startups. He helped scale DataRobot from 10 to 850 employees in under six years, pioneering a services-led go-to-market strategy that became the hallmark of DataRobot's rapid growth.

FeatureByte is on a mission to scale enterprise AI by radically simplifying and industrializing AI data. Its feature engineering and management (FEM) platform empowers data scientists to create and share state-of-the-art features and production-ready data pipelines in minutes instead of weeks or months.

What initially attracted you to computer science and machine learning?

As someone who began coding in high school, I was fascinated by a machine that I could "talk" to and control through code. I was immediately hooked on the infinite possibilities of new applications. Machine learning represented a paradigm shift in programming, allowing machines to learn and perform tasks without the steps ever being specified in code. The infinite potential of ML applications is what gets me excited every day.

You were the first business hire at DataRobot, an automated machine learning platform that enables organizations to become AI-driven. You then helped scale the company from 10 to 1,000 employees in under six years. What were some key takeaways from this experience?

Going from zero to one is hard, but incredibly exciting and rewarding. Each stage in the company's evolution presents a different set of challenges, but seeing the company grow and succeed is a tremendous feeling.

My experience with AutoML opened my eyes to the unbounded potential of AI. It's fascinating to see how this technology can be used across so many different industries and applications. At the end of the day, creating a new category is a rare feat, but an incredibly rewarding one. My key takeaways from the experience:

  • Build a great product and avoid chasing fads
  • Don't be afraid to be a contrarian
  • Focus on solving customer problems and providing value
  • Always be open to innovation and trying new things
  • Create and instill the right company culture from the very start

Could you share the genesis story behind FeatureByte?

It's a well-known fact in the AI/ML world that great AI starts with great data. But preparing, deploying and managing AI data (or features) is complex and time-consuming. My co-founder, Xavier Conort, and I saw this problem firsthand at DataRobot. While modeling has become vastly simplified thanks to AutoML tools, feature engineering and management remains an enormous challenge. Based on our combined experience and expertise, Xavier and I felt we could truly help organizations solve this challenge and deliver on the promise of AI everywhere.

Feature engineering is at the core of FeatureByte. Could you explain what this is for our readers?

Ultimately, the quality of data drives the quality and performance of AI models. The data that is fed into models to train them and predict future outcomes is known as features. Features represent information about entities and events, such as demographic or psychographic data about customers, the distance between a cardholder and a merchant for a credit card transaction, or the number of items of various categories in a store purchase.

The process of transforming raw data into features, in order to train ML models and predict future outcomes, is known as feature engineering.
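As a concrete illustration of the idea, here is a minimal sketch in plain Python (not the FeatureByte SDK; the event records, field names and window choice are hypothetical) that turns raw credit card transactions into two model-ready features for a card: the transaction count and average amount over a trailing seven-day window.

```python
from datetime import datetime, timedelta

# Hypothetical raw event data: card transactions (illustrative only)
transactions = [
    {"card_id": "c1", "amount": 25.0, "ts": datetime(2023, 3, 1, 10)},
    {"card_id": "c1", "amount": 80.0, "ts": datetime(2023, 3, 3, 14)},
    {"card_id": "c2", "amount": 12.5, "ts": datetime(2023, 3, 2, 9)},
    {"card_id": "c1", "amount": 40.0, "ts": datetime(2023, 2, 1, 8)},
]

def card_features(events, card_id, as_of, window_days=7):
    """Aggregate raw transactions into features for one card:
    transaction count and average amount over a trailing window
    ending at `as_of`."""
    start = as_of - timedelta(days=window_days)
    recent = [
        e["amount"]
        for e in events
        if e["card_id"] == card_id and start <= e["ts"] <= as_of
    ]
    return {
        "txn_count_7d": len(recent),
        "avg_amount_7d": sum(recent) / len(recent) if recent else 0.0,
    }

print(card_features(transactions, "c1", as_of=datetime(2023, 3, 4)))
# -> {'txn_count_7d': 2, 'avg_amount_7d': 52.5}
```

Even this toy version hints at the real-world complexity: the `as_of` timestamp matters (to avoid leaking future data into training), the window size is a modeling decision, and running such aggregations at scale on live data is an engineering problem in its own right.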

Why is feature engineering one of the most complicated aspects of machine learning projects?

Feature engineering is critically important because the process is directly responsible for the performance of ML models. Good feature engineering requires three fairly independent skills to come together: domain knowledge, data science and data engineering. Domain knowledge helps data scientists determine what signals to extract from the data for a particular problem or use case. You need data science skills to extract those signals. And finally, data engineering helps you deploy pipelines and perform all those operations at scale on large data volumes.

In the vast majority of organizations, these skills live in different teams. These teams use different tools and don't communicate well with one another. This creates a lot of friction in the process and slows it to a grinding halt.

Could you share some insight on why feature engineering is the weakest link in scaling AI?

According to Andrew Ng, a renowned expert in AI, "Applied machine learning is basically feature engineering." Despite its criticality to the machine learning lifecycle, feature engineering remains complex, time-consuming and dependent on expert knowledge. There is a serious dearth of tools to make the process easier, quicker and more industrialized. The effort and expertise required hold enterprises back from being able to deploy AI at scale.

Could you share some of the challenges behind building a data-centric AI solution that radically simplifies feature engineering for data scientists?

Building a product that has a 10X advantage over the status quo is super hard. Thankfully, Xavier has deep data science expertise that he is employing to rethink the entire feature workflow from first principles. We have a world-class team of full-stack data scientists and engineers who can turn our vision into reality, and users and development partners who advise us on streamlining the UX to best solve their challenges.

How will the FeatureByte platform speed up the preparation of data for machine learning applications?

Data preparation for ML is an iterative process that relies on rapid experimentation. The open-source FeatureByte SDK is a declarative framework for creating state-of-the-art features with just a few lines of code and deploying data pipelines in minutes instead of weeks or months. This allows data scientists to focus on creative problem solving and iterating rapidly on live data, rather than worrying about the plumbing.

The result is not only faster data preparation and serving in production, but also improved model performance through powerful features.

Can you discuss how the FeatureByte platform will additionally offer the ability to streamline various ongoing management tasks?

The FeatureByte platform is designed to manage the end-to-end ML feature lifecycle. The declarative framework allows FeatureByte to deploy data pipelines automatically, while extracting metadata that is relevant to managing the overall environment. Users can monitor pipeline health and costs, and manage the lineage, versions and correctness of features, all from the same GUI. Enterprise-grade role-based access and approval workflows ensure data privacy and security, while avoiding feature sprawl.

Is there anything else that you would like to share about FeatureByte?

Most enterprise AI tools focus on improving machine learning models. We have made it our mission to help enterprises scale their AI by simplifying and industrializing AI data. At FeatureByte, we address the biggest challenge for AI practitioners: providing a consistent, scalable way to prep, serve and manage data across the entire lifecycle of a model, while radically simplifying the whole process.

If you're a data scientist or engineer interested in staying on the cutting edge of data science, I'd encourage you to experience the power of FeatureByte for free.
