Mohammad Omar, Co-Founder & CEO of LXT – Interview Series

Mohammad Omar is the Co-Founder & CEO of LXT, an emerging leader in AI training data that powers intelligent technology for global organizations, including the biggest technology firms on the planet. In partnership with a global network of contributors, LXT collects and annotates data across multiple modalities with the speed, scale, and agility required by the enterprise. Founded in 2010, LXT is headquartered in Toronto, Canada with a presence in the US, Australia, India, Turkey, the UK, and Egypt.

Could you share the genesis story behind LXT?

LXT was founded in response to an acute need for data that my employer at the time was facing. The company needed Arabic data but didn't have the right suppliers from which to source it. Being a risk-taker and entrepreneur by nature, I decided to resign from my role, set up a new company, and turn right back around to offer our services to my former employer. We immediately took on some of their most difficult projects, which we successfully delivered on, and things grew from there. Now, over twelve years later, we have built a strong relationship with this company, becoming a go-to supplier for high-quality language data.

What are some of the biggest challenges behind deploying AI at scale?

That's a great question, and we actually covered it in our latest research report, The Path to AI Maturity. The top challenge that respondents cited was integrating AI solutions with their existing or legacy systems. This makes sense given that we surveyed larger firms, which most likely have an array of tech systems across their organizations that need to be rationalized into a digital transformation strategy. Other challenges that respondents ranked highly were a shortage of skilled talent, a lack of training or resources, and sourcing quality data. I wasn't surprised by these responses, as they are commonly cited, and of course because the data challenge is our organization's reason for being.

On the subject of data challenges, LXT can both source data and label it so that machine learning algorithms can make sense of it. We are equipped to do this at scale and with agility, meaning that we deliver high-quality data very quickly. Clients often come to us when they are preparing for a launch and want to be sure that their product is well received by customers.

By working with us to source and label data, firms can address their resource and talent shortages, allowing their teams to focus on building innovative solutions.

LXT offers coverage for over 750 languages, but there are translation and localization challenges that go beyond the structure of language itself. Could you discuss how LXT confronts these challenges?

There certainly are translation and localization challenges – especially when you branch out beyond the most widely spoken languages, which tend to have official status and the level of standardization that goes with it. Many of the languages that we work in have no official orthography, so managing consistency across a team becomes a challenge. We address these and other challenges – e.g. detection of fraudulent behavior – by having rigorous quality assurance processes in place. Again, it was very apparent in the AI maturity research report that for most organizations working with AI data, quality sat at the top of the list of priorities. And most organizations surveyed expressed a willingness to pay more to get it.

For firms that require data sourcing and data annotation, how early in the application development journey should they start sourcing this data?

We recommend that organizations create a data strategy as soon as they identify their AI use case. Waiting until the application is in development can result in a lot of unnecessary rework, because the AI may learn the wrong things and then need to be retrained with quality data, which takes time to source and integrate into the development process.

What's the rule of thumb for knowing how frequently data should be updated?

It really depends on the type of application you're developing and how often the data that supports it changes in a significant way. Data is a representation of real life, and over time it must be updated to provide an accurate reflection of what is happening in the world. We call this phenomenon model drift, of which there are two types, each requiring the retraining of algorithms.

  • Concept drift occurs when the relationship between the training data and the AI output changes in a significant way, which can happen suddenly or more gradually. For example, a retailer might use historical customer data to train an AI application. But if a big shift in consumer behavior occurs, the algorithm will need to be retrained in order to reflect this.


  • Data drift takes place when the data used to train an application no longer reflects the actual data encountered in production. This can be caused by a range of factors, including demographic shifts, seasonality, or the deployment of an application in a new geographic region.
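In practice, teams often catch data drift by comparing the distribution of a production feature against its training-time distribution. As a minimal illustration (this is a generic monitoring sketch, not a description of LXT's tooling), here is a Population Stability Index check in plain Python; the rule-of-thumb thresholds shown in the docstring are common industry conventions, not values from the interview:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time ("expected") sample
    and a production ("actual") sample of one numeric feature.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    # Bin edges from the training range; open-ended outer bins catch
    # production values that fall outside what training ever saw.
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

train = [float(i % 50) for i in range(1000)]            # training-time values
prod_ok = [float(i % 50) for i in range(1000)]          # production matches training
prod_shifted = [float(i % 50) + 30 for i in range(1000)]  # distribution has moved

print(psi(train, prod_ok) < 0.1)        # no drift detected
print(psi(train, prod_shifted) > 0.25)  # major drift: time to retrain
```

A check like this would run on a schedule against fresh production data, with a high PSI triggering the data-refresh and retraining cycle described above.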

LXT recently unveiled a report titled "The Path to AI Maturity 2023". What were some of the takeaways in this report that took you by surprise?

It probably shouldn't have come as a surprise, but the thing that really stood out was the sheer variety of applications. You might have expected two or three domains of activity to dominate, but when we asked where respondents planned to focus their AI efforts, and where they planned to deploy their AI, it initially looked like chaos – the absence of any trend at all. But on sifting through the data and looking at the qualitative responses, it became clear that the absence of a trend is the trend. At least through the eyes of our respondents, if you have a problem, then there is a real possibility that somebody is working on an AI solution to it.

Generative AI is taking the world by storm. What's your view on how far generative language models can take the industry?

My personal take on this is that central to the real power of Generative Artificial Intelligence – I'm choosing to use the words here rather than the abbreviation for emphasis – is Natural Language Understanding. The 'intelligence' of AI is learned through language; the ability to address and ultimately solve complex problems is mediated through iterative and cumulative natural language interactions. With this in mind, I believe generative language models will be in lockstep with other elements of AI all the way.

What's your vision for the future of AI and for the future of LXT?

I'm an optimist by nature, and that will color my response here, but my vision for the future of AI is to see it improve quality of life for everyone; to make our world a safer place, a better place for future generations. At a micro level, my vision for LXT is to see the organization continue to build on its strengths, to grow and become an employer of choice and a force for good for the global community that makes our business possible. At a macro level, my vision for LXT is to contribute in a significant, meaningful way to the achievement of my optimistically skewed vision for the future of AI.

Thank you for the great interview; readers who wish to learn more should visit LXT.
