From experience, we've learned that running AI on a laptop for production volumes just doesn't cut it. The demands are too high and time is too precious. In fact, it wasn't long ago that our own machine learning system's entire code base sat on a single server. It quickly became apparent that existing AI infrastructure solutions did not support the complex ensemble models we needed to achieve high-precision results. We needed a new system, built from the ground up and complete with model orchestration, data pipelines, elastic server capacity, and more – and we had to do it all without the advantages the GAMFA companies (Google, Amazon, Microsoft, Facebook, and Apple) enjoy: deep AI talent, massive computing power, and lots of data.
At GLYNT we've done the development, testing, and vetting to create a machine learning system with the infrastructure and scale to support operations, untethering engineers from time-consuming laptop workflows and powering more efficient products. AI is no longer a collection of fun projects. At GLYNT, we understand that AI at scale can be the heart and soul of operations, marketing, and sales.
The following are the five primary ingredients we’ve discovered for making AI scalable:
1. Self-Service is Better
As well-intentioned as they are, 1-800 customer service numbers can be obstacles to innovation. Self-service, on the other hand, allows a wide variety of ideas to be tried and tested, with lots of possible unintended positive outcomes. Self-service enforces a product definition that delivers much more than business model scalability.
2. Elastic Workbench
AI models are only as good as their engineering infrastructure. AI models are already complex; pile on version control, file management, security, and Big Data, and you are asking a lot of the underlying support systems. And who wants to blow the entire budget on compute costs? Elasticity in server provisioning is paramount. We tackled these issues week after week. Sadly, there is no shortcut. But now we can add new models to our Elastic AI Workbench – beyond GLYNT's core text extraction offering – supporting a host of AI solutions at production-grade scale.
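The scale-to-demand idea behind elastic provisioning can be sketched in a few lines. This is an illustrative policy only, not GLYNT's actual infrastructure code; the jobs-per-worker ratio and the floor and ceiling values are assumptions:

```python
import math

def target_workers(queued_jobs: int, jobs_per_worker: int = 20,
                   min_workers: int = 1, max_workers: int = 50) -> int:
    """Size the worker pool to the backlog: enough capacity to drain
    the queue, but capped so compute costs cannot run away."""
    if queued_jobs <= 0:
        return min_workers
    needed = math.ceil(queued_jobs / jobs_per_worker)
    return max(min_workers, min(max_workers, needed))

print(target_workers(0))       # quiet period: scale down to the floor -> 1
print(target_workers(95))      # 95 jobs at 20 per worker -> 5
print(target_workers(10_000))  # spike: clamped at the cost ceiling -> 50
```

The same shape of policy appears in most autoscalers: a target ratio of load to capacity, bounded on both ends so idle periods and spikes are equally cheap to survive.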
3. Low Shot Learning
Simply put, Low Shot Learning is a machine learning technique that reduces the scale of training data by 99%. In GLYNT's application area – extracting unstructured data from complex documents – typical machine learning systems require training sets of 15,000 – 300,000 documents. For GLYNT, Low Shot Learning reduces that number to below 10. With data flows and training scale at only 1% of the former mark, everything moves faster with less risk. Faster with less risk also means cheaper. It's a rare triple win that speeds AI to scale.
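To make the low-shot idea concrete, here is a minimal sketch of one common approach: labeling a new document by its similarity to a handful of labeled examples (a "support set") instead of training on thousands. The trigram embedding, field names, and sample documents are purely illustrative assumptions; GLYNT's actual models are more sophisticated:

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash character trigrams into a fixed-size
    unit vector. A real low-shot system would use a pretrained encoder."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def low_shot_classify(query: str, support: dict) -> str:
    """Label a query by its nearest neighbor among a handful of labeled
    examples -- no large training corpus required."""
    q = embed(query)
    best_label, best_sim = None, -1.0
    for label, examples in support.items():
        for example in examples:
            sim = float(q @ embed(example))  # cosine similarity of unit vectors
            if sim > best_sim:
                best_label, best_sim = label, sim
    return best_label

# Fewer than 10 labeled examples per document type, per the low-shot premise.
support = {
    "invoice": ["Invoice #1234 Amount Due: $500", "Invoice total payable: $99"],
    "receipt": ["Receipt for payment received, thank you",
                "Payment receipt: paid in full"],
}
print(low_shot_classify("Invoice #9876 Amount Due: $120", support))  # -> invoice
```

The point of the sketch is the ratio: two labeled examples per class stand in for the tens of thousands a conventionally trained classifier would need.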
4. Transparency and Interpretability
The market now demands that AI be transparent. Customers will no longer accept "AI magic" or black-box solutions. With that in mind, GLYNT is designed with a series of intermediate computational models used to debug and fine-tune the machine learning results. These internal details let us see exactly what the AI sees and how the data is being used – the basis for tracking down biases in models and data and bringing questions to the surface. When AI is transparent, it fosters demand for scale.
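One simple way to get the kind of transparency described above is to record every intermediate result as a document moves through a pipeline, so the path from input to output can be audited rather than just the final answer. This is a generic sketch under that assumption; the stages shown are hypothetical, not GLYNT's actual computational models:

```python
from dataclasses import dataclass, field

@dataclass
class TracedPipeline:
    """Run a sequence of named stages and keep every intermediate result,
    so the full path from input to output can be inspected."""
    stages: list                            # list of (name, callable) pairs
    trace: list = field(default_factory=list)

    def run(self, value):
        self.trace = [("input", value)]
        for name, fn in self.stages:
            value = fn(value)
            self.trace.append((name, value))  # record each stage's output
        return value

# Illustrative stages for a toy amount-extraction task:
pipeline = TracedPipeline(stages=[
    ("normalize", str.lower),
    ("tokenize", str.split),
    ("extract_amount", lambda tokens: [t for t in tokens if t.startswith("$")]),
])
result = pipeline.run("Amount Due: $500 by March 1")
# Every intermediate value is available for debugging and bias audits:
for name, value in pipeline.trace:
    print(name, "->", value)
print(result)  # -> ['$500']
```

Because each stage's output is retained, a surprising final result can be traced back to the exact step where it went wrong, which is what turns a black box into something debuggable.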
5. 10x Improvement
A software engineering rule of thumb states that customers won't adopt a new product unless it delivers a 10x improvement over their current solution. It's easy to see how this applies to obvious product features, such as accuracy or speed. But it also applies to the engineering infrastructure, deep inside the product. When scalability is built in, delivering results is less risky. Thus good, scalable engineering infrastructure, such as our Elastic AI Workbench, is welcomed with relief by customers' software teams. The low-drama way to scale is part of the 10x improvement that delivers scalability itself.