

The Center for Business Analytics is pleased to recognize Intel as an honoree of the 2023 Drexel LeBow Analytics 50 Awards. Read more about how Intel used analytics to solve a business challenge.


Melvin Greer, PhD, Fellow and Chief Data Scientist


Intel Corporation



Business Challenge

One of the most significant roadblocks for organizations that want to start or expand their use of deep learning is the complexity of moving from the training stage to the inference and deployment stage. Deep learning has become the de facto approach for building highly accurate models for image classification, object detection, real-time video analytics and many other problems. These gains in accuracy have, however, increased the complexity of model development, training and deployment.

Analytics Solution

Melvin Greer, PhD, fellow and chief data scientist at Intel Corp., created an AI tool that integrates and automates the training and deployment phases into a single workflow. Instead of relying on a data scientist to select, fine-tune and hand-craft a neural network architecture during training, Greer’s solution automatically designs a custom architecture for the labeled training data at hand, folding deployment optimization into the training stage itself. The result is a more streamlined workflow with less room for error, more models deployed and a simpler model-management experience. The tool performs this architecture discovery and training on efficient Intel Field Programmable Gate Array (FPGA) accelerators, the same hardware acceleration that many organizations already deploy to boost the performance of models in production.
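The idea of automatically discovering an architecture for a given labeled dataset can be sketched as a search loop over candidate configurations. The sketch below is purely illustrative: the function names, the (depth, width) search space and the proxy scoring heuristic are all assumptions for demonstration, not Intel's actual tool or method.

```python
import random

# Illustrative only: a toy random search over candidate network
# configurations, standing in for automated architecture discovery.
# The search space and scoring rule are invented for this sketch.

def candidate_architectures(n, seed=0):
    """Sample n hypothetical (depth, width) configurations."""
    rng = random.Random(seed)
    return [{"depth": rng.randint(2, 8),
             "width": rng.choice([16, 32, 64, 128])}
            for _ in range(n)]

def proxy_score(arch, n_labels):
    """Stand-in for validation accuracy: rewards capacity roughly
    matched to the label count, so oversized models (which cost more
    to deploy) score lower."""
    capacity = arch["depth"] * arch["width"]
    target = 8 * n_labels
    return 1.0 / (1.0 + abs(capacity - target) / target)

def search(n_labels, trials=50):
    """Pick the best-scoring candidate for the labeled data."""
    return max(candidate_architectures(trials),
               key=lambda a: proxy_score(a, n_labels))

if __name__ == "__main__":
    print(search(n_labels=10))
```

A production system would replace `proxy_score` with real training and validation on the labeled data, and run the search on accelerator hardware; the point here is only the shape of the loop: generate candidates, score them against the data, keep the best.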


This full-ecosystem approach, applying advanced technology to both the training and deployment stages, means that models created during training can go straight into production. There is no quantization, pruning or fusing to perform, and the model needs no conversion, translation or re-evaluation, since training already produced the most efficient design for the data. This cuts deployment time from days to minutes, reduces deployment cost to a fraction of the traditional method and requires fewer data science resources. The result is lower ongoing system complexity, better overall financial performance and greater technical agility to deploy newer models as system needs change over time.
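The contrast between the two pipelines can be made concrete by listing the hand-off steps each one requires between training and deployment. The step names below are generic industry terms, not Intel's terminology, and the pipeline definitions are assumptions for illustration.

```python
# Hypothetical comparison of a conventional deployment pipeline
# versus the integrated train-to-deploy workflow described above.
# Step names are generic, not taken from Intel's tool.

TRADITIONAL = ["train", "quantize", "prune", "fuse",
               "convert", "revalidate", "deploy"]
INTEGRATED = ["train_with_architecture_search", "deploy"]

def handoff_steps(pipeline):
    """The steps between training and deployment, i.e. the
    error-prone middle that the integrated workflow removes."""
    return pipeline[1:-1]

print(handoff_steps(TRADITIONAL))
# ['quantize', 'prune', 'fuse', 'convert', 'revalidate']
print(handoff_steps(INTEGRATED))
# []
```

Each eliminated step is one less place where a model can be degraded or mis-converted between the data science team and production, which is where the claimed reduction in time, cost and staffing comes from.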