Dec 19, 2016 | Atlanta, GA
Written by Gary Goettling
A child born today will probably ride to middle school in a driverless car or bus, guided safely along its route by machine-learning algorithms.
Machine learning is a branch of artificial intelligence in which computers are trained to learn from data so as to perform tasks on their own, whether detecting anomalies in a secure computer network, accurately predicting customer demand, or navigating an autonomous vehicle through traffic.
For Georgia Tech engineers, machine learning presents both opportunities and challenges — and the H. Milton Stewart School of Industrial & Systems Engineering (ISyE) is well positioned to take a leading role.
“The ISyE School at Georgia Tech has worked very hard over many years to establish an outstanding reputation for combining innovation with applications in engineering,” said Sebastian Pokutta, David M. McKenney Family Associate Professor and Associate Director of Research for the Center for Machine Learning @ Georgia Tech (ML@GT). “This makes us a logical place for the development and deployment of machine learning — perhaps the hottest area of engineering today. We have the benefits of an established research infrastructure with a tradition of close collaboration among all Tech colleges and schools. This tradition extends to our numerous industry partnerships as well. Also, we have terrific, capable people. Our faculty, students, and staff enjoy solving complicated problems — and they’re very good at it.”
All analytical tools share a common purpose: they extract meaningful information from large sets of data to inform decision making. What sets machine learning apart is its predictive accuracy. Unlike standard computation, where a computer follows explicit instructions from a human programmer to perform a defined task, machine-learning algorithms are trained to look for and remember patterns in data that are relevant to a specified task, which is expressed as a mathematical model. When provided new data — algorithms provide insights, outcomes, and “what-if” scenarios, but they cannot create new data — the algorithms act on their previously gained knowledge, their experience, to solve other problems or perform tasks by adjusting the model outcomes accordingly. For example: Show a machine-learning algorithm a thousand pictures of dogs, and it will learn the characteristics of dogs and can then pick them out from a gallery of animal photos.
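The train-then-predict loop described above can be sketched with a toy nearest-centroid classifier. The feature vectors, labels, and numbers below are entirely synthetic stand-ins for image features, a minimal illustration rather than any real image-recognition system:

```python
# Toy illustration of supervised learning: a nearest-centroid "classifier"
# trained on labeled feature vectors (synthetic stand-ins for image features).

def train(examples):
    """Average the feature vectors of each label into a centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest in squared distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Two made-up features per example, e.g. (ear length, snout length).
training_data = [
    ([0.9, 0.8], "dog"), ([1.0, 0.9], "dog"),
    ([0.2, 0.1], "cat"), ([0.1, 0.2], "cat"),
]
model = train(training_data)
print(predict(model, [0.95, 0.85]))  # a dog-like feature vector
```

With enough labeled examples, the centroids summarize the "characteristics" of each class, which is the essence of the pattern-learning idea, albeit in drastically simplified form.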
Machine learning is not new. It’s the technology that powers search engines as well as recommendation systems used by Facebook, Amazon, Netflix, eHarmony, and thousands of other sites.
What’s new is the rapidly increasing number and scope of applications for machine learning, boosted by the availability of tremendous amounts of data, cheap data storage, advances in high-performance computing, and the development of increasingly sophisticated machine-learning algorithms.
Applications run a diverse gamut from supply chains and logistics to computer vision and object recognition; from autonomous vehicles and natural language processing to health data analysis and manufacturing.
From an industrial engineer’s perspective, machine learning is a powerful way to automate the optimization process in large, complex systems involving petabytes of data — a scale too large to be handled by traditional computation.
At ISyE, machine-learning techniques go hand-in-hand with the school’s traditional research and education mission that emphasizes the development of methodologies to solve real-world problems.
“We’re interested in developing and devising machine-learning and optimization methodologies to see how they interact with each other and whether they can be made to interact in a very integrated way,” said Pokutta. “Then deploy these technologies or methods in real-world applications in a variety of different domains.”
While machine learning is concerned with studying data to obtain new insights, “you still have to do something with those insights,” he continued, “and that’s where the optimization part comes into play, because you have to make those insights actionable by turning them into decisions — that’s when the connection between machine learning and optimization becomes apparent.”
ISyE faculty are involved with machine learning in a number of key areas including:
Supply Chains and Logistics
When it comes to optimizing supply chains regarding production, supply, product deployment, distribution, and delivery, machine-learning research and development at ISyE are having a profound impact in two ways. One, they automate routine supply chain decisions, enhancing speed and efficiency. And two, machine-learning algorithms provide human decision makers with predictive analysis and guidance for the choices they make in response to changing circumstances. In effect, machine learning is making supply chains smarter, which dovetails with the emerging concept of the physical internet.
The physical internet takes some of the basic characteristics of the information internet — open access, standardization, interconnectedness, digitization, speed — and applies them to the operation of supply chains and logistics, said Benoit Montreuil, Coca-Cola Material Handling and Distribution Chair, Professor, and Director and founder of the Physical Internet Center.
“In this context,” he continued, “we’re taking advantage of the physical internet’s digital aspect to hyperconnect all facets of a given business, and then using machine learning to understand the business more deeply. We have a project right now with an industry partner where we’re using machine-learning techniques to model product availability through their autonomous dealer networks, their factory, and warehouses in order to help them make better decisions.”
Montreuil anticipates that machine-learning algorithms will eventually automate a significant portion of product movement through the adoption of smart, standardized modular cargo containers — a key component of the physical internet. Fitted with sensors and two-way communication with the shipper through a wireless computer network, containers will learn how to operate in a dynamic environment and be largely self-directing, he said.
“You will communicate to the container that it has to be in Chicago in 18 hours, let’s say, and it will work as autonomously as possible to get there, interacting with humans or more sophisticated logistics agents only as necessary, using machine learning as one of the reasoning mechanisms to make sure the modular container goes where it is supposed to go.”
Machine-learning algorithms and techniques being developed by ISyE researchers Chelsea White, Schneider National Chair in Transportation and Logistics; Alan Erera, Coca-Cola Professor; Mohit Singh, H. Milton Stewart Associate Professor; and He Wang, Assistant Professor, also support supply chain forecasting models that more accurately predict the impact of demand variables, and thereby help decision makers optimize their supply chain operations. Common influences on demand include new product introductions, seasons and holidays, consumer preferences and trends, product promotions, weather, and material shortages.
The work will enable companies to better manage inventory levels, improve delivery times, enhance customer satisfaction, and ultimately save money. Machine-learning analysis also provides management with a dynamic view and a high level of visibility of the core processes and elements of their supply chain.
Algorithms work with historical data as well as data drawn from contemporary sources such as social media, where the first inkling of important consumer trends often appears. In a method called online learning, data becomes available in sequential order, and decisions are made in real time. For example, when a company launches a new product, it can use online learning to track the sales data and adjust its pricing and inventory strategies for that product.
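As a minimal illustration of the online-learning idea (with made-up sales figures), an exponentially weighted average can update a demand estimate as each day's observation arrives, so that pricing and inventory decisions can react in real time:

```python
# Minimal sketch of online learning for demand forecasting: the estimate
# updates as each observation arrives, one at a time. Data is synthetic.

def online_forecast(sales_stream, alpha=0.3):
    """Yield an updated demand estimate after each new observation."""
    estimate = None
    for observed in sales_stream:
        if estimate is None:
            estimate = float(observed)
        else:
            # Blend the new observation into the running estimate.
            estimate = (1 - alpha) * estimate + alpha * observed
        yield estimate

daily_sales = [100, 120, 90, 150, 160, 155]
estimates = list(online_forecast(daily_sales))
print(estimates)
```

Real online-learning methods are far more sophisticated, but they share this shape: a decision-relevant quantity is revised sequentially, without waiting for the full dataset.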
Machine learning also facilitates a rapid response to unexpected disruptions to the supply chain, such as a natural disaster in an area that’s the source of a particular raw material, thereby enabling companies to seek alternatives.
“There’s a lot of ground work still to be done in machine learning as it relates to supply chains and logistics,” said Montreuil. “It’s a very pioneering domain and a fantastic time for researchers.”
The U.S. health care system is awash in data: electronic health records; claims data; medical procedure results such as EKG readings, lab test findings, and genetic analysis; and health-monitoring information from wearable devices.
This wealth of data is fueling innovative analytics-based research for improving health care in three main areas: efficiency (cost), effectiveness (outcomes), and public health, particularly in terms of equity in the system and ensuring that the needs of vulnerable populations are addressed. In addition, data analytics can make it possible to tailor medical treatment to the needs and characteristics of individual patients, said Julie Swann, Harold R. and Mary Anne Nash Professor and Co-director of the Interdisciplinary Research Center for Health & Humanitarian Systems, adding that “we’re still in the early stages of what is being called individualized or precision medicine.
“You have to have a large enough set of data so that you can start to draw conclusions from it,” she continued. “These datasets may be deep, very detailed, and could also be wide in the sense that you’re bringing together heterogeneous data types.”
While data is becoming increasingly available, it takes a lot of resources and advanced methodologies to fully harness its power, according to Swann.
Machine learning is one set of analytic techniques that’s being used to turn data into information and knowledge for application at either the policy level or patient population level.
Algorithms create a mathematical model specific to a particular area of inquiry, such as patient care at a particular hospital, that imparts a big picture understanding of the system and how certain decisions or actions affect the system as a whole. More important, the algorithms can predict the consequences of one decision versus another regarding whatever metric is being studied.
“Data can drive decision making,” she said. “For example, you can use data to inform what kinds of things insurance should be required to pay for or what interventions associated with a specific disease or illness are most effective at reducing the burden on the entire system.”
Machine learning and analytics can reach down to the clinician level in exceptional detail to look at, say, continuous patient monitoring in an intensive care unit. Researchers can learn what kinds of medical situations qualify for monitoring and for how long, what medical actions were taken related to the monitoring, and if a personalized treatment approach would be possible, based on certain patient characteristics.
“The modeling of systems that have tens of thousands of constraints or variables can be used to evaluate access to health care too,” said Nicoleta Serban, Coca-Cola Associate Professor with the ISyE Health Analytics research group. “For example, collecting health care utilization data involving millions of individuals for events such as hospitalizations can be used in estimating the cost savings of preventive care. Modeling to assess the occurrence of severe health outcomes, applied to data for millions of individuals, can be used to characterize what impacts health care utilization behaviors. Distributed computing is used to reduce the computational effort of such methods.”
In one problem-solving application of machine learning to health care, ISyE faculty are analyzing data from Geisinger, a hospital network in Pennsylvania, to help predict the risk of sepsis and septic shock in patients before they are admitted to the hospital. Sepsis and septic shock are the dominant causes of death in intensive care units in the U.S., accounting for up to three million cases yearly. The next step is to facilitate prevention measures by applying the analytical techniques developed by ISyE to real-time patient data at each of Geisinger’s various locations.
A machine-learning framework called DAMIP discovers gene signatures that can predict vaccine immunity and efficacy on an individual basis.
Developed by Professor Eva Lee and her colleagues from Emory and the Centers for Disease Control and Prevention (CDC), the work marks an important advance toward developing new and better vaccines to fight emerging infections, and toward improved monitoring of poor responses in people with weak immune systems.
DAMIP-implemented results for yellow fever demonstrated that a vaccine’s ability to immunize a patient could be successfully predicted with greater than 90 percent accuracy within a week after vaccination. Results for flu vaccine demonstrated DAMIP’s applicability to both live-attenuated and inactivated vaccines. Similar results in a malaria study enabled targeted delivery to individual patients.
Additional health care applications of machine learning by ISyE faculty include quantifying disparities in access to pediatric primary care, evaluating the impact of access to pediatric asthma care on severe health outcomes, projecting the impact of Medicaid expansion in Georgia on access to adult primary care, and estimating the cost savings on preventive dental care for young children.
Swann emphasized that health care research at ISyE is collaborative. “Many of us are affiliated with other entities on campus such as the Center for Health & Humanitarian Systems and the Health Analytics group. We also draw upon the expertise of statisticians, computer scientists, optimization experts, systems engineering experts, and others in the area of advanced analytics, and machine learning is one part of that work toward improved health care decision making.”
ISyE researchers at the Strategic Energy Institute (SEI) are helping the companies that produce and distribute electricity maintain their capital assets and provide uninterrupted service to their customers. These assets — transformers, turbines, generators — have been fitted with thousands of sensors that continuously stream performance data in real time to central monitoring centers scattered around the country. At the centers, data is analyzed for anomalies or abnormal behavior, which would trigger repair or maintenance actions to avoid a catastrophic system failure.
“You need really efficient analytical tools to analyze this data,” said Nagi Gebraeel, Georgia Power Associate Professor and Associate Director of SEI, “and the underlying basis of these tools is machine learning.”
Going a step further, machine-learning algorithms can predict the risk of asset failure over time. This knowledge allows utilities to optimize operations in terms of pricing, customer demand, and energy production, while maximizing their investment in multi-million-dollar assets, many of which are operating well beyond their design lifetime. For instance, data may show that derating the load on a particular generator by a specific amount would extend its life by a certain percentage.
The machine-learning algorithms developed for optimizing the power grid are applicable to any situation where abundant sensor data can support trend analysis, such as monitoring the performance of jet engines, locomotives, diesel generators on ships, or the engines of a truck fleet.
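As a simple illustration of the kind of sensor-stream anomaly detection described above, a rolling z-score flags readings that deviate sharply from recent history. The window, threshold, and temperature trace below are invented for the sketch, not drawn from any actual utility deployment:

```python
# Hedged sketch of anomaly detection on a sensor stream via rolling z-score.
from collections import deque
import math

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((r - mean) ** 2 for r in recent) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        recent.append(x)
    return anomalies

# A steady (made-up) transformer temperature trace with one sudden spike.
trace = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 95.0, 70.2]
print(detect_anomalies(trace))  # flags the spike at index 6
```

Production systems learn far richer models of "normal" behavior, but the core pattern is the same: compare each new reading against what the recent data predicts, and trigger maintenance when the deviation is too large.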
Interactive Optimization and Learning
At ISyE’s Laboratory for Interactive Optimization and Learning (IOL), research takes place at the “intersection of optimization and machine learning,” according to Pokutta.
Working across academic boundaries, IOL researchers have completed more than 20 projects so far in areas including logistics and supply chain management, manufacturing, predictive analytics, big data, digital services, energy, transportation, and medical and health care systems.
In addition, IOL is involved in a number of significant activities designed to advance basic science and drive innovation.
One project on the medical diagnosis side involves digital scanning technology that uses 3-D cameras and Kinect sensors to produce 3-D models of individuals. Machine-learning and optimization techniques look for certain anomalies in the model scan data, and a report is generated.
At present, the technology could serve two potential applications. One is to screen pregnant women for cephalopelvic disproportion (CPD), a condition in which the baby’s head or body is too large to pass through the mother’s pelvis. It is a preventable contributor to infant and maternal mortality in developing countries where neither ultrasound testing nor Cesarean delivery is available in rural areas. Women determined to be at high risk for CPD could then be referred to urban clinics for medically supervised labor or a Cesarean procedure.
Another potential diagnostic application of the ISyE invention is for the early detection of lymphedema, a severe, permanent swelling of an arm or leg that often follows surgery, chemotherapy, or radiation for breast cancer. Lymphedema is caused by a fluid buildup in lymph nodes damaged by the cancer treatment and afflicts an estimated four million people in the U.S.
The scanning technology could support a low-cost diagnostic device at home to detect the first signs of lymphedema by tracking patients’ limb-fluid volumes over time. This early warning would give patients enough time to begin taking preventive measures that could thwart the onset of disease.
ISyE’s System Informatics and Control (SIAC) group contributes to machine-learning capabilities by providing a new scientific base for the design, analysis, and control of complex manufacturing and service systems, anchored to the effective and seamless integration of physical and analytical models with empirical data-driven methodologies, according to Jianjun Shi, Carolyn J. Stewart Chair and Professor.
In practical terms, SIAC faculty develop quantitative models unified with data extraction and engineering knowledge integration capabilities, and then deploy these models in the analysis and control of complex manufacturing and service systems.
SIAC’s research involves faculty with complementary backgrounds in manufacturing and service systems, quality and reliability engineering, diagnostics and prognostics, industrial statistics and data mining, and automation and control.
Shi’s research centers on monitoring, diagnosis, and control of manufacturing systems. “I am working on multiple data fusion for abnormality detection in the semiconductor manufacturing process,” said Shi, who studies the massive amounts of data continuously streamed from hundreds of sensors embedded into the manufacturing equipment. “The challenges are how to extract useful information from the data, learn the system’s behavior, and improve its performance.”
The analytical issues are complicated by the sensor data’s high dimensionality, variety, and velocity, as well as its intricate spatial and temporal structures.
ISyE faculty address these challenges by developing scalable and agile machine-learning techniques that provide effective modeling and analysis of multi-sensor data streams, allowing researchers to extract essential information for manufacturing improvement.
In addition to real-time monitoring and fault diagnosis and control, machine learning facilitates online product inspection and can predict potential failures in the manufacturing process, thereby allowing time for planning corrective and preventive actions.
In related research, Assistant Professor Yao Xie focuses on detecting changes in massive data streams — changes that usually signal anomalies or novel events — as quickly as possible, and then analyzing them in real time.
She develops real-time change-point detection algorithms, based on statistical and optimization theory, for high-dimensional streaming data that are typically dynamic, corrupted, and incomplete.
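A classical baseline for this kind of change-point detection is the one-sided CUSUM statistic, which accumulates evidence of a shift in the stream's mean and raises an alarm as soon as the evidence crosses a threshold. The sketch below is a textbook method with illustrative parameters, not Professor Xie's own algorithms:

```python
# Classical one-sided CUSUM sketch for detecting an upward mean shift
# in a data stream. Drift and threshold values are illustrative.

def cusum_detect(stream, target, drift=0.5, threshold=4.0):
    """Return the index at which an upward shift is declared, or None."""
    s = 0.0
    for i, x in enumerate(stream):
        # Accumulate evidence of a shift beyond the allowed drift.
        s = max(0.0, s + (x - target - drift))
        if s > threshold:
            return i
    return None

# Synthetic stream whose mean shifts from ~0 to ~3 at index 10.
stream = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0,
          3.1, 2.9, 3.2, 3.0, 2.8]
print(cusum_detect(stream, target=0.0))
```

The gap between the true change (index 10) and the alarm (a step or two later) is exactly the detection-delay trade-off that modern change-point research works to minimize, especially in high dimensions.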
Xie has solved problems including seismic network data processing, social network event detection, environmental monitoring, and power network monitoring. She is currently working on accelerating the processing of the massive amount of sequential data generated from material science for the Materials Genome Initiative.
Professor Xiaoming Huo is developing fast algorithms to build predictive models via distributed inference, in which the information is scattered among various locations and cannot be transported to a centralized database. The algorithms must be communication efficient, requiring only a minimal amount of communication between data locations.
A practical example is a company such as Walmart, which houses transactional data at its thousands of stores. Huo’s algorithms would allow the retailer to utilize this distributed data to create predictive models for logistics purposes.
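One simple, communication-efficient scheme in this spirit is one-shot averaging: each location fits a model on its local data and ships only the fitted coefficients to the center, which averages them. The store data below is synthetic, and the scheme illustrates the general idea rather than Professor Huo's actual algorithms:

```python
# Sketch of one-shot, communication-efficient distributed estimation:
# only two numbers per location cross the network, never the raw data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x on one location's data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # (intercept, slope)

def aggregate(local_fits):
    """Average the locally fitted coefficients at the center."""
    n = len(local_fits)
    return (sum(a for a, _ in local_fits) / n,
            sum(b for _, b in local_fits) / n)

# Each store: (promotion spend, units sold), drawn from the line y = 10 + 2x.
store1 = ([1, 2, 3], [12, 14, 16])
store2 = ([2, 4, 6], [14, 18, 22])
a, b = aggregate([fit_line(*store1), fit_line(*store2)])
print(a, b)
```

Each store communicates two coefficients instead of its full transaction log, which is what "communication efficient" means in practice.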
Another approach to utilizing distributed data may be termed “physics-based and data-driven analytics,” a computational technique useful for some engineering and science applications, said Jeff Wu, Coca-Cola Chair in Engineering Statistics and Professor. Here, data is derived from physics models and then subjected to statistical analysis.
Wu cited the example of designing the next generation of rockets for the U.S. Air Force. The physics, typically described or modeled by partial differential equations, must be understood first before a physical model can be built. The model is then refined by using a statistical analytic approach on large or small data.
Theory of Machine Learning and Optimization
Over the past decade or so, machine learning has become perhaps the most “intelligent customer” of convex optimization, and the major outer driving force influencing the development of convex optimization, according to Arkadi Nemirovski, John Hunter Chair and Professor.
Numerous mathematical models arising in machine learning are of an optimization nature, which is why optimization algorithms form a significant part of the machine-learning toolbox, he pointed out. Typically, optimization problems of machine-learning origin are extremely large-scale. Their numerical processing requires the most advanced optimization techniques and is possible primarily when the problems are convex and well-structured.
Convexity as it pertains to mathematical optimization is a term denoting problems in which local information can be used to determine key global characteristics of a problem. This “local implying global” feature is also true of linear programming, the older, traditional programming technique that allows the computation of optimal decisions efficiently, assuming that the world is linear. Linear problems are also convex problems, with applications that include how to allocate time on a communications satellite among competing users, or for studying the relationship between traffic delay times and the number of cars on the road.
But not every problem is linear. Supply chain efficiency, for instance, is not a linear function of resources available. Convex optimization models provide a way of more accurately representing problems that account for real-world uncertainty.
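A toy linear program of the resource-allocation flavor mentioned above can be solved by checking the vertices of the feasible region, since an optimum of a linear program is always attained at a vertex. The numbers are illustrative:

```python
# Toy LP: allocate effort between two activities to maximize 3x + 5y
# subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# For two variables, the feasible region is a polygon and the optimum
# sits at one of its vertices, so we can simply enumerate them.

def feasible(x, y):
    eps = 1e-9
    return (x >= -eps and y >= -eps
            and x + y <= 4 + eps and x + 3 * y <= 6 + eps)

# Vertices of the feasible polygon (intersections of constraint lines).
vertices = [(0, 0), (4, 0), (0, 2), (3, 1)]  # (3,1): x+y=4 meets x+3y=6
best = max((v for v in vertices if feasible(*v)),
           key=lambda v: 3 * v[0] + 5 * v[1])
print(best)
```

Real solvers use the simplex method or interior-point methods rather than enumeration, which is hopeless at scale, but the "local implying global" convexity property is what guarantees those methods find the true optimum.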
Associate Professor Santanu Dey offered an example of the use of convex optimization in design of electrical power systems dispatch algorithms, a joint project with faculty colleague Assistant Professor Andy Sun and graduate student Burak Kocuk.
“Among other things, our results indicate that older generation algorithms that use a linear approximation of physical laws such as Ohm’s law and Kirchhoff’s current law produce inferior performance in comparison to our new methods using the more powerful general convex optimization methods.”
Convex problems form a “solvable case” in optimization, Nemirovski continued. “These are the problems for which high-accuracy approximations to optimal solutions can be found in an efficient fashion.”
The close connection between optimization and machine learning goes beyond the fact that optimization forms the computational building blocks for the majority of machine-learning methods. Results from the optimization field have also been used to analyze data efficiency in machine learning. As an example, Assistant Professor Huan Xu works in the intersection between robust optimization (an optimization paradigm to address uncertainty) and machine learning. He has shown that some popular machine-learning algorithms are implicitly solving robust optimization formulations, and this robustness provides a unified tool that establishes favorable statistical properties of these algorithms. In effect, learning happens because uncertainty is carefully addressed.
Nemirovski, along with Professor Alexander Shapiro, Associate Professor George Lan, and Professor Anatoli Juditsky from Université Joseph Fourier, plays a leading role in the design of efficient stochastic optimization methods, which are at the core of modern machine-learning approaches.
These methods can make progress by only using limited information. In fact, they can often find solutions of acceptable quality without the need to observe the entire dataset. Consequently, they are the focus of much interest and research in computer science, industrial engineering, and other research communities interested in big data.
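Stochastic gradient descent is the canonical example of such a method: it improves a model using one randomly drawn data point per step, never touching the full dataset at once. The line-fitting data below is synthetic and noiseless, chosen so the sketch stays short:

```python
# Stochastic gradient descent fitting y = a + b*x from single samples.
import random

def sgd_fit(data, steps=5000, lr=0.01, seed=0):
    """Fit y ~ a + b*x by SGD on single points drawn from `data`."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    for _ in range(steps):
        x, y = rng.choice(data)   # limited information: one point per step
        err = (a + b * x) - y
        a -= lr * err             # gradient step on the intercept
        b -= lr * err * x         # gradient step on the slope
    return a, b

data = [(x, 1.0 + 2.0 * x) for x in range(10)]  # points on y = 1 + 2x
a, b = sgd_fit(data)
print(a, b)
```

Each step costs the same whether the dataset has ten points or ten billion, which is why stochastic methods dominate big-data settings.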
Many distributed, large-scale optimization problems involve translating large data sets into effective actions, a research interest of George Nemhauser, A. Russell Chandler III Chair and Institute Professor, and Shabbir Ahmed, Dean’s Professor and Stewart Faculty Fellow. Ahmed illustrated with the example of coordinating the movement of autonomous vehicles in a network. The vehicles could be school buses or a fleet of trucks delivering loads from many sources to several destinations.
If each vehicle makes an independent decision as to which route it follows moving from A to B, some links within the network may become overly congested. Thus for smooth operation, the vehicles need to collect and learn from the traffic information in the network and adapt accordingly.
The information includes historical traffic data and real-time data from auxiliary sources such as road sensors and other autonomous vehicles on the road. This collective learning enables each vehicle to access a more accurate representation of the surrounding world.
“This logistics problem can be set up as a decentralized stochastic routing problem for which some of the approaches that ISyE faculty are working on can be adapted,” Ahmed noted.
Another example where machine learning would make decisions in an uncertain environment may be found in health care. Consider a cancer patient who receives a certain cancer-fighting drug every day. The problem is that doctors cannot know in advance the impact of any given drug on any given patient. If one is applying drug A, and the tumor is not growing but not shrinking either, should one switch to a different drug, with the hope that it might shrink the tumor? What is the optimal way to administer different drugs over time?
This particular optimization problem is known as the “multi-armed bandit” problem, referring to the choices facing someone playing several slot machines but who does not know the payoff of each arm in advance, and thus must decide which arms to play and when.
Similar trade-offs arise in many industries ranging from online advertising to logistics, in which one must decide how to allocate resources over time between different alternatives whose benefits and costs are uncertain.
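A standard illustration of this exploration-exploitation trade-off is the epsilon-greedy strategy: mostly play the arm with the best observed average payoff, but occasionally try a random arm in case a better one has been underestimated. The win rates below are made up:

```python
# Epsilon-greedy sketch of the multi-armed bandit problem.
import random

def epsilon_greedy(true_payoffs, rounds=10000, epsilon=0.1, seed=1):
    """Simulate pulls on Bernoulli arms; return per-arm pull counts."""
    rng = random.Random(seed)
    pulls = [0] * len(true_payoffs)
    wins = [0] * len(true_payoffs)
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in pulls:
            arm = rng.randrange(len(true_payoffs))        # explore
        else:
            arm = max(range(len(true_payoffs)),
                      key=lambda a: wins[a] / pulls[a])   # exploit
        pulls[arm] += 1
        wins[arm] += rng.random() < true_payoffs[arm]
    return pulls

# Three slot machines with hidden win rates; arm 2 is secretly best.
counts = epsilon_greedy([0.2, 0.5, 0.8])
print(counts)
```

Over many rounds the strategy concentrates its pulls on the best arm while still paying a small, controlled price for exploration; the optimal algorithms studied in the bandit literature manage this trade-off far more carefully.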
“Fundamental to such problems is the question of how to combine ideas from probability — due to the inherent uncertainty in the problem — with ideas from optimization, since one wants to select between alternatives as intelligently as possible,” said David Goldberg, A. Russell Chandler III Assistant Professor, whose study of probability and optimization is core to his expertise as well as to the entire field of operations research.
“One particular question I am studying is how to use ideas from probability theory to understand what an ‘optimal’ machine-learning algorithm will do when presented with trade-offs, how simpler heuristics compare to this optimal algorithm, and how to use these insights to design new algorithms and insights for machine learning.”
As the extraction of useful, actionable information from data, analytics undergirds everything at the ISyE school, which specializes in the development and application of cutting-edge analytics tools based on statistics, operations research, and optimization.
“We have a long history of being a world leader, way before ‘analytics’ became a buzzword,” said Joel Sokol, Associate Professor and Director of the Master of Science in Analytics degree. “Machine learning is currently one of the hottest analytics tools being applied to analyze large and complex data sets, and ISyE has several specialists in both machine-learning theory and its application in a variety of industries.”
One of Sokol’s research interests is sports analytics, which uses machine learning and other computational techniques for predictive and optimization tasks.
While sports teams — particularly baseball teams — have long sought guidance from statistics, the availability of massive datasets covering virtually every conceivable metric offers a far better basis for optimal decision making.
Analytics techniques may be applied to sports management operations, game strategy, player performance and draft choice evaluation, and can even help a coach determine the optimal lineup for any given opponent.
Sokol devised a mathematical model for predicting the outcome of NCAA Division I basketball tournament games. With data input consisting only of which two teams played, who held home-court advantage, and the margin of victory, the model has consistently outperformed standard ranking systems.
Machine Learning @ Georgia Tech
Underscoring the growing importance of machine-learning research across campus, ML@GT was launched this past summer and named one of Georgia Tech’s Interdisciplinary Research Centers. One of the focus areas of the center is to develop and study machine-learning processes and applications within or with close ties to the engineering disciplines. Pokutta, one of the Associate Directors of the Center, emphasizes that this is what makes the Tech center unique.
“While there are other machine-learning centers in the U.S., they are focused mainly on the interactions between statistics and computing,” he noted. “The distinguishing feature of our Center is its incorporation of the engineering component.”
In fact, the Center is an interdisciplinary effort involving all colleges and schools across campus. It is expected to serve as a nexus for collaborative machine-learning research as well as a one-stop resource for partnerships between Tech and industry.
“Machine learning will significantly impact the way we solve problems by making the process more dynamic and realistic,” Pokutta says. “ISyE definitely has a strong stake in the machine-learning center and also has high interest in interacting with it and making it work.”