2021 External Research Projects
Laura Albert (UW)
Dynamic Workflow Optimization and Planning for Insurance Applications
Machine learning (ML) tools that recognize patterns or predict claims following a storm have the potential to improve service and reduce costs in insurance industry settings, where calls for service are routed to claims agents after undergoing triage or assessment. However, these tools introduce complexity and new decision contexts. To fully achieve the benefits offered by ML-enabled tools, they must be operationalized in decision support systems for planning and real-time routing. This project takes a step forward in ML-enabled insurance settings by proposing a new approach for planning and real-time workflow routing using an optimization modeling framework. The research studies how to employ optimization methods based on stochastic programming and Markov decision processes to prescribe innovative, dynamic workflow routing decisions that build upon an ML foundation, balance the workload across claims agents, improve customer satisfaction, and remain cost-effective. The models and ideas can be applied to problems as diverse as disaster response, call center optimization, fire and emergency medical service vehicle dispatching, and last-mile delivery.
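As a rough illustration of the Markov decision process framing (a toy sketch, not the project's actual model), the snippet below casts routing each arriving claim to one of two agent pools as a small finite MDP solved by value iteration; the pool count, service rates, holding costs, and discount factor are all invented for the example.

```python
# Toy MDP sketch: route each arriving claim to one of two agent pools.
import itertools

Q = 10                      # max queue length per pool (truncated state space)
serve = [0.6, 0.4]          # per-step service probability of each pool
hold_cost = [1.0, 0.5]      # per-claim, per-step holding cost of each pool

states = list(itertools.product(range(Q + 1), repeat=2))
V = {s: 0.0 for s in states}

def step_value(s, a, V):
    """Expected cost-to-go after routing the arriving claim to pool a."""
    q = list(s)
    q[a] = min(q[a] + 1, Q)            # claim joins pool a's queue (truncated)
    cost = sum(h * n for h, n in zip(hold_cost, q))
    ev = 0.0                           # expectation over service completions
    for d0 in (0, 1):
        for d1 in (0, 1):
            p = (serve[0] if d0 else 1 - serve[0]) * \
                (serve[1] if d1 else 1 - serve[1])
            nxt = (max(q[0] - d0, 0), max(q[1] - d1, 0))
            ev += p * V[nxt]
    return cost + 0.95 * ev            # discount factor 0.95

for _ in range(200):                   # value iteration to near-convergence
    V = {s: min(step_value(s, a, V) for a in (0, 1)) for s in states}

policy = {s: min((0, 1), key=lambda a: step_value(s, a, V)) for s in states}
print(policy[(3, 3)])                  # which pool gets the claim at state (3, 3)
```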
Kaiping Chen (UW)
Reducing Bias in Human-AI Conversation
As intelligent assistants become an inseparable part of our daily lives (e.g., chatbots in financial services, smart healthcare), the need for fair AI is increasingly critical. Unfortunately, AI models currently deployed in the real world may produce responses biased toward dominant groups while marginalizing underrepresented populations’ needs. This project will use a novel framework and a series of methods to mitigate inequality in AI decision-making and empower underrepresented groups’ voices by reducing unfairness in algorithmic responses. Our research activities consist of two aims: 1) curate a large-scale language dataset (through online crowdsourcing) to examine the ways in which off-the-shelf intelligent chatbots (e.g., Google Home) bias their responses toward dominant groups in our society, and to understand what conversations underrepresented populations want to have with a machine they perceive as trustworthy and useful; and 2) use the dataset built under aim 1 to develop an effective optimization method that reduces the response quality gap between dominant and minority groups.
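A minimal sketch of the kind of audit metric aim 1 implies, assuming crowd-collected quality ratings of chatbot responses tagged by group; the column names and numbers are hypothetical.

```python
# Hypothetical audit: compare rated response quality across groups.
import pandas as pd

ratings = pd.DataFrame({
    "group":   ["dominant", "dominant", "minoritized", "minoritized"],
    "quality": [4.5, 4.0, 3.0, 2.5],   # e.g., crowd-rated helpfulness, 1-5
})

per_group = ratings.groupby("group")["quality"].mean()
gap = per_group["dominant"] - per_group["minoritized"]
print(per_group, f"\nresponse quality gap: {gap:.2f}")
```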
Min Chen (UW)
Facilitating Wildfire Insurance Business with Big Data and Machine Learning
The recent wildfires across the Western U.S. have caused severe environmental hazards and economic losses, and have created an insurance crisis across several states. Unfortunately, accurately predicting wildfire risk is challenging for both physics-based and empirical models. Physics-based models are often too computationally demanding for rapid, widespread assessments and usually have low accuracy. Empirical approaches, especially machine learning models, are widely used for predicting wildfire occurrence but have important limitations of their own: existing approaches ignore historical legacies and struggle to incorporate physical principles or constraints, which limits their accuracy and interpretability. This project aims to prototype a near-real-time wildfire prediction framework that will improve prediction of both wildfire probability and severity on daily-to-weekly through monthly-to-seasonal scales. The framework will be designed around a cutting-edge machine learning model, an attention-augmented sequence prediction model, combined with a wide range of climate and satellite remote sensing datasets as well as fire occurrence data, to facilitate more accurate wildfire prediction.
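As a hedged sketch of the general model family named here (an attention-based sequence predictor; the project's actual architecture, features, and horizons may differ), the following PyTorch snippet applies self-attention over a weekly weather sequence to emit a fire probability and a severity estimate.

```python
# Illustrative attention-based sequence model; all dimensions are assumptions.
import torch
import torch.nn as nn

class FireRiskAttention(nn.Module):
    def __init__(self, n_features=8, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)        # fire probability + severity

    def forward(self, x):                        # x: (batch, weeks, n_features)
        h = self.embed(x)
        h, _ = self.attn(h, h, h)                # self-attention over time steps
        out = self.head(h[:, -1])                # predict from the last step
        return torch.sigmoid(out[:, 0]), out[:, 1]  # probability, severity

model = FireRiskAttention()
weather = torch.randn(16, 26, 8)                 # 16 cells, 26 weeks, 8 covariates
prob, severity = model(weather)
print(prob.shape, severity.shape)                # torch.Size([16]) torch.Size([16])
```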
Sharon Li (UW)
Safe and Reliable Machine Learning through Out-of-distribution Detection
As machine learning reaches society at large, it must provide strong safety and reliability guarantees. Current machine learning models commonly make the closed-world assumption, i.e., that the training and test data distributions are identical. Unfortunately, in reality, a deployed machine learning model may fail to recognize anomalous out-of-distribution (OOD) data, i.e., input that simply does not follow the assumed data distribution. In the insurance industry, such unexpected data can be a fraudulent transaction or claim submitted to the system. Ideally, the machine learning model should (i) be able to detect such unexpected data, and (ii) not blindly trust predictions based on analyzing only normal and genuine claims. This gives rise to the importance of out-of-distribution detection, which can automate the process of flagging anomalous data and reduce the labor of manual reviews. This project tackles this fundamental problem in machine learning, with the goals of (i) developing an algorithmic framework that can automatically detect and mitigate unexpected OOD data, (ii) advancing the state of the art of OOD detection with strong safety guarantees, and (iii) disseminating the power of OOD detection through collaboration with American Family Insurance. The OOD algorithms developed through this research may be applied in claim fraud detection and many other risk scenarios.
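One concrete instance of score-based OOD detection from this research area is the energy score, the negative log-sum-exp of a classifier's logits: inputs with unusually high energy are flagged for review. The classifier logits and threshold below are placeholders, not the project's actual system.

```python
# Energy-score OOD flagging on placeholder classifier logits.
import torch

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    """Lower energy suggests in-distribution; higher suggests likely OOD."""
    return -torch.logsumexp(logits, dim=-1)

logits = torch.randn(5, 10)      # stand-in for a claim classifier's logits
threshold = 0.0                  # in practice, calibrated on held-out ID data
flag_ood = energy_score(logits) > threshold
print(flag_ood)                  # claims flagged for manual review
```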
Kevin Ponto (UW)
Developing Novel Mixed Reality Tools for Consumer Insurance Documentation
Documentation and assessment of personal property after accidents or disasters is a major component of the claims process between an insurance company and its customer. One way to increase efficiency in this process is to let customers document events through photographs on digital platforms; the challenge is helping customers capture the views and perspectives needed to record the pertinent information. Recent advances in depth sensors have created new possibilities for capturing real-world objects in 3D. This project will develop methods that combine tracking data from AR tracking algorithms with color and depth sensor information to create mixed-reality models. The application will provide an interactive visualization that guides users through the capture process, showing what has and has not yet been captured. The result of the scanning process will be a 3D model along with standard 2D photographs, providing new methods for insurance-related documentation.
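The geometric core of the fusion step described above can be sketched compactly: given camera intrinsics and the AR tracker's camera-to-world pose, each depth pixel back-projects to a 3D world point. The intrinsics and pose values below are illustrative, not values from the project.

```python
# Back-project a depth pixel into world coordinates (illustrative numbers).
import numpy as np

fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0   # pinhole intrinsics (pixels)

def pixel_to_world(u, v, depth, pose):
    """pose: 4x4 camera-to-world transform from the AR tracker."""
    p_cam = np.array([(u - cx) * depth / fx,   # pinhole back-projection
                      (v - cy) * depth / fy,
                      depth, 1.0])
    return (pose @ p_cam)[:3]

pose = np.eye(4); pose[:3, 3] = [0.0, 0.0, 1.5]  # camera 1.5 m above the origin
print(pixel_to_world(400, 300, depth=2.0, pose=pose))
```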
Vikas Singh (UW)
Lightweight Self-Attention for Detection and Image Classification
Modern machine learning and computer vision methods that drive applications ranging from voice recognition and search completion to image recognition make use of a model commonly known as a "Transformer", whose parameters are estimated via "training" on large amounts of data. These models have hundreds of millions of parameters, and training on large datasets can take many weeks or even months on specialized hardware. The overarching goal of this project is to design approximation strategies that enable efficient training of such models, for potential use in natural language processing and object recognition for images. This work will build upon previous success (and lessons learned) over the last year, in which, for large-scale document analysis tasks, we obtained significant improvements over state-of-the-art alternatives. The work will expand the scope of these ideas to tackle important problems in image understanding and natural language processing, within a productive and longstanding collaboration with researchers at American Family Insurance.
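The project's specific approximation is not spelled out here, but the flavor of such strategies can be shown with linearized attention: replacing the O(n²) softmax score matrix with a kernel feature map (the elu+1 map from the "linear transformer" literature, used below as an illustrative stand-in) reduces self-attention to O(n) in sequence length.

```python
# Illustrative linear attention: no n x n score matrix is ever formed.
import torch

def linear_attention(Q, K, V, eps=1e-6):
    phi = lambda x: torch.nn.functional.elu(x) + 1       # positive feature map
    Qp, Kp = phi(Q), phi(K)                              # (batch, n, d)
    KV = torch.einsum("bnd,bne->bde", Kp, V)             # d x e summary, O(n)
    Z = 1.0 / (torch.einsum("bnd,bd->bn", Qp, Kp.sum(1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", Qp, KV, Z)    # normalized output

Q = K = V = torch.randn(2, 1024, 64)    # 1024 tokens, no 1024 x 1024 matrix
print(linear_attention(Q, K, V).shape)  # torch.Size([2, 1024, 64])
```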
Shivaram Venkataraman (UW)
Data-Aware Model Recycling
Machine learning models power a number of applications ranging from risk assessment to recommendation engines. Data scientists spend considerable time training and fine-tuning the machine learning models used in such applications. Further, these models need to be updated as new data is collected, a common scenario in many enterprise settings. This research develops software tools that can automate and accelerate both the training and fine-tuning process by intelligently reusing past computations. A framework will be built that automatically captures the characteristics of models trained and datasets used in the past. When a data scientist then begins training a new model, the framework can quickly retrieve the most relevant model trained in the past. By reusing past computations in this fashion, the amount of time and resources spent on fine-tuning new models can be reduced.
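A minimal sketch of the retrieval idea, under assumptions of our own (a cheap per-feature statistical signature and a nearest-neighbor lookup; the project's actual characterization of models and datasets may differ):

```python
# Toy model registry: index past models by a dataset signature, retrieve
# the closest match to warm-start fine-tuning. Paths are hypothetical.
import numpy as np

registry = []   # list of (signature, model_path) for previously trained models

def dataset_signature(X: np.ndarray) -> np.ndarray:
    """Cheap characteristics: per-feature mean and std, concatenated."""
    return np.concatenate([X.mean(axis=0), X.std(axis=0)])

def register(X, model_path):
    registry.append((dataset_signature(X), model_path))

def most_relevant(X_new):
    sig = dataset_signature(X_new)
    return min(registry, key=lambda e: np.linalg.norm(e[0] - sig))[1]

register(np.random.randn(1000, 4), "models/risk_v1.pt")
register(np.random.randn(1000, 4) + 3.0, "models/risk_v2.pt")
print(most_relevant(np.random.randn(200, 4)))   # likely "models/risk_v1.pt"
```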
Ramya Korlakai Vinayak (UW)
Query Design for Crowdsourced Clustering: Efficiency vs. Noise Trade-off
Crowdsourcing is one of the most popular ways of collecting labeled data for supervised learning. However, obtaining granular labels, e.g., species of birds, from a non-expert crowd can be very difficult. We consider leveraging simpler comparison tasks followed by clustering. Crowdsourced clustering refers to the task of clustering a set of items using answers from non-expert crowd workers who can each cluster a small subset of items, for example, by answering whether two items i and j are from the same cluster. Since the workers are not experts, they provide noisy answers. Further, due to budget constraints, not all possible comparisons can be made. Ideally, one would want to collect as much information as possible, of as high a quality as possible, under a given budget. While there has been much work on collecting crowdsourced data for various tasks, as well as work on algorithms to denoise specific types of collected data, our understanding of how the limits on humans' ability to learn and retain new concepts affect the quality of crowd workers' answers remains limited. This project aims to fill this gap for the crowdsourced clustering task (i) by systematically studying how the noise in crowd workers' answers relates to cognitive load (the number of items compared per question) and contextual bias (the cluster membership of the items involved in a query), and (ii) by proposing new error models that capture these effects, along with clustering algorithms that leverage them.
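A small simulation makes the query setup concrete: workers answer noisy same-cluster queries, answers are aggregated by majority vote, and clusters are recovered as connected components over the accepted edges. The uniform-flip error model below is a placeholder for the cognitively grounded error models the project proposes.

```python
# Simulated noisy same-cluster queries with majority-vote aggregation.
import itertools
import random

truth = {0: "a", 1: "a", 2: "b", 3: "b"}        # hidden cluster memberships
p_flip, n_workers = 0.2, 9

def noisy_answer(i, j):
    same = truth[i] == truth[j]
    return same ^ (random.random() < p_flip)     # worker errs with prob p_flip

edges = set()
for i, j in itertools.combinations(truth, 2):
    votes = sum(noisy_answer(i, j) for _ in range(n_workers))
    if votes > n_workers / 2:                    # majority says "same cluster"
        edges.add((i, j))

# Recover clusters as connected components over the majority-vote edges.
clusters = {i: {i} for i in truth}
for i, j in edges:
    merged = clusters[i] | clusters[j]
    for k in merged:
        clusters[k] = merged
print({frozenset(c) for c in clusters.values()})
```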
Jerry Zhu (UW)
Fast Machine Learning with Rich Human-Machine Interactions
Machine learning is training-data hungry: it requires many training examples to learn a good model, and these can take a long time for human annotators to prepare. While there are methods to speed up training, most notably active learning, they still require a nontrivial amount of human annotation. This poses a hurdle for organizations like American Family Insurance that need the agility to frequently build new machine learning models. The goal of this project is to design new interactive training methods that are theoretically guaranteed to outperform active learning. Phase 1 of the project is ongoing and has produced a novel Minimum Contrast Projection (MCP) interaction protocol that lets human annotators provide richer information to the learner. Preliminary empirical results show that MCP is faster than active learning, and theoretical analysis establishes a novel PAC guarantee for MCP. Phase 2 of this research will significantly expand the interactions between humans and machines. Instead of viewing interactive machine learning merely as a matter of user interface design, the learning-theoretic implications of different protocols will be analyzed. Weaknesses in existing interactive machine learning methods will then be identified in order to create stronger human-machine interaction protocols; MCP is one example of such success. The expected product of this project is a set of novel interactive machine learning methods that are guaranteed to be better than active learning.
2020 External Research Projects
Colin Dewey (UW)
Machine Learning Approaches for Metadata Standardization
This project will develop methods for automating metadata standardization in large heterogeneous datasets. The methods will be based on framing the problem as a machine learning task and on the use of controlled vocabularies from community-curated ontologies. A state-of-the-art natural language processing model will be used to address the machine learning task. To train the model with minimal manual effort, active learning algorithms will be adopted and developed, allowing the machine learning system to select the records, and the features of those records, for which it would most benefit from human expert input.
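A minimal sketch of the active-learning loop described above (least-confident uncertainty sampling; the model, features, and labels are synthetic stand-ins for the metadata task):

```python
# Uncertainty-sampling active learning on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 16))              # stand-in metadata features
y_pool = (X_pool[:, 0] > 0).astype(int)          # hidden "true" ontology label

# Seed set with both classes represented, then query least-confident records.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
for _ in range(5):                               # five rounds of expert queries
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    uncertainty = 1 - clf.predict_proba(X_pool).max(axis=1)
    uncertainty[labeled] = -1                    # never re-query labeled records
    query = int(np.argmax(uncertainty))
    labeled.append(query)                        # the expert labels this record
print(f"{len(labeled)} labels, pool accuracy {clf.score(X_pool, y_pool):.2f}")
```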
David Noyce (UW)
Improving Traffic Safety Outcomes Through Data Science Methodologies
The project's research vision translates advances in automotive technology, big data, and data science into tools that will improve driver safety and bolster the safety performance of technology. Toward this goal, the objective of this project is to conduct collaborative data science research based on traffic safety data integration and the application of data science tools to 1) develop algorithms that incentivize positive driver behavior, and 2) quantify the safety benefits of advanced driver assistance systems (ADAS), automation in vehicles, and other driver safety factors.
Irene Ong (UW)
Scalable Causal Modeling for Understanding Customer Behavior
Understanding the behavior of customers is crucial to success in any business, but typical customer research, such as surveys, interviews, and A/B testing, can be labor-intensive, costly, and subject to adversarial responses. Analogously, our goal in a healthcare setting is to understand the effects of actions taken by our customers in order to realize the promise of personalized medicine. Personalized medicine is becoming more feasible due to the increasing maturity and ubiquity of electronic health record (EHR) systems. However, bringing personalized medicine into effective practice depends on detailed and accurate knowledge of the causal systems underlying human health, systems that can involve thousands of variables.
Jeff Linderoth (UW)
Integer Programming for Mixture Matrix Completion
Completing a data matrix is one of the most fundamental problems in data science. In this work, algorithms will be developed for solving the mixture matrix completion problem (MMCP), in which each column of the underlying data matrix comes from one of a collection of low-rank matrices. The MMCP generalizes most known matrix completion problems and has important applications in computer vision, recommender systems, data inference, and outlier detection. Key to the approach will be the development and application of advanced algorithmic techniques from integer programming, a powerful mathematical tool for solving optimization problems involving discrete choices. The work will pave the way toward the application of integer programming to a broad class of large-scale data science problems.
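One schematic way to write the discrete structure the abstract describes (our reading, not necessarily the authors' exact formulation): binary variables assign each column of the data matrix X to one of K low-rank factorizations, fitting the entries observed in the index set Ω.

```latex
% Schematic MMCP formulation: z_{jk} assigns column j to factorization U_k W_k.
\begin{aligned}
\min_{z,\,U,\,W}\quad & \sum_{(i,j)\in\Omega}\Bigl(X_{ij}-\sum_{k=1}^{K} z_{jk}\,[U_k W_k]_{ij}\Bigr)^{2} \\
\text{s.t.}\quad & \sum_{k=1}^{K} z_{jk}=1 \quad \forall j, \qquad z_{jk}\in\{0,1\}.
\end{aligned}
```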
Jerry Zhu (UW)
Ultra Fast Training for Novel Categories in Text Classification
Classic text classification assumes that the set of categories is fixed. In reality, new text categories often must be added due to business needs, and the text classifier must then be re-trained to incorporate them. However, because the business need is new, there is often little training data for the new categories. This lack of data poses great difficulty for the engineer who must quickly add the new categories to the classifier. This project aims to develop a novel workflow that allows ultra-fast addition of text categories to a classifier.
Jon Eckhardt (UW)
Using Data to Foster Entrepreneurship and Innovation in the Madison Ecosystem
The goal of this project is to continue to support the work of the Academic Entrepreneurship Study Team at the University of Wisconsin-Madison, currently funded by American Family Insurance. The team is using data analysis techniques to produce findings that enhance the impact and management of entrepreneurship programs at UW-Madison while also producing publishable insights. In addition, insights from the data analysis funded by this proposal will be used to support additional grants aimed at creating evidence-based interventions that increase the prevalence and effectiveness of student entrepreneurship.
Joseph Austerweil (UW)
Question Asking with Differing Knowledge and Goals
A recent machine learning method addresses why people are better at answering questions by asking multiple reformulated versions of a human question, producing multiple answers, and learning to select the answer most likely to satisfy a person. However, this is done purely from data and does not incorporate psycholinguistic research demonstrating that people prefer simpler answers tailored to their personal goals and knowledge. This project incorporates psycholinguistic factors to improve automated question-answering methods. Each factor (and combination of factors) is tested using behavioral experiments. Not only does this improve these methods, it also tests psycholinguistic factors outside of laboratory studies, illuminating how people answer questions in real-world situations.
Kangwook Lee (UW)
Data Augmentation Across Manifolds for Improved Test Performance
While mixup algorithms have been shown to be useful for improving generalization on a wide class of tasks, they have a few critical limitations. Mixup sometimes degrades generalization, limiting its applicability in certain settings. Moreover, current mixup algorithms do not have any theoretical performance guarantees. To address these challenges, we aim to develop a computationally efficient GAN-based mixup algorithm. In particular, our key idea is to leverage a novel nonlinear, distribution-dependent data mixing method. We will also develop a theoretical framework for analyzing the performance of various mixup algorithms.
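For reference, the vanilla mixup baseline whose limitations this project targets blends random training pairs with a Beta-distributed weight; the proposed GAN-based, distribution-dependent variant is not shown here.

```python
# Vanilla mixup baseline: linear blends of random training pairs.
import torch

def mixup(x, y, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]     # linear blend of inputs
    return x_mix, y, y[perm], lam             # loss: lam*CE(y_a) + (1-lam)*CE(y_b)

x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
x_mix, y_a, y_b, lam = mixup(x, y)
print(x_mix.shape, float(lam))
```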
Kevin Ponto (UW)
Developing a Novel 3D Capture Based Automated Inventory System for Insurance Documentation
The overall goal of this project is to design and implement a system that utilizes 3D scanning and capture technology for automated documentation of scenes. This has the potential to reduce disputes between insurance companies and their clients, saving both parties money and time. As the use of 3D capture technology in this area is quite novel, and upcoming technological changes may open new directions of inquiry, the project aims to research and design an initial area of impact, an Automated Inventory System, to determine how progress in this research could lead to impactful projects in subsequent stages of research.
Michael Ferris (UW)
Adaptive Operations Research and Data Modeling for Insurance Applications
Optimization is a basic tool for both operations research and data science, and it facilitates the use of data streams to mitigate the effects of uncertainty. Since random events can occur at multiple time and spatial scales, we propose a new approach that separates strategic decision making from operational modeling but interconnects the models via a stochastic simulation engine that generates a collection of possible future events.
Michael Morgan (UW)
Extending American Family Insurance’s High-Resolution Weather Forecasting into a State-of-the-Science Probabilistic Regional Forecasting System
This project will develop an ensemble (weather) prediction system (EPS) that builds upon a prior project, which provides a high-resolution weather forecasting system run entirely in AmFam's cloud computing infrastructure. By producing many realizations of the same forecast from slightly varying initial conditions, the EPS will provide valuable estimates of forecast uncertainty. The ensemble forecast will provide advanced warning not only of hazards of interest in targeted regions at high resolution, but also of the uncertainty associated with the predictability of those hazards, which include hail, wind gusts, and hurricane impacts. A long-term database of forecasts can be used to further refine and improve uncertainty estimates through statistical post-processing.
Mirko Bronzi (Mila)
Automotive Insurance Claim Processing
This project aims to build a first proof of concept of a claim completion system based on text and images; to assess the quality of the claim dataset from a machine learning perspective; to develop algorithms and models that build meaningful representations of the information contained in claims (text and images); and to demonstrate the usefulness of these representations for claim completion, verification, and retrieval.
Rebecca Willett (UChicago)
Machine Learning for Usage-Based Insurance: Privacy Protections and Anomaly Detection
The goal of usage-based insurance (UBI) is to assess a driver’s risk and set pricing based on measurements of their behavior while driving. These measurements can be acquired from sensors such as GPS trackers, on-board diagnostic (OBD) devices, and advanced driver assistance systems. In general, the data for one phase of a driver’s trip can be regarded as a time series of measurements. This project explores machine learning models, focusing on the underlying representations of driver behavior that emerge from these models and using those representations to (a) detect anomalous driving behaviors and (b) preserve private information about drivers.
Rob Nowak (UW)
Optimizing Q&A Systems via User Feedback
This project investigates theory and methods for adapting and improving Q&A systems based on user feedback. We frame this problem as a multi-armed bandit problem and draw on recent advances in that field to explore new approaches for Q&A systems. The research is intended both to improve Q&A systems and to expand the foundations and applications of multi-armed bandits.
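A toy sketch of the bandit framing: candidate answers are arms, user feedback is the reward, and a UCB rule trades off showing known-good answers against exploring uncertain ones. The answer qualities below are synthetic.

```python
# UCB1 over candidate answers with simulated user feedback.
import math
import random

true_quality = [0.3, 0.6, 0.5]   # hidden helpfulness of 3 candidate answers
counts, rewards = [0] * 3, [0.0] * 3

for t in range(1, 2001):
    ucb = [rewards[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
           if counts[a] else float("inf") for a in range(3)]
    arm = ucb.index(max(ucb))                      # show the highest-UCB answer
    reward = random.random() < true_quality[arm]   # simulated user feedback
    counts[arm] += 1
    rewards[arm] += reward
print(counts)                                      # most pulls go to answer 1
```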
Robert Holz (UW)
Machine Learning for Usage-Based Insurance
This project will investigate machine learning (ML) methods for mapping UBI multivariate time series, plus ancillary data, to a measure of driver risk. Using insurance premiums and claims to label each time series will be explored, since insurance premiums can serve as an indicator of risk. The investigation is twofold: determine whether we can accurately identify the risk level 1) of a time series and 2) (if possible) of intervals of time. The key underlying question is how this time series data should be represented for effective classification, so that the classifier is robust to location, sensitive to road conditions and prevailing behaviors, and robust to noise and other data errors.
Shivaram Venkataraman (UW)
Model Recycling: Accelerating ML Systems by Exploiting Past Computations
This project aims to automate and speed up the fine-tuning process by reusing past computations via a technique called model recycling. The insight behind model recycling is that successive training jobs in a fine-tuning workflow can share a number of computations, and re-doing computation can be avoided if intermediate models from prior training jobs are saved. A software framework that helps data scientists accelerate model fine-tuning will be developed, along with an intelligent predictor that automatically saves prior computation results based on their importance.
Song Gao (UW)
A Deep Learning Approach to User Location Privacy Protection
Location-based profiles provide an invaluable source of information for various business recommendation systems and data products in both the public and private sectors, while users increasingly raise privacy concerns, especially regarding the use of geographic space and activity patterns. One key challenge in location data pipelines lies in finding the trade-off between the level of detail in users’ location data required for business analysis and the preservation of users’ geoprivacy. The proposed research aims to develop a state-of-the-art deep learning architecture that protects users’ location privacy while preserving inference capability for location-based business recommendation. The developed algorithms can potentially be applied in usage-based insurance (UBI) and other location intelligence domains.
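The proposal's architecture is not described in detail here; as a point of reference for the privacy side of the trade-off, the sketch below applies planar Laplace noise from the geo-indistinguishability literature, a standard location-obfuscation baseline.

```python
# Planar Laplace location obfuscation (geo-indistinguishability baseline).
import numpy as np
from scipy.special import lambertw

def planar_laplace(lat, lon, epsilon, rng=np.random.default_rng()):
    """Perturb a point with planar Laplace noise; smaller epsilon = more privacy."""
    theta = rng.uniform(0, 2 * np.pi)                # random direction
    p = rng.uniform()
    # Radius via the inverse CDF of the planar Laplace distribution,
    # using the k=-1 branch of the Lambert W function (epsilon in 1/meters).
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / epsilon
    # Rough meters-to-degrees conversion (ignores longitude's cos(lat) factor).
    return lat + r * np.cos(theta) / 111_000, lon + r * np.sin(theta) / 111_000

print(planar_laplace(43.0731, -89.4012, epsilon=0.01))   # a point in Madison, WI
```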
Vikas Singh (UW)
Lightweight NLP/Vision Algorithms: Applications to Data Analysis Tasks at AmFam
An efficient and accurate deep learning based natural language processing (NLP) model is a key component in numerous applications such as text classification, entity resolution, automated question answering (i.e., chat bots), retrieval, and search. In various other settings closer to computer vision, an appropriate language model also serves to associate image content with language and/or to identify words/text often used to describe components of a scene captured in an image. These ideas form the basis of many automated image captioning systems and play an important role in identifying similarities between image content.