University of Missouri – Kansas City, Atterbury Student Success Center – Pierson Auditorium
5000 Holmes, Kansas City, MO 64110
THURSDAY, APRIL 11, 2019
10:00 AM–10:30 AM
WELCOME AND INTRODUCTIONS
Dennis Ridenour | Welcome
Mark Hoffman, PhD | Working in the Big Tent of Bioinformatics
10:30 AM – 11:30 AM
Wendy Chapman, PhD | Informatics as a Journey: How Your Expertise is Needed in the Grand Goal of Transforming Healthcare
Informatics is critical to the transformation of healthcare as we seek to increase patient safety, catalyze new discoveries, and improve clinician and patient experience. As a multi-disciplinary field, it is clear how training in computer science and engineering, health, and biostatistics contribute to this grand cause. But to be effective, informatics also needs contributions from many other fields such as psychology, human-computer interaction, linguistics, change management, and sociology. Dr. Chapman will discuss the need for multi-disciplinary contributions to informatics by describing her journey to this career as well as by sharing stories from other unlikely contributors to the field.
11:30 AM–1:00 PM
LUNCH & POSTER PRESENTATION
1:00 PM – 2:00 PM
MODERATOR: Keith Gary, PhD
Carolyn Lawrence-Dill, PhD | Computing on Biological Data in Plants: From Genome to Phenome (and Beyond)
In my research group we use data science approaches to solve plant biology and crop improvement problems by investigating the structure and function of plant genomes as well as the interrelationships among genomes, environments, and phenotypes. Our work has focused on mapping genomes and gene elements, predicting gene function, inventing new ways to link genes to phenotypic descriptions and images, developing ways to compute on phenotypic descriptions, organizing broad datasets for community access and use, and developing computational tools that enable others to do all of these sorts of analyses directly (https://dill-picl.org/projects/). These sorts of projects require not only a background in biology and computer science, but also an understanding of human/computer interaction, natural language processing, and engineering principles. As such, we work with psychologists, sociologists, linguists, engineers and others. If time allows, I will describe some challenges and opportunities that working across disciplines can bring.
Stephen Simon, PhD | Mining the Electronic Health Record
The electronic health record (EHR) offers opportunities for research and quality improvement studies that did not exist before. Data mining, the discovery of new and unexpected patterns in the data, requires a different mode of access to EHR data than more traditional hypothesis-driven studies. This talk will cover the specialized statistical and programming skills needed for data mining.
Matt Obenhaus | Standards-Based APIs in Healthcare – Where Do We Take It From Here?
The healthcare IT industry has long been challenged by the lack of interoperability across health systems that would enable data exchange to complete the longitudinal view of the person. Given this lack of interoperability, external application development for clinical and patient/consumer use cases has traditionally been a long and difficult process, with natural barriers to entry that prevented innovation and minimized the role of the digital health startup. Emerging standards-based APIs have started to positively alter this landscape. And yet, there is still a tremendous amount of growth expected and needed in the volume of connected, quality applications in the days ahead. API technologies are starting to proliferate in health systems, which could in turn start to generate the sought-after returns in application innovation.
Student Presentation: Deepak Kumar | Metagenomics Reveal Presence of a Novel Ungulate Bocaparvovirus in Alpaca Fecal Sample
Metagenomic next-generation sequencing (NGS) is becoming a popular approach for investigating the microbiome of clinical samples to determine the causative agent of disease. Here, we report the identification of a novel ungulate bocaparvovirus in an alpaca (Vicugna pacos) fecal sample by metagenomic NGS. Bocaparvovirus is a genetically diverse group of DNA viruses known to cause respiratory, enteric, and neurological illness in humans and animals. An alpaca fecal sample was submitted to the Kansas State Veterinary Diagnostic Laboratory for metagenomic NGS. The sample was filtered, nuclease treated, and subjected to viral nucleic acid extraction, followed by cDNA synthesis, metagenomic library preparation, quantification, and sequencing on an Illumina MiSeq. The FASTQ file was adapter trimmed, and quality control was performed. The complete genome of the bocaparvovirus was de novo assembled and mapped against a reference genome using CLC Genomics Workbench. Phylogenetic trees and nucleotide and amino acid percent identity tables were created using Geneious. The genome of the alpaca bocaparvovirus (AlBoV) was 5,155 nucleotides and comprised NS1, NP1, and VP1 ORFs of 2,154 bp, 507 bp, and 1,395 bp, respectively. Notably, the VP1 gene of AlBoV was the shortest among all ungulate bocaparvoviruses in GenBank. Whole-genome, NS1, NP1, and VP1 phylogenetic trees illustrated a distinct branch formed by AlBoV within Ungulate group 8, which comprises camel bocaparvoviruses. The NS1 protein showed amino acid percent identities of 57.89-67.85% to members of Ungulate group 8, below the 85% cutoff set by the International Committee on Taxonomy of Viruses (ICTV) for classifying bocaparvoviruses. This low NS1 amino acid identity qualifies AlBoV as a new species in the Ungulate bocaparvovirus 8 group. No recombination events with other ungulate bocaparvoviruses in GenBank were detected.
The combination of metagenomic NGS and bioinformatics can be useful for identifying and characterizing novel pathogens in clinical samples.
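The species call above rests on a percent identity comparison against the ICTV cutoff. As a toy sketch of that step only (the study itself used Geneious; the sequences and function names below are invented placeholders, not real NS1 data), the calculation might look like:

```python
# Toy sketch, NOT the authors' pipeline: percent amino acid identity between
# two pre-aligned sequences, checked against the ICTV 85% species cutoff.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity over aligned positions, skipping gap-gap columns."""
    if len(aln_a) != len(aln_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    compared = matches = 0
    for a, b in zip(aln_a, aln_b):
        if a == "-" and b == "-":
            continue  # column is a gap in both sequences; ignore it
        compared += 1
        if a == b and a != "-":
            matches += 1
    return 100.0 * matches / compared

def is_new_species(identity: float, cutoff: float = 85.0) -> bool:
    """ICTV criterion: NS1 identity below the cutoff supports a new species."""
    return identity < cutoff

# Invented aligned fragments for illustration:
ident = percent_identity("MSKRA-LTD", "MSKRG-LSD")  # 6 matches / 8 compared
```

In the study, NS1 identities of 57.89-67.85% would all fall well below the 85% cutoff.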
2:00 PM – 3:00 PM
DATA STANDARDIZATION AND INTEGRATION
MODERATOR: Susan Brown, PhD
Baek-Young Choi, PhD | Blockchain: Opportunities and Challenges for Healthcare
Blockchain technology has the potential to transform the healthcare ecosystem, significantly increasing the interoperability, security, and privacy of health records. However, the technology is not yet fully mature, nor can it be applied immediately to arbitrary systems. This talk will summarize the capabilities and limitations of blockchain for healthcare systems and point out operational, organizational, and societal issues in its adoption.
Martina Clarke, PhD | Increasing the Use of Personal Health Record Through User-Centered Design
The Personal Health Record (PHR) is intended to support patients’ access to data, clinical summaries, preventive care, educational materials, and medication reconciliation. It is one of the core requirements for Meaningful Use Stage 3. The PHR aims to improve medication adherence, self-management of disease, and patient-provider communication. Despite the potential benefits of PHRs, adoption has been poor, in part due to usability issues. While 75% of patients see value in a PHR, fewer than 10% are users. Current research focuses heavily on improving PHR usability to increase use, without taking into consideration patient-specific factors that affect PHR adoption. By understanding the needs of users, the use and utility of the PHR will increase.
Guoqin Yu, PhD | Microbiome and Cancer Epidemiology
The human microbiota has been shown to play a critical role in human health, and studies of the human microbiota are expanding rapidly. However, collecting and analyzing high-quality microbiota data is challenging. Techniques for microbiota data collection and analysis will be discussed, especially from the perspective of epidemiological studies.
Student Presentation: Yang Liu | Integrated Multi-Omics Data Analysis Approach to Analyze the Gut Microbiome, Metabolome and Socio-Communicative Behaviors of California Mice Developmentally Exposed to Genistein
With the development of high-throughput sequencing technology, multiple omics data types can now be generated and combined to provide more comprehensive information about biological processes than single-omics analysis. We conducted a multi-omics analysis of California mice (Peromyscus californicus) developmentally exposed through the maternal diet to genistein (GEN), a phytoestrogen present in soy that can disrupt neurobehavioral programming, and compared them with offspring derived from dams fed a control (AIN) diet to study the effects of genistein on the gut microbiome, the metabolome, and social and vocal communicative behaviors. For analytic purposes, F1 offspring were divided into 4 groups: 5 AIN females, 5 GEN females, 3 AIN males, and 3 GEN males. 16S rRNA metagenomic reads were sequenced by the Mizzou DNA Core and processed using QIIME to compare the operational taxonomic units (OTUs) identified in the various groups, and DESeq was used to select differentially abundant OTUs among groups. Metabolome analysis was conducted by the Mizzou Metabolomics Center, and MetaboAnalyst 3.0 was used for differential analysis. Social communication with strangers and ultrasonic vocalization data were collected using video tracking and acoustic recording devices. Integrative correlation analysis among differential OTUs, differential metabolites, and social and vocal behaviors was conducted using the ‘mixOmics’ package. Results from our multi-omics analysis suggest that these behavioral deficits might also be attributed to GEN-induced microbiota shifts and resultant changes in gut metabolites. The findings indicate there is cause for concern that perinatal exposure to GEN might detrimentally affect the offspring microbiome-gut-brain axis.
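At its core, the integrative correlation step relates features across omics layers. A minimal sketch of that idea (the study used the R ‘mixOmics’ package; the per-animal values below are invented, not California mouse data) is a Pearson correlation between one OTU's abundance and one metabolite's level across samples:

```python
# Minimal illustration of the correlation behind multi-omics integration.
# Data values are hypothetical, not from the experiment described above.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-animal values: one OTU's relative abundance vs. one metabolite.
otu = [0.10, 0.30, 0.50, 0.70]
metabolite = [1.0, 2.9, 5.1, 7.0]
r = pearson(otu, metabolite)  # strongly positive for this toy data
```

Real multi-omics integration computes such associations across thousands of feature pairs, with sparsity and multiple-testing control; this sketch shows only the basic building block.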
3:00 PM – 3:30 PM
3:30 PM – 4:30 PM
MODERATOR: Trupti Joshi, MBBS, ADB, MS, PhD
James Miller, PhD | Visualization in the Arts and Sciences
Visualization could be very simply defined as the use of interactive computer graphics to present data and provide insight into that data to stakeholders. In recent years, the use of visualization has become much more than simple after-the-fact presentation of results. Many researchers have come to realize that visualization is actually a powerful tool used throughout the scientific discovery process. In addition, the idea that visualization can be used as a powerful form of storytelling has emerged as an important area of research. In this presentation, I will briefly highlight a bit of what is known and not known from a theoretical perspective, and then present results from some recent projects.
Jay Unruh, PhD | Single Particle Averaging Super-Resolution Mapping of Macromolecular Complexes in Situ
The organization of megadalton complexes in living organisms represents a challenge that has been addressed in the past by Cryo-EM, X-ray, and computational modeling. These approaches are expensive and struggle to provide the throughput or context necessary for large scale mapping of cellular organization and heterogeneity. With the advent of super-resolution fluorescence microscopy, we can now resolve large scale features of such structures. We have demonstrated in situ single particle averaging methodologies to map out the organization of the yeast cell division machinery and the fruit fly synaptonemal complex. In yeast, this has allowed us to quantify novel interactions necessary for insertion of the centrosome as well as the asymmetry that orients its duplication. It is our hope that this methodology will greatly speed the progress of large scale structural modeling in many organisms.
Thomas Coffin | Experiencing Data through Interactive Visual Explorations
We are experiencing exponential growth of data in all aspects of human life, to the point that the vast amounts of data are becoming overwhelming to manage and are starting to go unused because we lack tools to extract meaningful information from the raw data. Social media, advances in sensors, new computational models, web surveys, and electronic transactions are just a few examples of data generation and collection technologies that are capturing much more than we can handle with current approaches to data analysis. Clearly, the human cognitive system enables us to scrutinize and analyze only a limited amount of the raw data we generate, thereby also limiting the quality of our scientific insight into the problem at hand. Consequently, our data-rich world has a critical need for visualization as a key component of the scientist’s tool set for discovery and insight.
Visualization, and especially interactive visualization, takes advantage of the bandwidth of the human visual system, our ability to visually identify patterns and relationships, and the ways we interact with data to extract information. This presentation explores the power of visualization to extract information from big data through an introduction to visualization and to current methods and techniques, along with some illustrative examples of work being done at the Emerging Analytics Center at the University of Arkansas at Little Rock. The presentation seeks to stimulate the audience’s imagination about what is possible and to encourage future multidisciplinary research in which visualization plays as central a role as the data-gathering approaches used to model and analyze a wide variety of problems, phenomena, situations, and training scenarios across disciplines.
4:30 PM – 5:30 PM
MODERATOR: Donna Buchanan, PhD
Kathryn Cooper, PhD | Dynamic and Reproducible Network-Based Approaches for Analysis of Temporal Gene Expression
Rooted in graph theory and the social sciences, network-based analysis of systems-level molecular data emerged approximately 20 years ago. The flexibility and aesthetic appeal of the network model have made it a popular tool for visualization, analysis, and representation of biological variables and their relationships, be these protein-protein interactions, genetic interactions, metabolomic pathways, or patient-caregiver relationships. Network analysis provides a broad-view perspective for datasets with the “volume” and “velocity” that have arrived with the dropping costs of NGS technology. Our research focuses on temporal analysis of gene expression and NGS data using network modeling, and on the reliability of these models from one dataset to another. Comparison of systems-level molecular data across the short or long term reveals insights not available through traditional methods, including the formation, maintenance, or loss of critical relationships that sustain cellular functions. In establishing the benchmarks necessary for robust temporal gene co-expression network analysis, we also aim to fill the need for efficient analytical tools that can reliably support clinical decision making and offer insights into chronic disease.
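The simplest form of a gene co-expression network connects two genes whenever the correlation of their expression profiles crosses a threshold. A stdlib-only sketch of that idea (gene names, expression values, and the 0.9 threshold are all invented for illustration; real pipelines use weighted methods such as WGCNA):

```python
# Toy co-expression network construction: an edge joins two genes whose
# expression profiles are strongly correlated across time points.
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def coexpression_edges(profiles, threshold=0.9):
    """Return gene pairs whose |Pearson r| meets or exceeds the threshold."""
    return [
        (g1, g2)
        for g1, g2 in combinations(sorted(profiles), 2)
        if abs(pearson(profiles[g1], profiles[g2])) >= threshold
    ]

# Hypothetical expression profiles across four time points:
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 4.0, 6.2, 8.1],  # tracks geneA -> edge expected
    "geneC": [5.0, 1.0, 4.0, 2.0],  # uncorrelated -> no edge
}
edges = coexpression_edges(profiles, threshold=0.9)
```

Comparing the edge sets built from different time windows is one way to see relationships form, persist, or disappear, which is the temporal question the abstract describes.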
Keith Slotkin, PhD | Mining Junk for Gold: Reuse of Filtered-Out Reads Provides a New Layer of Epigenetic Data
Deep sequencing datasets undergo filtration steps whereby sequenced reads that do not match the reference genome are discarded. We have gone back to these discarded reads and used a split-read mapping approach to remap the genomic variation in datasets compared to the reference genome. We are not detecting SNP variation, but rather insertion/deletion structural variation, and in particular the new insertion sites of genomic parasites called transposable elements. In addition to genome resequencing data, we can investigate nearly all deep sequencing experimental data types (RNA-seq, ChIP-seq, Methylome-seq, etc.) to identify both small-scale single cases of insertion and large-scale measurements of transposable element activity. Since transposable elements are well-established to be epigenetically repressed in eukaryotic cells, we now have the ability to use virtually any existing sequencing dataset to quantifiably assay the potency of epigenetic regulation in that sample, using transposable element activity as the barometer.
Michael Nassif, MD | Smartphone Based PRO Capturing
Over the past thirty years, numerous disease-specific patient-reported outcome (PRO) and screening questionnaires have been created. The tools themselves are valid, reproducible over time in stable patients, and sensitive to clinical changes when they occur. They can be used for a variety of conditions and are not only reliable but also prognostic of clinical events, costs, and mortality. They are far more reproducible than clinician-reported outcomes. Numerous advisory groups, including the Heart Failure Society of America, the National Quality Forum, the Food and Drug Administration, and patient advocates, have emphasized the value of PROs and strongly endorsed their use in quality assessment, drug/device approval, and clinical care.
However, screening health questionnaires can be time-consuming and burdensome. The process of having patients answer questions and then having a nurse or research coordinator transcribe the responses into case report forms adds labor and can introduce error. Patients themselves often free-text answers or select multiple answers for a single question, making their responses difficult to interpret and certainly altering the psychometric properties of the questionnaire. Many of these issues can be overcome with an app-based electronic questionnaire. We therefore conducted both small-scale cognitive interviews (n = 10) about equivalency and a more substantial equivalency study (n = 59) testing both the paper and app-based versions of the KCCQ-12. Patients in the Heart Failure Clinic at the Saint Luke’s Mid America Heart Institute were consented and offered participation in our study. Patients at routine clinic appointments were asked to complete a paper KCCQ-12; after a period of distraction during which they completed demographic information (a 2-page, 21-question form), they completed an electronic KCCQ-12 (on either an iPad or an iPhone 7s).
There was a mix of educational backgrounds, with 34% having a high school degree or less, 37.5% completing some college, and 29% having a college degree or advanced degree. Eighty-five percent of patients owned a smartphone, and 40% of them had an iPhone. In the qualitative interviews, all 10 respondents stated that the electronic and paper versions were asking the same questions. Eight respondents preferred the electronic version, one had no preference, and one preferred the paper version. For the quantitative equivalency, 74% of individual responses were identical between electronic and paper, 18% were one Likert response away, and 4% and 2% answered NA or skipped a question, respectively (Table 2). In total, 92% of patients had KCCQ-12 overall summary scores within 10 points, with a median difference in overall score of 2.17 points (Figures 1 and 2). In univariate analysis, owning a smartphone was the only variable potentially associated with discrepancies, with weak statistical significance: 7/8 patients (87.5%) without a smartphone had a discrepancy of more than 1 Likert response, compared with 22/48 (45.8%) who owned a smartphone (p = 0.05, uncorrected for multiple testing). The same pattern of discrepancies among those without a smartphone held across all domains of the KCCQ-12, although with weaker signals due to smaller numbers. No other characteristic, including age, sex, race, or education level, was appreciably associated with discrepancies.
We conclude that app-based collection of the KCCQ-12 is both feasible and equivalent to paper collection in patients who own a smartphone.
Student Presentation: Ellen Kerns | Mobile-Device Based Electronic Clinical Decision Support Tool Use Post Intervention
Electronic clinical decision support (ECDS) tools have demonstrated a positive impact on adherence to clinical practice guidelines when used as part of practice improvement projects. However, ECDS tool usage after project completion has not been well characterized. An ECDS tool entitled ‘Febrile Infant’ was deployed via a freely downloadable mobile app as part of the national quality improvement project ‘Reducing Variability in Infant Sepsis Evaluation’ (REVISE) conducted by the Value in Inpatient Pediatrics (VIP) Network. As usage of Febrile Infant was associated with increased adherence to REVISE performance metrics, this study aims to describe changes in usage patterns in the year following project completion. The total number of times the app was accessed (Sessions), metric-related screen views (MetricHits), and unique designated market areas (DMAs) in which sessions occurred were calculated for both the project period (01Dec16-30Nov17) and the post-project period (01Dec17-30Nov18). MetricHits, DMAs with use, and MetricHits per DMA by day were analyzed for both study periods. A changepoint model was fit to all three measures to compare rate/day patterns between the two study periods. Total Sessions, MetricHits, and number of DMAs with use were all higher in the post-project period [Table 1]. MetricHits and unique DMAs with app use each day during the post-project period rose at a significantly higher rate (p<0.001 and p=0.025) than during the project period [Figures 1-2]. The rate of MetricHits per DMA with use each day declined during the project period but increased in the post-project period (p<0.001) [Figure 3]. Sub-analyses restricted to usage in DMAs that contained a REVISE site (66 DMAs) showed consistent results. In conclusion, ECDS deployed via a freely available mobile app has the potential to achieve a sustained impact beyond the original project period, given the increasing rate of both use and reach in the post-project period shown here.
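A changepoint model splits a time series where its behavior shifts. As a much-simplified sketch of the idea (the study fit a rate/day model; this toy version finds the single breakpoint minimizing squared error for two constant-mean segments, and the daily counts are invented):

```python
# Toy single-changepoint fit: choose the split index that minimizes the
# total within-segment sum of squared errors for two constant-mean segments.

def sse(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_changepoint(series):
    """Index k splitting series into series[:k] and series[k:] with minimal SSE."""
    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical daily usage counts: low during the project, higher afterward.
daily_hits = [2, 3, 2, 3, 9, 10, 11, 9]
k = best_changepoint(daily_hits)  # break detected between the two regimes
```

Statistical changepoint packages additionally test whether the detected shift is significant and allow segment-specific slopes, which is closer to what the study reports.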
5:30 PM – 8:00 PM
COCKTAILS, NETWORKING, & DINNER
FRIDAY, APRIL 12, 2019
7:15 AM–7:45 AM
7:45 AM–8:00 AM
WELCOME & INTRODUCTIONS
Keith Gary, PhD | Thank Sponsors and Volunteers
Mark Hoffman, PhD | Welcome
8:00 AM – 9:00 AM
Scott Shearer, PhD, PE | Digital Agriculture – Disruption or Distraction?
Many data scientists argue that machine and agronomic data in agriculture fail to meet the definition of “big data.” However, the combination of technology and venture capital directed at agriculture is changing the landscape of on-farm production. The roots of the digital revolution in agriculture lie in the advent of GNSS and precision agriculture in the mid-1990s. Couple rapid advancements in technologies (i.e., wireless communications, unmanned aerial systems, robotics, and cloud computing) with a boom cycle in agriculture (2008-2015) and talk of feeding 10 billion people by 2050, and many investors and businesses now view agriculture as the new frontier. This presentation will trace the evolution of what many are calling digital agriculture – connecting the farm to the internet. Topics shaping the future of agriculture to be explored include cloud computing, the Internet of Things, data standards and exchange, federated IDs, blockchains, data ownership, data privacy and security, emerging ecosystems, broadband internet access, and supervised autonomy. The adoption of technology in agriculture is proceeding at a rapid pace, causing concern on the part of farmers and traditional agribusinesses. This presentation will provide an overview of the forces shaping the future of agriculture and some of the risks faced by those who fail to prepare for this digital revolution.
9:00 AM – 10:00 AM
MODERATOR: Ryan Moog
Douglas Marthaler, PhD | Bayesian and Bioimmunoinformatic Methods to Understand Rotavirus Genetics and Antigenicity
Our research group studies the emergence and transmission of animal viruses, aiming to understand the molecular determinants of antigenicity to protect animal and public health. Extensive research is being conducted on rotaviruses, which are a significant cause of enteric disease in many animals, including humans. While the human rotavirus A vaccine has prevented millions of infant deaths, rotavirus C has been established as a cause of neonatal piglet mortality, leading us to determine the evolutionary relationships among rotavirus C strains from multiple hosts and investigate potential zoonotic transmission. Additionally, we are using in silico methods to elucidate putative B cell epitopes on the rotavirus C virion for the development of second-generation vaccine technologies for this fastidious pathogen.
Kim Smolderen, PhD | Applying a Random Forest Algorithm to Build a Quality of Life Prediction Model for Peripheral Artery Disease: A Simple Application of a Machine Learning Algorithm
Patients with peripheral arterial disease (PAD) have a number of potential therapeutic options; however, it is currently unknown how their personal characteristics and different treatments might impact their health status (symptoms, function, and quality of life). The first step in creating personalized estimates for patients is to model 1-year health status outcomes. A wealth of information was collected on 1,275 patients with PAD enrolled in the international PORTRAIT registry. To weigh the relative importance of clinical and patient characteristics for 1-year quality of life outcomes, we explored whether a simple application of a supervised machine learning algorithm – a random forest – would be useful for selecting variables to build our quality of life model. A real-world example using the PORTRAIT observational study data will be provided.
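A real analysis would fit an actual random forest (e.g. scikit-learn's RandomForestRegressor and its feature_importances_). As a stdlib-only illustration of the underlying idea, candidate predictors can be ranked by the variance reduction achieved by their best single split, the same criterion decision trees use internally. The feature names and values below are invented, not PORTRAIT data:

```python
# Toy variable-importance proxy: rank each candidate predictor by the
# outcome-variance reduction of its single best threshold split.

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split_gain(xs, ys):
    """Largest reduction in outcome variance from one threshold split on xs."""
    base = variance(ys)
    gain = 0.0
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        weighted = (len(left) * variance(left) + len(right) * variance(right)) / len(ys)
        gain = max(gain, base - weighted)
    return gain

# Hypothetical data: 1-year quality-of-life score vs. two candidate predictors.
qol = [40, 45, 80, 85, 42, 83]
symptom_burden = [9, 8, 2, 1, 9, 2]  # strongly predictive (by construction)
shoe_size = [7, 10, 8, 9, 11, 8]     # noise

ranked = sorted(
    {"symptom_burden": symptom_burden, "shoe_size": shoe_size}.items(),
    key=lambda kv: best_split_gain(kv[1], qol),
    reverse=True,
)
```

A random forest averages this kind of split-quality signal over many bootstrapped trees, which makes the resulting importances far more stable than this one-split toy.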
Henry Yeh, PhD | Factorizing Spatially Correlated Neighborhood Characteristics and Health Outcomes in Greater Kansas City
Neighborhood characteristics may have profound effects on residents’ health. In this work, I extracted tract-level data for the Greater Kansas City metropolitan area from two databases: the 500 Cities Project and American Community Survey, and applied Bayesian Spatial Group Factor Analysis (SpGFA) to investigate relations among 4 groups or views of variables: neighborhood characteristics, unhealthy behaviors, use of preventive services, and health outcomes. SpGFA reveals multi-view relationships by a few factors while accounting for spatial dependency due to geographical structure. The resulting factor scores may be used as “environment” variables for future research, e.g. combining with genetics data of individuals for studying gene-environment relations.
Adina Howe, PhD | Digging in Dirt: Finding Signal From Noise
High throughput sequencing platforms have rapidly changed the quantity of data that can be used to explore natural environmental systems. In the GERMS lab, we integrate these technologies to understand how we can better manage our land and water resources. A constant challenge is distinguishing signal from noise, especially in very complex and heterogeneous environments. I’ll present some of the challenges, solutions, and impacts of using data analysis in two different systems: beneficial and harmful microbial interactions in agricultural soils, and the development of harmful algal blooms in Iowa freshwater.
Student Presentation: Kathryn Kyler | Increased Variability in Drug Dosing for Children with Obesity Hospitalized With Asthma
Obesity results in physiologic alterations that may be important to drug disposition. However, dosing recommendations for children with obesity remain limited, including for drugs used in asthma exacerbations. This knowledge gap may lead to variability in prescribing practices for children with obesity, posing a serious risk of under- or over-exposure to drugs. Therefore, we aimed to determine the likelihood of non-guideline-adherent dosing of steroid drugs by weight category for children hospitalized with asthma. We performed a retrospective cohort study of children aged 2-17 years who were prescribed steroids during hospitalization for asthma in the years 2010-2017, using the Cerner Health Facts® (HF) database. Doses were categorized as guideline adherent or non-guideline adherent based on NHLBI asthma guidelines (Figure 1). Total daily doses were calculated based on prescribed drug doses and frequencies. Body mass index (BMI) was calculated from documented height and weight; weight categories were defined using CDC BMI percentile guidelines. Chi-square tests determined statistical differences in non-guideline-adherent doses between weight categories. We identified 35,706 patients hospitalized for asthma exacerbations who received steroids. The majority were ages 6-10 years (45.3%), male (60.1%), African American (51.9%), and had government insurance (55.7%). Most had a healthy weight (54%), but 38.8% were overweight or obese (n = 13,889), with 5,407 patients having class I obesity (15.1%), 2,011 class II obesity (5.6%), and 1,173 class III obesity (3.3%). A substantial number of children overall received non-guideline-adherent drug doses (31.8%), and this proportion rose significantly as weight category increased, from 25.6% of the healthy weight group to 66.7% of those with class III obesity (p<0.001) (Figure 1).
Variation in prescribing practices for patients hospitalized with asthma exacerbations increases with weight category, disproportionately affecting children with severe obesity and placing them at higher risk of adverse events related to over- or under-exposure to steroid drugs.
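The chi-square test of independence mentioned above compares observed and expected counts across categories. As a sketch of the statistic only (scipy.stats.chi2_contingency would normally be used, and the counts below are hypothetical, not the study's data):

```python
# Pearson chi-square statistic for a 2-D contingency table (list of rows).
# In practice a library routine also supplies degrees of freedom and p-value.

def chi_square_statistic(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts per 1,000 children:
# rows = (non-adherent dose, adherent dose),
# columns = (healthy weight, class III obesity).
table = [[256, 667],
         [744, 333]]
stat = chi_square_statistic(table)  # large value -> strong association
```

The statistic is then compared against a chi-square distribution with (rows-1)x(cols-1) degrees of freedom to obtain the p-value the abstract reports.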
10:00 AM – 10:30 AM
10:30 AM–11:30 AM
MODERATOR: Devin Koestler, PhD
Doina Caragea, PhD | Root Anatomy Based on Root Cross-Section Image Analysis with Deep Learning
Aboveground plant efficiency has improved significantly in recent years, and this improvement has led to a steady increase in global food production. Improving belowground plant efficiency has the potential to further increase food production. However, belowground plant roots are harder to study, due to the inherent challenges of root phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed. However, the existing tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we propose a fully automated approach, called Deep Learning for Root Anatomy (DL-RootAnatomy), for identifying anatomical traits in root cross-section images. Using the Faster Region-based Convolutional Neural Network (Faster R-CNN), the DL-RootAnatomy models detect objects such as the root, stele, and late metaxylem, and predict rectangular bounding boxes around such objects. Subsequently, the bounding boxes are used to estimate the root diameter, stele diameter, and late metaxylem number and average diameter. Experimental evaluation has shown that our models can accurately detect the root, stele, and late metaxylem objects, as well as their anatomical traits.
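The post-detection step, turning a predicted bounding box into a trait estimate, is simple enough to sketch. The (x1, y1, x2, y2) box format, the coordinates, and the pixel scale below are all assumptions for illustration, not values from the paper:

```python
# Toy post-processing for a detector's output: approximate an object's
# diameter as the mean of its bounding-box width and height, scaled to microns.

def box_diameter_microns(box, microns_per_pixel):
    """Approximate diameter from an (x1, y1, x2, y2) box in pixel coordinates."""
    x1, y1, x2, y2 = box
    width, height = x2 - x1, y2 - y1
    return (width + height) / 2 * microns_per_pixel

# Hypothetical stele bounding box from the detector, at 2.5 microns/pixel:
stele_box = (100, 120, 300, 310)
diameter = box_diameter_microns(stele_box, microns_per_pixel=2.5)
```

Counting late metaxylem is analogous: count the detector's boxes for that class, then average their per-box diameter estimates.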
Andres Bur, MD | Clinical Applications of Deep Neural Network-Based Image Classification in Oncology
Deep neural networks (DNN) are a subset of machine learning that are well-suited for complex image classification tasks and form the basis for facial recognition and image search technologies. DNN have been used in oncology research to automate analysis of cross-sectional imaging, histopathologic slides and clinical photos. Deep learning has the potential to support physicians by enhancing human performance in tedious tasks, in which computers excel. Additionally, deep learning can enhance personalization of oncology care. This presentation will review current and future applications of DNN for automated image classification that will transform cancer care.
Sean McKinney | Convolutional Neural Networks as an Image Processing Swiss Army Knife
Convolutional neural networks have proven to be capable of performing a diverse range of image processing tasks. Our team has applied them to over a dozen tasks spanning 2D/3D nuclear segmentation, automated clustering of image cytometry data, spot identification, yeast segmentation, and electron microscopy segmentation. All of these were tasks that were difficult if not impossible to reliably automate using traditional processing methods and were accomplished with the same underlying architecture in a relatively brief span of time.
11:30 AM–12:30 PM
MODERATOR: Mark Nichols, PhD
Mark Clements, MD, PhD | Transforming Type 1 Diabetes Care with Advanced Machine Learning and Quality Improvement Methods
Inadequate blood glucose control in type 1 diabetes significantly increases lifelong risk for cardiovascular complications and for mortality. Most youth with type 1 diabetes are not achieving targets for blood glucose control; approximately 18–20% will experience deterioration in control between ages 8 and 16 years. The ability to predict which youth will experience a rise in hemoglobin A1c, the major biomarker of blood glucose control, would allow clinicians to use more intensive management approaches to reduce the deterioration in blood glucose control. Combining predictive analytics with quality improvement methodologies can help clinicians rapidly identify alternate management approaches that work in youth at risk for deteriorating disease control. This approach can be applied across most chronic diseases.
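The kind of risk prediction described above can be sketched with a simple logistic model. This is purely illustrative and is not the speaker's actual model: the features and weights below are invented for demonstration.

```python
import math

# Hypothetical feature weights; a real model would learn these from
# clinical data rather than hard-code them.
WEIGHTS = {
    "baseline_a1c": 0.6,          # higher starting HbA1c -> higher risk
    "missed_visits": 0.4,         # missed clinic visits -> higher risk
    "glucose_checks_per_day": -0.3,  # more self-monitoring -> lower risk
}
BIAS = -4.0

def deterioration_risk(patient):
    """Return a 0-1 score for the risk that HbA1c will rise,
    given a dict of the hypothetical features above."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link
```

A score like this could feed the quality-improvement loop the abstract describes, flagging at-risk youth for more intensive management.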
Jonathan Mitchem, MD | Overcoming Resistance to Immune-Based Therapy in Colorectal Cancer
Immune-based therapies, such as immune checkpoint blockade, have revolutionized the treatment of some difficult-to-treat malignancies. These results, however, have not translated to the majority of patients with colorectal cancer, despite a preponderance of data suggesting anti-tumor immunity is important for treatment response, recurrence, and survival in this disease. Immune-based therapy should work in colorectal cancer, but currently it does not. To tap into the potential of immune-based therapy in colorectal cancer, we must understand why this therapy works in some patients and how currently utilized therapy alters anti-tumor immunity. Answering these questions will help us devise better therapeutic strategies and then choose the right patients for each treatment.
Suzanne Arnold, MD, MHA | Improving Decision Making in Elderly Patients with Valvular Heart Disease
Valvular heart disease is incredibly common in elderly people and impacts both survival and quality of life. Both aortic stenosis and mitral regurgitation have traditionally required open-heart surgery to correct. Over the past 10-20 years, transcatheter procedures have been integrated into our treatment options to allow us to treat these valvular conditions less invasively. Given the advanced age of the patients who are candidates for these procedures and their burden of comorbidities, the decision as to whether or not to proceed with an invasive procedure (albeit less invasive than surgery) can be challenging. I will discuss work done to try to define the potential benefit of these procedures, to identify the patients most (or least) likely to benefit, and how this can be used to better inform these treatment decisions.
Kari Lane, PhD, RN, MOT | Utilizing AI and Embedded Sensors to Provide Early Alerts for Health Changes in Older Adults
TigerPlace is an independent living facility that was built and licensed to nursing home standards. TigerPlace started caring for residents in 2004 and has since installed embedded sensors in approximately 50% of resident apartments (as residents consented). This sensor system has detected changes in function and in chronic diseases or acute illnesses on average 10 days to 2 weeks before usual assessment methods or self-reports of illness. Over the past 14 years, we have monitored both the residents with sensors and those without while providing interdisciplinary care coordination to all residents. Ours is a proactive model: we strongly encourage active engagement and mobility while providing an Aging in Place atmosphere. We have collected continuous data, 24 hours a day and 7 days a week, from motion sensors that measure overall activity, an under-mattress bed sensor that captures respiration, pulse, and restlessness as people sleep, and a gait sensor that continuously measures gait speed, stride length, and stride time and automatically assesses for increasing fall risk as the person walks around the apartment. Continuously running computer algorithms are applied to the sensor data and send health alerts to staff when sensor data patterns change. We have tracked multiple health outcomes, which demonstrate a decline in cognition and mobility over time in both groups, and better quality-of-life outcomes, including ADLs, incontinence, falls, depression, social and mental functioning, and adjustment. Our findings demonstrate that sensor data with health alerts and fall alerts sent to assisted living nursing staff can be an effective strategy for detecting early signs of illness or functional decline and intervening.
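One simple form the continuously running alert algorithms could take is flagging when a resident's recent readings drift well below their own rolling baseline. This is a minimal sketch of that idea, not the TigerPlace system: the gait-speed feature, window length, and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def gait_alert(daily_speeds, window=14, z_threshold=2.0):
    """daily_speeds: chronological list of daily gait speeds (cm/s).
    Returns True when today's value falls more than z_threshold
    standard deviations below the resident's prior `window`-day
    baseline, signaling a possible functional decline."""
    if len(daily_speeds) <= window:
        return False  # not enough history to form a baseline yet
    baseline = daily_speeds[-(window + 1):-1]  # the window before today
    mu, sigma = mean(baseline), stdev(baseline)
    today = daily_speeds[-1]
    # Guard against a flat baseline (zero variance) before dividing.
    return sigma > 0 and (mu - today) / sigma > z_threshold
```

Comparing each resident against their own baseline, rather than a population norm, matches the abstract's emphasis on detecting changes in an individual's sensor data patterns.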