Machine Learning Technologies and Their Applications to Scientific and Engineering Domains Workshop – Speaker Bios and Abstracts
NASA Langley Machine Learning Workshop 2016
Confirmed Speakers’ Bios and Abstracts
Atkins | Carbonell | Codella | Duraisamy | Kumar | Lee | Long | Mavris | O’Reilly | Poczos | Pokutta | Rajan | Scheutz | Vatsavai | Xiao
Dr. Ella Atkins, University of Michigan
Dr. Ella Atkins is an Associate Professor in the Department of Aerospace Engineering at the University of Michigan, where she is director of the Autonomous Aerospace Systems (A2SYS) Lab. Dr. Atkins holds B.S. and M.S. degrees in Aeronautics and Astronautics from MIT and M.S. and Ph.D. degrees in Computer Science and Engineering from the University of Michigan. She previously served on the Aerospace Engineering faculty at the University of Maryland, College Park. Dr. Atkins is past chair of the AIAA Intelligent Systems Technical Committee, an AIAA Associate Fellow, an IEEE senior member, a small public airport owner/operator (Shamrock Field, Brooklyn, MI), and a private pilot. She served on the National Academies’ Aeronautics and Space Engineering Board (ASEB) (2011-2015 term), was a member of the Institute for Defense Analyses’ Defense Science Study Group (DSSG) (2012-2013), and recently served on an NRC committee to develop an autonomy research agenda for civil aviation (2013-2014).
Title: New Data Sources to Revolutionize UAS Situational Awareness and Minimize Risk
Acceptable levels of safety for small unmanned aircraft systems (sUAS) will be provided by a combination of onboard and cloud-based sensing, processing, and data access. Historically, manned aircraft and sUAS have relied on onboard sensing supplemented by aviation charts to facilitate navigation and communication. New non-traditional data sources can also be exploited to improve situational awareness and decision-making. Onboard vision, lidar, and/or radar can map the local environment, while cloud access and multi-vehicle networking are essential for over-the-horizon awareness. This presentation will describe static and dynamic data sources and their use for pre-flight and in-flight sUAS perception and decision-making. An sUAS pre-flight planner can utilize static terrain, building, road, and census data to select an obstacle-free flight route that minimizes risk due to population overflight. In flight, a combination of static data and dynamic data, e.g., from cell phones and cars, can further improve decision-making to ensure any risky maneuvers or emergency landings are conducted away from people. New constraints and cost metrics related to privacy, annoyance, and property ownership and zoning will be essential to ensure low-altitude sUAS flights respect trespass and disorderly conduct laws as well as FAA airspace use and time/energy constraints. Progress toward infusion of new data sources into sUAS decision-making will be summarized, and a roadmap to sUAS safety through novel data mining and real-time access will conclude the presentation.
Dr. Sebastian Pokutta, Georgia Institute of Technology
Dr. Sebastian Pokutta received his master’s degree in 2003 and his Ph.D. in 2005, both in Mathematics from the University of Duisburg-Essen in Germany, and then worked as a postdoctoral fellow at the MIT Operations Research Center. Upon completion, Pokutta was appointed as an optimization specialist at IBM ILOG and later joined KDB Krall Demmel Baumgarten, a risk management consultancy. He then returned to academia as a research scientist at the Technische Universität Darmstadt and was a visiting lecturer at MIT. Prior to joining Georgia Tech, Pokutta worked as a professor at the University of Erlangen-Nürnberg. Pokutta’s research concentrates on polyhedral combinatorics at the intersection of combinatorial optimization and theoretical computer science, as well as the combination of machine learning and optimization. Pokutta has also worked on applications and combinations of optimization methods and machine learning in order to leverage data in the context of pressing industrial and financial challenges. These areas include supply chain management, manufacturing, cyber-physical systems (incl. industrial internet, industry 4.0, internet of things), and finance.
Title: Machine Learning in Engineering: Applications and Trends
Machine learning has regained significant attention in the last few years due to the convergence of three major factors: new computing architectures, abundant data through cheap sensors, and new learning algorithms. This convergence has led to major breakthroughs in machine learning itself (e.g., deep learning and reinforcement learning) as well as in applications of machine learning to problems from a wide range of disciplines. Machine learning also holds strong promise for engineering disciplines, with a potential paradigm shift in the research and development process toward relying more heavily on data-derived models rather than traditional hypothesis-driven approaches. In this talk I will provide an overview of machine learning with a focus on applications in the engineering domain. I will survey current showcase applications as well as promising future directions.
Dr. Chris Codella, IBM
Dr. Christopher F. Codella is an IBM Distinguished Engineer and Public Sector Chief Technology Officer at IBM Watson Group. His current activities center around complex data analytics, particularly applying IBM’s Watson technology to build solutions that support the missions of government agencies at all levels. Dr. Codella earned his Ph.D. in electrical engineering from Cornell University in 1984. He first joined IBM in 1979 at the East Fishkill, NY semiconductor lab working on advanced memory devices. In 1989 he became a Research Staff Member in the Computer Science Department at the IBM Thomas J. Watson Research Center, where he managed the Virtual Worlds Group developing software for collaborative, networked virtual environments. He led a team in development of object-oriented application server technology, distributed systems, reusable software components, component architecture, and web services. He has received IBM’s Invention Achievement Awards and Outstanding Technical Achievement Awards for his contributions to Enterprise Java and J2EE, and for leading the development of IBM Research’s Global Technology Outlook for 2003. He is a Senior Member of the Institute of Electrical and Electronics Engineers, author of numerous professional publications and conference papers, and holder of several U.S. and international patents.
Title: Cognitive Computing and IBM Watson in Research, Operations, and Medicine
Cognitive computing, and Watson in particular, are applied as advisors or assistants to engineering and scientific professionals through interaction using natural language. Rather than replacing people, these technologies amplify human abilities such as insightful reasoning, creativity, and intuition gained through experience. Watson can read vast collections of documents, selected by experts, to build a knowledge base in a specific domain, whether specialized or broad. When trained, it can quickly respond to questions with information it deems probabilistically relevant to what is asked, supplying evidence-based comparative hypotheses. In this way, Watson can help technical professionals and decision makers, including physicians, reach conclusions based on the most complete collection of supporting information, more than is possible for any expert to read on their own. This talk will explain the main concepts of cognitive computing, the Watson system itself, and example applications in a variety of use cases relevant across fields in science, medicine, and engineering.
Dr. Barnabas Poczos, Carnegie Mellon University
Dr. Barnabás Póczos is an assistant professor in the Machine Learning Department at the School of Computer Science, Carnegie Mellon University. His research interests lie in the theoretical questions of statistics and their applications to machine learning. Currently he is developing machine learning methods for advancing automated discovery and efficient data processing in applied sciences including health sciences, neuroscience, bioinformatics, cosmology, agriculture, robotics, civil engineering, and materials science. In 2001 he earned his M.Sc. in applied mathematics at Eötvös Loránd University in Budapest, Hungary. In 2007 he obtained his Ph.D. in computer science from the same university.
Title: Applied Machine Learning for Design Optimization in Cosmology, Neuroscience, and Drug Discovery
In this presentation we will review some recent advances in machine learning. First we will discuss new nonparametric machine learning methods that can process large and complex distribution- and function-valued data sets, and then we will present some novel theoretical results in Bayesian design optimization using Gaussian processes. The proposed algorithms and the theoretical results will be illustrated with applications in cosmology, neuroscience, and drug discovery.
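The Bayesian design optimization with Gaussian processes mentioned in the abstract can be illustrated with a minimal one-dimensional sketch, assuming a squared-exponential kernel and an upper-confidence-bound acquisition rule (all function names and parameter values below are illustrative choices, not taken from the talk):

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential covariance between row vectors in A and B."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2 * length ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at query points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # jitter for stability
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def bayes_opt(f, n_iter=15, beta=2.0, seed=0):
    """Maximize f on [0, 1]: fit surrogate, maximize acquisition, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(3, 1))                    # a few initial designs
    y = np.array([f(x[0]) for x in X])
    grid = np.linspace(0, 1, 200)[:, None]          # candidate designs
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        xn = grid[np.argmax(mu + beta * sd)]        # upper confidence bound
        X = np.vstack([X, xn])
        y = np.append(y, f(xn[0]))
    return X[np.argmax(y), 0], y.max()
```

In practice the kernel hyperparameters would be fit to the data, and acquisition functions such as expected improvement are common alternatives; the structure of the loop is the same.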
Dr. Lyle Long, Pennsylvania State University
Winner of the IEEE Gordon Bell Prize in 1993, Dr. Lyle Long is a Distinguished Professor of Aerospace Engineering, Computational Science and Mathematics at the Pennsylvania State University, where he directs the Computational Science Graduate Minor Program and the Graduate Program in Neuroscience. Dr. Long’s research and teaching interests include computational science and engineering, software engineering, unmanned air vehicles, neural networks, intelligent systems, high performance computing, and computational fluid dynamics. Dr. Long earned his doctor of science from George Washington University and holds a bachelor’s degree in mechanical engineering from the University of Minnesota and a master’s degree from the department of aeronautics and astronautics at Stanford. His doctoral research was on helicopter rotor blades, specifically a simulation of compressible aerodynamics using the wave equation from acoustics. Today, Long’s research interests have widened to encompass intelligent systems and cognitive architectures for mobile robots, legged robots, large-scale spiking neural networks and learning, computational fluid dynamics, and object-oriented programming. Dr. Long is currently a Fellow of AIAA and of APS.
Title: Toward Human-Level (and Beyond) Artificial Intelligence
This talk will describe possible approaches to achieving human-level artificial intelligence. Artificial intelligence began in roughly 1956 at a conference at Dartmouth College. The participants, and many researchers after them, were clearly overly optimistic. As with many new technologies, the technology was oversold for many decades. Computers at the time could only perform about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore’s law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference, we have computers on the order of the performance of the human brain (10^16 operations per second), even if they are a million times less efficient (in terms of power and space) than the human brain. The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how human neurons are wired together, or how closely we need to match the brain’s architecture. Human-brain-scale simulations are nevertheless now feasible on massively parallel supercomputers, and with careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization these simulations can be performed. Artificial consciousness and emotions will also be possible.
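For readers unfamiliar with neuron models, the contrast between the detailed Hodgkin-Huxley equations and the large-scale simulations mentioned above can be illustrated with a leaky integrate-and-fire neuron, the simplest common spiking model. The sketch below (parameter values are generic textbook choices, not from the talk) integrates the membrane voltage with forward Euler and records spike times as events:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r=1e7):
    """Leaky integrate-and-fire neuron driven by an input current trace.

    current : sequence of input currents (amperes), one per time step.
    Returns the list of spike times (seconds).
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # forward-Euler step of the membrane equation tau*dv/dt = (v_rest - v) + R*I
        v += dt / tau * (v_rest - v + r * i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike event
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

Event-driven simulators exploit exactly this structure: between spikes the state evolves locally, so only the discrete spike events need to be communicated across a parallel machine.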
Dr. Tsengdar Lee, NASA
Dr. Tsengdar Lee currently manages the NASA High-End Computing Portfolio. He is responsible for maintaining the high-end computing capability that supports the aeronautics research, human exploration, scientific discovery, and space operations missions within NASA. He also leads the NASA Weather Focus Area; in this role, he is responsible for setting the strategic direction in weather research and development. He chairs the High-End Computing Interagency Working Group (HECIWG) under the Federal Networking and Information Technology Research and Development (NITRD) program. He served as NASA Chief Technology Officer for Information Technology (CTO-IT) between 2011 and 2012, in which role he set up the IT-Labs at NASA and invested in cloud computing and big data research and development projects. He joined NASA in 2001 as the High-End Computing Program Manager for the Earth Science Enterprise, responsible for the Earth science computational modeling needs. His work primarily focused on weather and climate modeling. Between 2002 and 2006, he managed the Earth Science Global Modeling Program, funding research efforts to study global climate change, weather forecasting, and hurricane prediction problems.
Title: NASA Earth Science Knowledge Network
Authors: CMU/Jia Zhang, NASA/Rahul Ramachandran & NASA/Tsengdar Lee
IBM Watson won Jeopardy! in 2011. Google’s AlphaGo won the game of Go in March 2016. The convergence of computing resources and maturing big data technologies has ushered in an era now called Artificial Intelligence 2.0 (AI2.0). We are beginning to see powerful applications built on top of the knowledge extracted from vast amounts of data. Back in 2011, I posed the question of whether we could apply Watson technology in our research enterprise, especially our data systems. The end goal would be to provide a one-stop gateway able to proactively recommend personalized datasets, tools, and algorithms, as well as experience. Only recently have we gathered the necessary pieces of the puzzle and started working. In this talk, we will provide an update on our progress, including the necessary technologies that are incorporated into the current project.
Dr. Una-May O’Reilly, ALFA Group, CSAIL MIT
Dr. Una-May O’Reilly leads the AnyScale Learning For All (ALFA) group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). She has expertise in scalable machine learning, evolutionary algorithms, and frameworks for large-scale, automated knowledge mining, prediction, and analytics. She educates the forthcoming generation of data scientists, teaching them how to develop state-of-the-art techniques that address challenges spanning data integration to knowledge extraction. She is the author of over 100 academic papers. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), an editor for Evolutionary Computation (MIT Press), and an action editor for the Journal of Machine Learning Research. She advises PatternEx, a Silicon Valley security company preventing data breaches by using AI to extract behavioral patterns from big data. Her most recent research with students has resulted in two patents that have spun out startups: Cardinal Wind, co-founded by Teasha Feldman-Fitzthum, and DataSight, co-founded by Will Drevo. A third patent arose from serving as VP Engineering at Icosystem, which provides predictive modeling using complexity science. The Stealth project has also generated IP. She holds a B.Sc. from the University of Calgary, and an M.C.S. and Ph.D. (1995) from Carleton University, Ottawa, Canada.
Title: Data-Driven Artificial Intelligence with Machine Learning
Machine learning underpins “data-driven” Artificial Intelligence, providing the ability to predict complex events from predictive cues within streams of data. Extending beyond an algorithm, it is embedded within the human endeavor to understand human and system behavior. It involves the translation of knowledge into problem definitions and hypotheses that are automatically explored. It includes the transformation of simple, observational data into complex, meaningful features. It creates the new role of data scientist. I will outline how my research group, ALFA (AnyScale Learning For All), is investigating challenges to rapid and effective predictive modeling.
Dr. Matthias Scheutz, Tufts University
Matthias Scheutz is a Professor in Cognitive and Computer Science in the Department of Computer Science, and an Adjunct Professor in the Department of Psychology at Tufts University. He earned a Ph.D. in Philosophy from the University of Vienna in 1995 and a Joint Ph.D. in Cognitive Science and Computer Science from Indiana University Bloomington in 1999. He has more than 200 peer-reviewed publications in artificial intelligence, artificial life, agent-based computing, natural language processing, cognitive modeling, robotics, human-robot interaction, and foundations of cognitive science. As Director of the Human-Robot Interaction Laboratory at Tufts, his current research focuses on complex cognitive and affective robots with natural language capabilities for natural human-robot interaction.
Title: Intelligent Agents: One-Shot Learning through Task-Based Natural Language Dialogues
Open-world tasks in which unforeseen events and aspects will likely come up during task performance require artificial agents to deal with unknown information and possibly learn new capabilities to handle the unforeseen aspects. Critically, bottom-up statistics-based machine learning techniques are typically not applicable here, due to time pressure and/or lack of available data. Rather, heavily top-down constrained learning mechanisms are necessary to quickly acquire new knowledge from only a few exemplars. In this presentation, I will provide a high-level overview of our attempts to develop novel dialogue-based one-shot learning techniques for open-world tasks. In addition to showing various robot demos of one-shot learning, I will discuss the many challenges ahead in generalizing the proposed approaches to different scenarios and interaction contexts.
Dr. Dimitri Mavris, Georgia Institute of Technology
Dimitri Mavris earned his B.S. (1984), M.S. (1985), and Ph.D. (1988) in Aerospace Engineering from Georgia Tech. He is the chaired professor of Advanced Aerospace Systems Analysis in Georgia Tech’s School of Aerospace Engineering, a Regents Professor, and Director of its Aerospace Systems Design Laboratory (ASDL). He is the newest S.P. Langley NIA Distinguished Professor, an AIAA Fellow, and a member of the ICAS Executive Committee, the AIAA Institute Development Committee, and the US Air Force Scientific Advisory Board. He is also the Director of the AIAA Technical, Aircraft and Atmospheric Systems Group. For the past 20+ years, Prof. Mavris and ASDL have specialized in the integration of multi-disciplinary physics-based modeling and simulation tools. ASDL’s signature methods streamline the process of integrating parametric simulation toolsets and enable huge runtime improvements that facilitate large-scale design space exploration and optimization under uncertainty. Recent research focuses on combining these methods with advances in computing to enable large-scale virtual experimentation for complex systems design.
Title: Application of Machine Learning and Data Analytics for Aircraft Design
Aircraft systems are complex, and the design of a new, next-generation vehicle is an intricate, multi-disciplinary problem. Throughout the design lifecycle, engineers build and exercise complex modeling and simulation environments that enable them to explore the potential performance of the entire design space. Early in the design process there are many unknowns regarding all of the relevant design variables. Therefore, the design space is vast and there are a large number of designs that need to be analyzed. The complexity of the modeling environments utilized for this analysis can cause them to have high runtimes, which can prohibit exploration of the entire space. Furthermore, when data for a large number of designs exist, it can be difficult to visualize the results in a meaningful way that enables the identification of trends. Researchers at Georgia Tech’s Aerospace Systems Design Lab have utilized advances in the fields of machine learning and data analytics to create new opportunities for the exploration of the design space. ASDL has embraced the latest techniques and technologies in the development of visual representations and interactive environments and dashboards. Current and past visualization efforts have focused on supporting the design exploration of complex systems, increasing the understanding of existing systems, forecasting the evolution of current systems through a combination of data-driven models and predictive modeling capabilities, and guiding decision making. Furthermore, research efforts have made use of a number of data cleansing techniques, automated data mining, and both supervised and unsupervised machine learning approaches to handle the identification of trends, perform root cause analysis, and perform predictive analyses to identify future risks.
Dr. Heng Xiao, Virginia Polytechnic Institute and State University
Dr. Heng Xiao is an Assistant Professor in the Department of Aerospace and Ocean Engineering at Virginia Tech. He holds a bachelor’s degree in Civil Engineering from Zhejiang University, China, a master’s degree in Mathematics from the Royal Institute of Technology (KTH), Sweden, and a Ph.D. degree in Civil Engineering from Princeton University, USA. Before joining Virginia Tech in 2013, he worked as a postdoctoral researcher at the Institute of Fluid Dynamics at ETH Zurich, Switzerland, from 2009 to 2012. His current research interests lie in quantifying and reducing model uncertainty in RANS simulations with data assimilation and machine learning techniques. More information can be found in the manuscripts on the presenter’s website.
Title: A Physics-Informed Machine Learning Framework for RANS-Based Predictive Turbulence Modeling
Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties into the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. In this talk, I will present a data-driven, physics-informed machine learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS-modeled Reynolds stresses based on machine learning algorithms, (2) propagation of improved Reynolds stresses to quantities of interest with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Specifically, high-fidelity simulation data from a few flows (e.g., curved backward step, channel with wavy wall) are used to train the discrepancy functions of Reynolds stress, which are subsequently used to predict flows in a new geometry (channel with periodic hills) that is not present in the training flow database. Significant improvements over the baseline RANS predictions are observed. Moreover, in all test cases the agreement between predictions and benchmark data correlates well with the confidence assessment results. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence modeling in the era of big data.
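The first and third components of the framework, learning the Reynolds-stress discrepancy as a function of mean-flow features and assessing predictive confidence from distances in feature space, can be caricatured in a few lines. In the sketch below, k-nearest-neighbour regression stands in for the actual machine learning algorithm used in the work, and all names and shapes are illustrative assumptions:

```python
import numpy as np

def train_discrepancy(features, discrepancy):
    """Store training pairs: mean-flow features -> Reynolds-stress discrepancy."""
    return np.asarray(features, float), np.asarray(discrepancy, float)

def predict_discrepancy(model, query, k=3):
    """Predict the discrepancy at a new mean-flow feature point and return a
    distance-based score mirroring the framework's a priori confidence check."""
    X, y = model
    q = np.asarray(query, float)
    d = np.linalg.norm(X - q, axis=1)       # distances in feature space
    idx = np.argsort(d)[:k]                 # k nearest training flows
    w = 1.0 / (d[idx] + 1e-12)              # inverse-distance weights
    pred = (w @ y[idx]) / w.sum()
    distance_score = float(d[idx].mean())   # small = query near training data
    return pred, distance_score
```

A query far from all training flows yields a large mean distance, flagging low a priori confidence, which mirrors component (3) of the framework.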
Dr. Karthik Duraisamy, University of Michigan
Dr. Duraisamy obtained a doctorate in aerospace engineering and master’s degree in applied mathematics from the University of Maryland, College Park. Prior to his appointment in 2013 at the University of Michigan, he spent time at Stanford University and the University of Glasgow. At the University of Michigan, he is the founding director of the Center for Data-driven Computational Physics, which involves 10 affiliated faculty members and is focused on deriving data-driven solutions to complex multi-physics problems in many fields. His other research interests are in turbulence modeling and simulations, numerical methods and reduced-order modeling.
Title: Data-driven Turbulence Modeling: Current Advances and Future Challenges
With the recent growth in high performance computing and measurement resolution, and the capability to learn from large volumes of data, there is an opportunity to use data to inform predictive models of turbulence. While the general idea of data-driven modeling appears intuitive, the process of obtaining useful predictive models from data is less straightforward. This is especially true in turbulence modeling, because there is no proof that a sufficiently accurate single-point closure model is waiting to be discovered. This talk will discuss a coordinated approach of experimental design, data decomposition, statistical inference, machine learning, and physical modeling with the goal of improving the predictive capabilities of turbulence models in a practical setting. Pragmatic choices that are made at every step of the process will be particularly emphasized. When the machine-learning-generated model forms are embedded within a standard solver setting, we show that much improved predictions can be achieved, even in geometries and flow conditions that were not used in model training. The use of very limited data (such as the measured lift coefficient) as an input to construct comprehensive model corrections provides a renewed perspective on the use of vast, but sparse, amounts of available experimental data for developing predictive turbulence models. The applicability of different machine learning algorithms, including Gaussian processes, neural networks, and genetic programming, will be discussed in the specific context of turbulence modeling.
Dr. Krishna Rajan, University at Buffalo: the State University of New York
In 2015, Dr. Krishna Rajan was appointed as the Erich Bloch Endowed Chair of the newly founded Department of Materials Design and Innovation (MDI) and Empire Innovation Professor at the University at Buffalo: the State University of New York. Prior to his new appointment, he was the Wilkinson Professor of Interdisciplinary Engineering at Iowa State University with dual appointments in the Departments of Materials Science & Engineering, and the Bioinformatics & Computational Biology Program. He is noted for his pioneering contributions to the application of informatics techniques and the development of innovative accelerated computational methods for design, discovery and characterization of novel materials. He received his undergraduate degree from the University of Toronto and his Sc.D. in Materials Science from Massachusetts Institute of Technology (MIT) with a minor in Science and Technology Policy. His postdoctoral appointments were at MIT and Cambridge.
Title: Materials Informatics: Mining and learning from data for accelerated design and discovery
From a methodological perspective, many of the key research problems in materials science are well suited to the fields of machine learning and data mining. The fundamental tenets of data mining, which include classification, prediction, association discovery, and outlier detection, are in fact the foundations of how many materials science theories and experiments are defined. In this presentation we provide an overview of how we can harness the tools of machine learning to extract structure-processing-property relationships for materials design. It is shown that the data mining approach provides additional or new insights into materials behavior that would have been difficult to discover using only experiments and/or physical modeling. This knowledge discovery process is shown to significantly accelerate materials design and discovery.
Dr. Vipin Kumar, University of Minnesota
Dr. Vipin Kumar is a Regents Professor at the University of Minnesota, where he holds the William Norris Endowed Chair in the Department of Computer Science and Engineering. Kumar’s current research interests include data mining, high-performance computing, and their applications in climate/ecosystems and biomedical domains. Kumar is the Lead PI of a 5-year, $10 million project, “Understanding Climate Change – A Data Driven Approach“, funded by the NSF’s Expeditions in Computing program that is aimed at pushing the boundaries of computer science research. He also served as the Head of the Computer Science and Engineering Department from 2005 to 2015 and the Director of the Army High Performance Computing Research Center (AHPCRC) from 1998 to 2005. His research has resulted in the development of the isoefficiency metric for evaluating the scalability of parallel algorithms, as well as highly efficient parallel algorithms and software for sparse matrix factorization (PSPASES) and graph partitioning (METIS, ParMetis, hMetis). He has authored over 300 research articles, and has coedited or coauthored 10 books, including the two textbooks “Introduction to Parallel Computing” and “Introduction to Data Mining”, which are used worldwide and have been translated into many languages. Kumar is a Fellow of the ACM, IEEE and AAAS. Kumar received the B.E. degree in Electronics & Communication Engineering from Indian Institute of Technology Roorkee (formerly, University of Roorkee), India, in 1977, the M.E. degree in Electronics Engineering from Philips International Institute, Eindhoven, Netherlands, in 1979, and the Ph.D. degree in Computer Science from the University of Maryland, College Park, in 1982.
Title: Big Data in Climate: Opportunities and Challenges for Machine Learning and Data Mining
URL: http://www.cs.umn.edu/~kumar
This talk will present an overview of research being done in a large interdisciplinary project on the development of novel data mining and machine learning approaches for analyzing the massive amounts of climate and ecosystem data now available from satellite and ground-based sensors and from physics-based climate model simulations. These information-rich data sets offer huge potential for monitoring, understanding, and predicting the behavior of the Earth’s ecosystem and for advancing the science of global change. This talk will discuss challenges in analyzing these data sets and some of our research results. Research funded by the NSF Expeditions in Computing Program and NASA.
Dr. Jaime Carbonell, Carnegie Mellon
Dr. Carbonell currently leads the Boeing/Carnegie Mellon Aerospace Data Analytics Lab, which finds ways to use artificial intelligence and big data to capitalize on data generated in the design, construction, and operation of modern aircraft. University Professor and Allen Newell Professor of Computer Science, Jaime G. Carbonell joined the Carnegie Mellon community as an assistant professor of computer science in 1979, and has gone on to become a widely recognized authority in machine translation, natural language processing, and machine learning. Carbonell has invented a number of well-known algorithms and methods during his career, including proactive machine learning and maximal marginal relevance for information retrieval. His research has resulted in or contributed to a number of commercial enterprises, including Carnegie Speech, Carnegie Group, and Dynamix Technologies. In addition to his work on machine learning and translation, Carbonell also investigates computational proteomics and biolinguistics, fields that take computational tools used for analyzing language and adapt them to understanding biological information encoded in protein structures. This process leads to increased knowledge of protein-protein interactions and molecular signaling processes. Before joining the Carnegie Mellon faculty, Carbonell earned bachelor’s degrees in mathematics and physics at the Massachusetts Institute of Technology, and his master’s degree and Ph.D. in computer science at Yale University.
Title: Transfer Learning and Data Science for Aerospace
The growth of data in all forms (numeric, textual, sensor-based, images, etc.) has led to a boom in the machine learning methods that power data science. However, the relative paucity of “labeled” data, that is, expert-curated interpretations of the data, means that standard supervised machine learning methods must be extended by techniques such as unsupervised methods, active learning, and transfer learning. The presentation addresses some new forms of transfer/multi-task learning, augmented with proactive learning, and discusses applications to aerospace and related challenges.
Dr. Ranga Raju Vatsavai, North Carolina State University, and Oak Ridge National Laboratory, USA
Raju Vatsavai joined the Department of Computer Science at North Carolina State University in August 2014 as a Chancellor’s Faculty Excellence Program Cluster Associate Professor in Geospatial Analytics. Before that, he was a lead data scientist at Oak Ridge National Laboratory. Raju is an interdisciplinary scientist known for innovative contributions to large-scale spatial and spatiotemporal data management and spatial data mining. His overarching geospatial analytics research spans big data management, data mining, and high performance computing, with applications in national security, geospatial intelligence, natural resources, climate change, location-based services, and human terrain mapping. In addition, as the associate director of the Center for Geospatial Analytics, Raju plays a leadership role in developing and executing the strategic vision for spatial computing research at NCSU. He holds M.S. and Ph.D. degrees in computer science from the University of Minnesota.
Title: Global Earth Observations Based Machine Learning Framework for Monitoring Critical Natural and Man-made Infrastructures
Earth is a dynamical system, continually changing due to both natural and human-induced factors. Understanding the complex interactions between various competing systems, including food, energy, water, and climate, is critical for building a sustainable future. The recent decade has witnessed major changes on the Earth, for example, deforestation, varying cropping and human settlement patterns, and crippling damage due to disasters. Accurate assessment of the damage caused by major natural and anthropogenic disasters is becoming critical due to increases in human and economic loss. This increase in loss of life and severe damage can be attributed to the growing population, as well as human migration to the disaster-prone regions of the world. Rapid assessment of these changes and dissemination of accurate information are critical for creating effective response systems. Global earth observations, with a constellation of more than 100 operational satellites, are providing unprecedented spatiotemporal data coverage, which can be exploited to continuously monitor these key resources. We present recent advances in data mining and machine learning approaches for analyzing multitemporal and very high-resolution remote sensing big data. In particular, the following topics will be presented: (i) Gaussian process (GP) based approaches for change monitoring, (ii) multi-instance learning based approaches for settlement mapping, and (iii) semantic classification based critical infrastructure monitoring.