Value-centered Information Theory for Adaptive Learning, Inference, Tracking, and Exploitation
Tactical sensing and actuation systems are inundated with diverse, high-volume data. Much of this data is uninformative and irrelevant to the end task of the system, which can itself evolve over the course of the mission. The problem of extracting and exploiting the relevant and informative portion of sensor data has been an active area of research for several decades. Despite some progress, notably in information-driven tracking and data fusion, a general solution framework remains elusive, especially for autonomous and distributed sensing systems. What is needed is a comprehensive set of principles for task-specific information extraction and information exploitation that can be used to design the next generation of autonomous and adaptive sensing systems. These principles must go beyond standard information-theoretic approaches, which fail to account for non-classical information structures arising from factors such as: small sample sizes, poorly specified target and clutter models, feedback control actions, hostile or adversarial environments, computation/communication constraints, distributed sensing resources, and time-critical decision making.
The mappings from data to information (information extraction) and from information to action or decision (information exploitation) constitute the backbone of active sensing systems. However, virtually all existing approaches for specifying these mappings are based on ad hoc methods and fail to provide a useful measure of either the quality (quantified uncertainty) or the value (exploitation utility) of information in terms of system objectives. For example, a heuristic information extraction strategy such as principal components analysis (PCA) mistakenly equates high variance with high information content. As another example, current designs often decouple the strategies for platform control (plan-ahead modality, sensor position or look angle) and for target prediction (future location, behavior, interactions), even though the separation theorem of estimation and control does not hold for autonomous systems; this decoupling leads to poor system performance in many cases.
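The PCA failure mode above can be made concrete with a minimal synthetic sketch (all data and dimensions here are hypothetical, chosen only to illustrate the point): when the task-relevant coordinate has low variance and a task-irrelevant nuisance coordinate has high variance, the leading principal component captures the nuisance and discards nearly all class information.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Class label is carried by a LOW-variance coordinate; a second,
# HIGH-variance coordinate is pure nuisance with no class information.
labels = rng.integers(0, 2, n)
informative = np.where(labels == 1, 1.0, -1.0) + 0.2 * rng.standard_normal(n)
nuisance = 10.0 * rng.standard_normal(n)
X = np.column_stack([nuisance, informative])
X = X - X.mean(axis=0)

# PCA direction = leading right singular vector of the centered data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]  # projection onto the top-variance direction

def separation(z, y):
    """Normalized between-class mean separation along projection z."""
    return abs(z[y == 1].mean() - z[y == 0].mean()) / z.std()

sep_pc1 = separation(pc1, labels)        # near zero: PC1 is the nuisance axis
sep_inf = separation(X[:, 1], labels)    # large: the low-variance axis is the task axis
```

Here PC1 aligns almost perfectly with the uninformative high-variance axis, so ranking directions by variance inverts the task-relevant ordering.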
The problem in both of the above examples is a failure to correctly assess the value of information, resulting in conflation of the goals of information extraction and information exploitation. This value will be a function of many factors, including the final mission objective, modality-dependent data-acquisition and switching costs, modeling uncertainty, countermeasure tactics of adversaries, and the data relevancy horizon. Quantitative evaluation of this value requires physics-based sensor models, assessment of contextual information (which may come from conflicting sources), quantification of uncertainty, and bounds on learning and adaptation rates. Therefore, completely physics-based, completely learning-based, or completely context-based approaches to assessing value of information will likely fail. To make progress we will need to blend together the best of all three approaches in a systematic analytical framework. This framework must accommodate sensor control actions for which the performance measure can be continuously updated as new data comes in, so as to ensure effective sensor management. It must also accommodate distributed autonomous networks of sensors and resources that operate in a hostile environment with limited communications bandwidth and processing power. Finally, the framework must not only provide a theory of information value but must also lead to practical, improved strategies for assessing, extracting, and exploiting information in real time, using distributed, limited-capability sensor processing nodes.
Our research program is laying the foundations for a new systems theory that applies to general controlled information gathering and inference systems. Our research approach comprises three inter-related research themes. These themes are:
- information-driven structure learning and representation
- distributed information fusion
- active resource management for effective information exploitation
We aim to develop an end-to-end solution that will result in better raw sensor data acquisition and processing, improved fusion of multiple sources and modalities, and more effective sensor management and control. We will validate our new theory and algorithms on third-party databases and on the autonomous sensing testbed in co-PI Jon How’s lab at MIT.
Information-driven Learning and Representation
Thrust 2.1 addresses learning informative and predictive models that account for the sequential nature of data collection in active sensing systems, such as autonomous maneuvering robots with vision/IR/LIDAR capabilities. Quantifying the value of information will be essential, but there exists no suitable theory applicable to such systems. Classical Shannon information theory is inadequate, as it was designed for data transmission in communications systems rather than for learning in active sensing systems. The principal objective of Thrust 2.1 is to develop and apply a new theory for learning the value of information that:
- accounts for real-time feedback and control of the sensor
- applies to signals that are non-linearly embedded in high dimension
- accounts for models with complex structural components, e.g., hierarchical graphical models of interactions in the scene
- scales computationally to large distributed sensor systems
- accounts for the economic or human cost of acquiring data or fielding a new sensor
Our premise is that, in the context of decision and control tasks for high-dimensional, highly uncertain and highly structured data, the notions of information and control are inextricably tied together: one must exercise control in order to acquire information from the data. This principle is clearly illustrated in an active vision system, where occluded objects can only be discovered by maneuvering the sensor. Capturing this interdependence requires a generalized state space representation that contains the information that matters for control or decision, including soft information such as prior context knowledge available to the design engineer, analyst, or decision maker. Similar to the classical information state, the new representation will be specified by a statistical model (posterior distribution) and will carry complete information concerning the uncontrolled target states and the controlled sensor states (important in a reactive adversarial environment). The information state is the basis for statistical machine learning, classification, tracking, and detection – thus the new theory will enable our development of improved algorithms for statistical inference, fusion, and sensor management (Thrusts 2.2 and 2.3). It will also lead to useful extensions of classical control theoretic notions of observability, controllability, detectability, and reachability that will elucidate design tradeoffs for distributed sensing systems.
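The coupling of control and information can be sketched with a minimal discrete information-state (posterior) recursion. The scenario, sensor model, and detection probabilities below are hypothetical illustrations, not the proposed theory: a sensor is pointed (control action) at one of four cells, and a Bayes update of the posterior over target location shows that information is acquired only through the chosen action.

```python
import numpy as np

def belief_update(belief, action, obs, p_detect=0.9, p_false=0.1):
    """Bayes update of the information state (posterior over target cell)
    after pointing the sensor at cell `action` and observing obs in {0, 1}.
    p_detect / p_false are illustrative detection and false-alarm rates."""
    cells = np.arange(belief.size)
    like = np.where(cells == action,
                    p_detect if obs else 1 - p_detect,
                    p_false if obs else 1 - p_false)
    post = like * belief
    return post / post.sum()

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

belief = np.full(4, 0.25)                    # uniform prior over 4 cells
b1 = belief_update(belief, action=2, obs=1)  # point sensor at cell 2; detection
```

After the controlled observation, the posterior concentrates on the interrogated cell and its entropy drops, which is exactly the sense in which control actions purchase information.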
Distributed information fusion
Our team has pioneered the development of multi-modal data fusion using information theoretic measures and associated surrogates, e.g., non-linear canonical correlations, latent variable estimates in graphical models, and manifold learning. This work lays the foundational framework for a new theory that can account for the value of information by developing quantitative, task-dependent performance predictions. These predictions will be essential for full information exploitation and sensor management (Thrust 2.3) in the fast-paced, small-sample-size battleground sensing environment. The predictions will be used to optimize algorithm tuning parameters (graphical model order, correlation shrinkage coefficients, and embedding dimension). These fusion methods will need to account for background variations and incomplete knowledge of statistical feature relationships across platforms and sensing modalities, all the while taking full advantage of known sensor physics.
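As a simple instance of the correlation-based fusion surrogates mentioned above, the sketch below computes the first linear canonical correlation between two synthetic "modalities" sharing a latent source. The data model, regularization value, and dimensions are hypothetical, and the ridge term `reg` stands in loosely for the shrinkage-style tuning parameters discussed in the text.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-6):
    """First canonical correlation between two modality feature blocks:
    whiten each block via Cholesky factors of its (ridge-regularized)
    covariance, then take the top singular value of the whitened
    cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return min(s[0], 1.0)

rng = np.random.default_rng(1)
z = rng.standard_normal(500)                         # shared latent source
X = np.column_stack([z, rng.standard_normal(500)])   # modality 1
Y = np.column_stack([0.8 * z + 0.6 * rng.standard_normal(500),
                     rng.standard_normal(500)])      # modality 2: noisy copy
rho = linear_cca(X, Y)   # should recover a correlation near 0.8
```

The recovered canonical correlation approximates the population cross-modality coupling (0.8 in this construction), which is the kind of quantity the performance predictions would calibrate.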
We are developing a novel approach to value-centered fusion and dimensionality reduction based on non-commutative information theory. Non-commutative information theory is the offspring of non-commutative probability, also called free probability, and it applies to data that comes in the form of large-dimensional random matrices or, more generally, any multidimensional dataset that can be cast as a “determinantal process” (T. Tao 2010). Remarkably, non-commutative information theory provides universal probabilistic limits (Marchenko-Pastur) and bounds on certain macro-properties, e.g., the spectral distribution of the data sample, that apply even if the sample size is small. Using these limits we are working to quantify the uncertainty associated with data-driven spectral decompositions such as principal components analysis (PCA), linear discriminant analysis (LDA), canonical correlation analysis (CCA) and non-linear generalizations such as kernel CCA. Free probability extensions of the main tools of Shannon’s information theory have been very recently developed, including non-commutative notions of entropy, mutual information, KL divergence, Fano bounds, Sanov bounds, and rate-distortion bounds. These tools will be effective for quantifying value of information for large-scale dimensionality reduction and sensor fusion.
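The Marchenko-Pastur limit already gives a usable small-sample sanity check for spectral decompositions like PCA. In the sketch below (dimensions, noise level, and the planted signal strength are all hypothetical), sample covariance eigenvalues of pure unit-variance noise fall under the MP bulk edge, while a sufficiently strong rank-one signal component separates above it, so the edge acts as a data-driven threshold on which components are trustworthy.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 100                       # small-sample regime: p/n = 0.5
gamma = p / n
mp_edge = (1 + np.sqrt(gamma)) ** 2   # Marchenko-Pastur upper bulk edge (unit noise)

# Pure-noise data: sample covariance eigenvalues stay (near) the MP bulk.
noise = rng.standard_normal((n, p))
ev_noise = np.linalg.eigvalsh(noise.T @ noise / n)

# Noise plus one planted rank-one signal, strong enough to escape the bulk.
u = rng.standard_normal(p)
u /= np.linalg.norm(u)
signal = noise + np.sqrt(5.0) * rng.standard_normal((n, 1)) @ u[None, :]
ev_sig = np.linalg.eigvalsh(signal.T @ signal / n)
```

Eigenvalues beyond `mp_edge` are the only ones a downstream fusion or dimensionality-reduction stage should treat as carrying signal at this sample size.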
Active Resource Management for Effective Information Exploitation
The goal of Thrust 2.3 is to develop methods for sensor resource management in multimodality- and/or mobility-enabled sensor platforms by exploiting the information provided by the learning and fusion functions (Thrust 2.1 and Thrust 2.2). The sensor manager plans ahead and controls the degrees-of-freedom (actions) of the sensor in order to achieve system objectives. Available actions that we will consider include: region of focus of attention, choice of modality and mode (for example EO vs LIDAR), transmit waveform selection, and path planning actions (platform maneuvering). Sensor management will predict the value of information resulting from each of the candidate sensing actions. This prediction will account for the uncertainty of the environment, time-varying visibility constraints (target obscuration), target behavior, and sensor resource constraints.
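A minimal myopic version of this prediction can be sketched as expected information gain per candidate action. The four-cell scenario, detection/false-alarm rates, and zero acquisition cost below are hypothetical placeholders for the richer action models in the text: for each action (cell to interrogate), the manager averages the posterior-entropy reduction over possible observations and selects the action with the highest predicted value.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_info_gain(belief, action, p_d=0.9, p_f=0.1, cost=0.0):
    """Myopic value of pointing the sensor at cell `action`: expected
    reduction in posterior entropy, minus an (illustrative) action cost."""
    cells = np.arange(belief.size)
    gain = 0.0
    for obs in (0, 1):
        like = np.where(cells == action,
                        p_d if obs else 1 - p_d,
                        p_f if obs else 1 - p_f)
        p_obs = float(like @ belief)          # predictive prob. of this obs
        post = like * belief / p_obs          # Bayes posterior given obs
        gain += p_obs * (entropy(belief) - entropy(post))
    return gain - cost

belief = np.array([0.1, 0.2, 0.6, 0.1])       # current target posterior
gains = [expected_info_gain(belief, a) for a in range(belief.size)]
best = int(np.argmax(gains))                  # action with highest predicted value
```

For this symmetric sensor model the expected gain equals the mutual information between the observation and the target state, so the selected action is the one whose outcome is most informative, not merely the most likely cell.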
We are adapting the new active information state representations developed in Thrust 2.1 to the likelihood map framework, which maps out the posterior density of the location of the target in state space. As this posterior contains all available information about target uncertainty, it is a natural component of the sensor management strategy. The flow of information through an actively managed distributed sensing system can be represented as a directed graph (with cycles) from data acquisition to multimodality fusion to decision and control functions, which may themselves be defined as operations on graphs, e.g., signal flow diagrams and structures, graphical model emulations, and iterative algorithms such as belief propagation. The global information over the graph can be represented geometrically by Fisher information for continuous variables (e.g., sensor measurements and kinematic target states) and by Chernoff information for discrete variables (e.g., contextual information). As part of Thrust 2.3, we are using this geometric perspective to develop a powerful unifying framework for reducing computational complexity and for quantifying and minimizing losses due to factors such as: local approximation to the global information state, use of myopic plan-ahead policies, conflicting or counterfactual contextual information, and decentralized multisensor planning.
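A likelihood map in its simplest gridded form can be sketched as follows. The 1-D grid, Gaussian range-sensor model, noise level, and sensor placements are all hypothetical choices for illustration: each range measurement multiplies the map by its likelihood, and observations from two vantage points resolve the range ambiguity and concentrate the posterior at the target.

```python
import numpy as np

# Likelihood map: posterior over target location on a 1-D grid,
# updated with a range sensor having (assumed) Gaussian noise.
grid = np.linspace(0.0, 10.0, 101)
post = np.full(grid.size, 1.0 / grid.size)    # uniform prior

def update(post, sensor_pos, meas, sigma=0.5):
    """Multiply the map by the range-measurement likelihood and renormalize."""
    like = np.exp(-0.5 * ((np.abs(grid - sensor_pos) - meas) / sigma) ** 2)
    post = like * post
    return post / post.sum()

true_target = 7.0
# Ranges taken from two sensor placements (noise-free for clarity):
# a single range is ambiguous (two candidate locations); two resolve it.
for sensor_pos in (0.0, 10.0):
    meas = abs(true_target - sensor_pos)
    post = update(post, sensor_pos, meas)

est = grid[np.argmax(post)]   # MAP location estimate from the likelihood map
```

Because the map is a full posterior rather than a point estimate, quantities such as its entropy or curvature (an empirical Fisher-information surrogate) are directly available to the sensor manager.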