The core problem that drives my research is to develop computational agents that exhibit human-level intelligence: real-time systems that persist for long periods of time while autonomously contending with, and improving performance on, a variety of complex problems. General intelligence is a fascinating pursuit in its own right, but also supports diverse applications that are rich, interactive, and adaptive, such as training systems and personal robotic assistants.
An important capability that humans have is to integrate higher-order knowledge with lower-level cognitive processing in order to make intelligent decisions. Research in this space is challenging and rare, as it requires bridging numerous processing and representational dichotomies, including symbolic vs. sub-symbolic, discrete vs. continuous, and online/incremental vs. batch.
Online machine learning is crucial for many low-level cognitive processes, including real-time perceptual pattern recognition and continuous control. However, few methods scale to large numbers of high-dimensional examples while supporting incremental approximation of a wide variety of functions. The Boundary Forest (BF) is a novel instance-based algorithm that satisfies these properties, and, as a bonus, is transparent and easy to implement.
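To give a flavor of the approach, the following is a minimal sketch of a single boundary tree for online classification, written from the high-level description above; the class name, `max_children` parameter, and Euclidean metric are illustrative choices, not the paper's exact specification. A query greedily descends toward the stored example closest to the input; training adds a new node only when the current tree misclassifies the example.

```python
import math

class BoundaryTree:
    """Minimal sketch of one boundary tree (a Boundary Forest would hold
    several, trained on different insertion orders)."""

    def __init__(self, point, label, max_children=5):
        self.point, self.label = point, label
        self.children = []
        self.max_children = max_children

    @staticmethod
    def _dist(a, b):
        return math.dist(a, b)

    def _closest_node(self, point):
        # Greedily descend toward the node closest to the query point.
        node = self
        while True:
            candidates = list(node.children)
            if len(node.children) < node.max_children:
                candidates.append(node)  # node itself competes if not full
            best = min(candidates, key=lambda n: self._dist(n.point, point))
            if best is node:
                return node
            node = best

    def query(self, point):
        return self._closest_node(point).label

    def train(self, point, label):
        # Store the example only if the tree currently gets it wrong,
        # attaching it to the locally closest node.
        node = self._closest_node(point)
        if node.label != label:
            node.children.append(BoundaryTree(point, label, self.max_children))
```

Because each query touches only a root-to-leaf path, both training and prediction remain fast as examples accumulate, which is what makes the method attractive for online, real-time settings.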
Representative publications:
The Alternating Direction Method of Multipliers (ADMM) is a classic algorithm for convex optimization that has seen an upsurge in recent interest, primarily because it is well suited for distributed implementations. Optimization is a useful framework in the context of cognitive systems because many processes (e.g. constraint satisfaction, planning, vision/perception) can be formulated as optimizing an objective function. While ADMM was developed to solve convex problems, its updates remain well-defined for general objectives, and thus I have been investigating its efficiency and efficacy on a selection of non-convex problems.
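As a concrete (convex) instance of the splitting scheme, here is a short sketch of ADMM applied to the lasso problem, min ½‖Ax − b‖² + λ‖x‖₁; the function names and default parameters are illustrative. Each iteration alternates a ridge-like x-update, a soft-thresholding z-update, and a dual ascent on u.

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k * ||.||_1, applied element-wise.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Sketch of ADMM for the lasso; returns the sparse iterate z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # factor once, reuse every iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # smooth subproblem
        z = soft_threshold(x + u, lam / rho)           # nonsmooth subproblem
        u = u + x - z                                  # dual update
    return z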
Representative publications:
Reinforcement Learning (RL) is a key component of many agent architectures, as it is a fully general, online and incremental algorithm that holds many similarities to neural processes in the brain. An RL agent exploits experience in the world to tune a value function, which is a mapping from state-action pairs to an expectation of future reward. The value function then informs action selection, such that the agent maximizes its expected reward. Although RL has been studied extensively, there has been little research on its integration within a cognitive system, or on how an agent's task experience and background knowledge together shape the value function.
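The value-function tuning described above can be illustrated with the standard tabular Q-learning update (one common RL algorithm; the function and state names below are illustrative). Each experienced transition nudges the stored estimate toward the observed reward plus the discounted value of the best next action.

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step on the (state, action) -> value table Q.

    alpha is the learning rate, gamma the discount factor."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Q is typically a defaultdict(float), so unseen pairs start at 0.
```

In a cognitive architecture, the interesting questions lie outside this update: how states and actions are represented, and how background knowledge initializes or generalizes the table.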
Representative publications:
Human-level intelligence must contend with the infinite complexities of the real world, including sensing across multiple modalities (e.g., vision, audition), localization, and actuation to achieve goals.
How can contemporary work in machine learning and cognitive architecture be used in mobile music interactions? I have integrated the Soar cognitive architecture with the urMus meta-environment and explored learning for music generation and autonomous accompaniment.
Representative publications:
I have been developing computational and research methods for building mobile robots that can persist for long periods of time in human-centric environments (e.g., offices, homes).
Representative publications:
A vital aspect of human-level intelligence is long-term memory: people are able to accumulate large amounts of experience and access it later in flexible ways, all while dealing, in real time, with the myriad challenges of everyday life. Long-term memory is crucial to human-level reasoning, as illustrated by the deficits in patients afflicted with anterograde amnesia (e.g. H.M.); however, no computational systems support this capability over long periods of time. The work described below has been implemented within the Soar Cognitive Architecture, which is open-source, cross-platform, and comes with comprehensive documentation and tutorials.
In the psychological literature, semantic knowledge is characterized as general information about the world that is independent of the context in which it was originally learned. An agent endowed with a long-term semantic memory is able to access large stores of information (e.g. lexicon, ontology, fact base) that may be useful across many situations.
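To make the idea concrete, here is a toy sketch of a semantic store as (subject, relation, object) triples with wildcard queries; Soar's semantic memory actually operates over richer graph structures with cue-based matching, so the class and method names here are purely illustrative.

```python
class SemanticMemory:
    """Toy long-term semantic store: facts as (subject, relation, object)
    triples, retrieved by partial patterns (None acts as a wildcard)."""

    def __init__(self):
        self.triples = set()

    def add(self, s, r, o):
        self.triples.add((s, r, o))

    def query(self, s=None, r=None, o=None):
        # Return every stored fact consistent with the (possibly partial) cue.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (r is None or t[1] == r)
                and (o is None or t[2] == o)]
```

The key property is context-independence: a fact such as ("dog", "isa", "mammal") is retrievable in any situation, regardless of where it was learned.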
Representative publications:
In the psychological literature, episodic memory is a temporally organized sequence of detailed events as originally experienced by an agent. Prior work has shown that agents endowed with flexible access to prior experience gain numerous cognitive capabilities, such as virtual sensing and action modeling. However, prior work has not produced a task-independent episodic memory that maintains real-time performance across long agent lifetimes.
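The contract of such a memory can be sketched in miniature: episodes are recorded in temporal order, and a partial cue retrieves the best-matching (and, on ties, most recent) episode. This toy version uses flat feature dictionaries and a naive linear scan, whereas the actual architectural work concerns scaling retrieval over graph-structured episodes; all names here are illustrative.

```python
class EpisodicMemory:
    """Toy episodic store: episodes are feature dicts kept in temporal
    order; retrieval returns the most recent best match to a cue."""

    def __init__(self):
        self.episodes = []

    def record(self, features):
        self.episodes.append(dict(features))

    def retrieve(self, cue):
        best, best_score = None, -1
        for ep in self.episodes:
            # Score = number of cue features the episode matches exactly.
            score = sum(1 for k, v in cue.items() if ep.get(k) == v)
            if score >= best_score:  # '>=' biases ties toward recency
                best, best_score = ep, score
        return best
```

A linear scan like this is exactly what cannot persist over a long lifetime, which motivates the indexing and incremental-matching research described above.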
Representative publications:
In the AI literature, the utility problem refers to the situation in which learning more knowledge can harm an agent's problem-solving performance; one common strategy to address this issue is to incrementally forget a subset of learned knowledge. However, prior work has demonstrated the challenge inherent in devising forgetting policies that work well across problem domains, effectively balancing the task performance of agents with reductions in retrieval time and storage requirements of learned knowledge.
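One family of forgetting policies bases the decision on activation decay in the style of ACT-R's base-level learning: an item's activation is a decaying sum over its past accesses, and items whose activation falls below a threshold are removed. The sketch below is a simplified illustration of that idea; the function names, decay rate, and threshold are illustrative, not the tuned values from any published system.

```python
import math

def base_level_activation(access_times, now, d=0.5):
    """ACT-R-style base-level activation: log of summed power-law decay
    over past access times (requires at least one access before 'now')."""
    return math.log(sum((now - t) ** (-d) for t in access_times if now > t))

def forget(memory, now, threshold=-2.0):
    """Keep only items whose activation exceeds the threshold.

    memory maps each item to its list of access times."""
    return {item: times for item, times in memory.items()
            if base_level_activation(times, now) > threshold}
```

The research challenge noted above is precisely that a single decay rate and threshold rarely balance task performance against retrieval time and storage across different problem domains.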
Representative publications: