PLEASE NOTE THAT THIS LIST OF INTERESTS IS NOW OUT OF DATE. (SHOULD BE MOVED TO THE "PAST" PART OF THIS WEBSITE.)
SEE MY GOOGLE SCHOLAR PAGE FOR CURRENT INTERESTS.
Information theory and Game theory
My current focus includes differential measures of the values of information; multi-player versions of Kelly gambling; Nash equilibria of multi-player conversations taking place through noisy communication channels (and what they imply about optimal grammars); and information geometry of noncooperative games.
See my papers on "Observational limits of decision making "and on "Hysteresis effects ... of non-cooperative games" for early work on this topic. See also http://www.santafe.edu/research/sfi-theme-weeks/citgt/.
Multi-information source optimization
In conventional optimization one samples a single Information Source (IS) to search over x's for an optimal x. An example is using simulated annealing to construct a sequence of designs x of a device, where the (single) IS is a simulator of a device designed according to x, and the goal is to find the optimal design of the device.
In practice, though, we almost always have several ISs to choose from at each step in the optimization sequence, ISs with different accuracies and costs. For example, the multiple ISs might be multiple simulators with different accuracies and computational costs. (This example is related to the field of "multi-fidelity optimization".) As another example, the multiple ISs might include experimental tests, which have monetary costs.
The problem, never before addressed, is how to dynamically choose among ISs during the optimization to achieve faster optimization at lower total cost. In particular, how can we account for the costs of the different ISs in a principled way? And how can we do this when both the inputs and the outputs of the ISs may differ? In addition to multi-fidelity optimization, this problem is related to a host of other fields, including adaptive experimental design, active learning, optimal learning, multi-portfolio management, and even multi-disciplinary optimization.
See http://www.santafe.edu/gevent/detail/science/1161/ for more details.
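As a purely illustrative toy (not any of the methods discussed at the event linked above), here is a sketch of the basic dilemma: several ISs report on the same objective with different noise levels and per-query costs, and at each step the optimizer must decide which one is worth paying for. The acquisition rule used here - pick the IS giving the largest estimated reduction in uncertainty per unit cost - and all the numbers are made up for illustration.

```python
# Toy sketch of cost-aware selection among several information sources (ISs)
# when optimizing a 1-D design variable x.  Hypothetical setup: each IS is a
# noisy evaluation of the same underlying objective, with its own noise level
# and query cost.  Nothing here is taken from the work described above.

import numpy as np

rng = np.random.default_rng(0)

def true_objective(x):
    return (x - 0.3) ** 2          # unknown to the optimizer

# (noise standard deviation, cost per query) for each information source
SOURCES = [(0.50, 1.0),            # cheap, inaccurate simulator
           (0.10, 5.0),            # expensive, accurate simulator
           (0.02, 40.0)]           # very expensive "experiment"

def query(source_idx, x):
    sigma, cost = SOURCES[source_idx]
    return true_objective(x) + rng.normal(0.0, sigma), cost

def choose_source(n_queries_so_far):
    # Toy acquisition rule: score each IS by the reduction in the variance of
    # its own running sample mean that one more query would buy, divided by
    # the query's cost; a 1.0 "prior variance" is used before any queries.
    best_idx, best_score = 0, -np.inf
    for i, (sigma, cost) in enumerate(SOURCES):
        n = n_queries_so_far[i]
        var_now = sigma ** 2 / n if n > 0 else 1.0
        var_next = sigma ** 2 / (n + 1)
        score = (var_now - var_next) / cost
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Simple random-search loop with a total budget on cost.  (In a toy like this
# the noisy y's are compared directly; a real method would model the noise.)
budget, spent = 200.0, 0.0
n_queries = [0, 0, 0]
best_x, best_y = None, np.inf
while spent < budget:
    x = rng.uniform(-1.0, 1.0)                 # candidate design
    src = choose_source(n_queries)             # which IS to pay for
    y, cost = query(src, x)
    spent += cost
    n_queries[src] += 1
    if y < best_y:
        best_x, best_y = x, y

print("queries per source:", n_queries)
print("best design found:", best_x)
```

Any serious treatment would replace the per-source sample-mean model with a joint model across sources, which is exactly where the questions above about differing costs, inputs, and outputs of the ISs arise.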
Why it may be optimal for humans to be "bounded rational"
It is now well-established that many aspects of human behavior do not come close to maximizing what the external scientist considers that human's utility function. Yet natural selection does not stop at the neck; it is hard to imagine why the behavior of the human brain should not be close to optimal, given the same exigencies that cause the rest of a human's organs to behave close to optimally. So why does the human brain (apparently) behave so sub-optimally?
I am very interested in this issue. Especially intriguing is the idea that much of what appears to us, the external scientists, to be sub-optimal human behavior is actually optimal - once we figure out what problem that behavior is actually designed for, as opposed to what problem we think it is designed for. (From a certain perspective, that is what all game-theoretic models of altruism in terms of repeated games try to do.) See my papers on hedonic prods and persona games for some early work along these lines.
Exploiting machine learning to improve Optimization and Monte Carlo estimation (rather than the other way around)
Over the last decade or two, Machine Learning has benefited considerably by exploiting leading-edge techniques from Optimization and Monte Carlo. I am interested in going the other way, exploiting techniques from Machine Learning to improve Optimization algorithms and Monte Carlo methods.
Some of my earlier work along these lines has exploited supervised learning and cross-validation to post-process Monte Carlo samples. In computer experiments, this approach - a combination of control variates and cross-validation - never hurt convergence of Monte Carlo integral estimates, and often helped it considerably. (See my papers on "Stacked Monte Carlo".)
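Here is a minimal sketch of that general idea, under simplifying assumptions that are not taken from those papers (a one-dimensional integrand, a uniform sampling distribution, and a polynomial surrogate): cross-validation picks the surrogate's complexity, and the surrogate is then used as a control variate whose exact expectation is computable.

```python
# Minimal sketch of combining control variates with cross-validation to
# post-process Monte Carlo samples.  The integrand, the surrogate family, and
# the estimator details are illustrative stand-ins, not those of the papers.

import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.exp(np.sin(3 * x))        # integrand; we want E[f] under U[0,1]

# Plain Monte Carlo samples.
n = 200
x = rng.uniform(0.0, 1.0, size=n)
y = f(x)

def cv_error(degree, n_folds=5):
    # K-fold cross-validation error of a polynomial surrogate of given degree.
    idx = rng.permutation(n)
    folds = np.array_split(idx, n_folds)
    err = 0.0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        coeffs = np.polyfit(x[train], y[train], degree)
        err += np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    return err / n_folds

# Pick the surrogate complexity by cross-validation, then refit on all data.
best_degree = min(range(1, 8), key=cv_error)
coeffs = np.polyfit(x, y, best_degree)

# Exact expectation of the polynomial surrogate under U[0,1]:
# E[g] = sum_k c_k / (k + 1) for g(x) = sum_k c_k x^k.
powers = np.arange(best_degree, -1, -1)
exact_Eg = np.sum(coeffs / (powers + 1))

plain_estimate = np.mean(y)
cv_estimate = exact_Eg + np.mean(y - np.polyval(coeffs, x))

print("plain Monte Carlo estimate :", plain_estimate)
print("control-variate estimate   :", cv_estimate)
```

The papers treat the reuse of the same samples for fitting the surrogate and for the correction term more carefully than this; the sketch only conveys the control-variate-plus-cross-validation combination.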
Other earlier work has used a formal "isomorphism" between Monte Carlo optimization and supervised machine learning to allow us to "translate" any technique from supervised learning into a technique applicable to Monte Carlo optimization. In particular, by using such a translation we can apply cross-validation during simulated annealing, to dynamically set the best temperature.
See my papers on "immediate sampling probability collectives", and associated eye-candy at http://ti.arc.nasa.gov/tech/dash/intelligent-data-understanding/probcol/demonstrations/.
Extending the domain of game theory to more realistic field scenarios
There are several challenges facing the use of current game theory in many complicated "field" (i.e., non-laboratory) scenarios.
i) The first of these is that in many real-world, field scenarios, the set of pure strategies of each of the players cannot be explicitly listed by us, the external scientists trying to predict the players' behavior. So we need a way to predict the joint behavior of interacting people without knowing their "move spaces". See my work on predictive unstructured bargaining for an early attempt to overcome this shortcoming by exploiting ideas from unstructured bargaining.
ii) A second challenge is that in many field scenarios, players do not move according to some pre-defined sequence, but rather in an event-driven, stochastically determined sequence. Formally speaking, extensive-form game theory can model such scenarios. However, for anything other than very simple scenarios, such extensive-form models are intractable. See my work on event-driven game theory for an early attempt to overcome this challenge.
iii) A final challenge arises from the use of "point solution concepts" by (current) noncooperative game theory. This means that game theory's prediction for any game is typically a set of joint behaviors of the players (their "mixed strategy profiles") that has measure zero in the space of all possible behaviors. It also means that current game theory does not assign relative probabilities to the behaviors within that measure-zero set.
In many respects, this differs from how physical scientific theories are typically formulated. Since the 19th century, science has made predictions using probability and statistics, assigning zero probability only to those events that are physically impossible. In the context of game theory, this would mean that we explicitly assign probabilities to all behaviors (i.e., all mixed strategy profiles), and have those values be non-zero for all behaviors (since all are physically possible).
In addition to its formal motivation, this approach to predicting human behavior has a major practical benefit. In particular, it reduces the problem of how to "steer" human behavior to maximize expected social welfare (e.g., when regulating an economy) to a problem of conventional control theory, potentially allowing the powerful results of closed-loop control to be applied.
Some work in econometrics can be seen as generating a probability distribution over all behaviors. However, this work is more often motivated by the practical need to estimate parameters of a game from noisy data than by first-principles reasoning. See my papers on Predictive Game Theory (PGT) for an early attempt to reformulate game theory in terms of a "probability-distribution-valued solution concept" using first-principles reasoning.
Interestingly, conventional game theory's focus on point-valued solution concepts leads to numerical approximations based on fixed-point solvers (e.g., homotopy methods), whereas PGT leads to Monte Carlo approaches. Such use of Monte Carlo generates a large set of what are, in essence, agent-based simulations. However, these simulations are generated by sampling an explicitly specified distribution, one that is derived from a mathematical model. This contrasts with some agent-based modeling, which instead builds only a few agent-based simulations in a relatively ad hoc manner. Using PGT to generate the agent-based simulations means one can leverage all the power of having an underlying mathematical model, so that the lessons learned from investigating instantiations of the model in a particular domain are broadly applicable beyond that domain.
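As a toy illustration of that contrast (not the distribution actually derived in the PGT papers), here is a Metropolis sampler over the mixed-strategy profiles of a 2x2 game. The assumed density simply puts more probability on profiles with higher total expected payoff; the game, the density, and the "rationality" parameter beta are all stand-ins.

```python
# Toy illustration: sample a probability density over mixed-strategy profiles
# with Monte Carlo, rather than solving for a point-valued equilibrium.  The
# density used (exponential in the total expected payoff, scaled by beta) is
# only an illustrative stand-in, not a distribution from the PGT papers.

import numpy as np

rng = np.random.default_rng(3)

# Payoff matrices of a symmetric 2x2 game (row player's payoffs in A; the
# column player's payoffs are then A.T).  A prisoner's-dilemma-like stand-in.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
B = A.T

beta = 2.0    # higher beta concentrates probability on high-payoff profiles

def log_density(p, q):
    # p, q = probability each player puts on their first pure strategy.
    row = np.array([p, 1 - p])
    col = np.array([q, 1 - q])
    return beta * (row @ A @ col + row @ B @ col)

# Metropolis random walk over the unit square of mixed-strategy profiles;
# proposals that leave the square are simply rejected.
samples = []
p, q = 0.5, 0.5
for _ in range(20000):
    p_new = p + rng.normal(0.0, 0.1)
    q_new = q + rng.normal(0.0, 0.1)
    if 0.0 <= p_new <= 1.0 and 0.0 <= q_new <= 1.0:
        if np.log(rng.uniform()) < log_density(p_new, q_new) - log_density(p, q):
            p, q = p_new, q_new
    samples.append((p, q))

samples = np.array(samples[5000:])          # discard burn-in
print("mean predicted profile:", samples.mean(axis=0))
print("spread of predicted profiles:", samples.std(axis=0))
```

The output is a full distribution over profiles - a mean prediction together with a spread - rather than a single equilibrium point.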
The second law and growth of complexity
Many biological systems seem driven to have high complexity, often accompanied by low entropy. Many processes can drive these increases, including natural selection, auto-catalysis, constructive neutral evolution, and embryogenesis. The second law of thermodynamics allows these complexity-increasing processes, since biological systems are open. However, it may be that the second law in fact drives these processes, i.e., drives biological systems to have low entropy. This possibility is known as “Schrodinger’s paradox”.
Ultimately, one would like to address Schrodinger's paradox in terms of dynamical systems theory. To be more precise, let p(z) refer to the phase space density of a system over phase space position z, and let H be the Hamiltonian governing its dynamics. Then our goal is to understand what characteristics of H determine whether the resultant dynamics of p has an attractor throughout which p has high complexity, and to understand how the dynamics of the complexity of p is determined by H.
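For concreteness, the dynamics of p referred to here is the usual Liouville flow generated by H. Writing z = (q, π), with π for the momenta so as not to confuse them with the density p:

```latex
% Liouville equation for the phase-space density p(z, t), with z = (q, \pi).
\frac{\partial p(z,t)}{\partial t}
  \;=\; \{H, p\}
  \;=\; \sum_i \left(
      \frac{\partial H}{\partial q_i}\,\frac{\partial p}{\partial \pi_i}
      \;-\;
      \frac{\partial H}{\partial \pi_i}\,\frac{\partial p}{\partial q_i}
      \right).
```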
To illustrate this dynamical systems perspective in a biological context, say our system is initially described by a p within a basin of attraction of a high-complexity attractor. Say that the system experiences an external shock knocking it to another point in the basin that has lower complexity. After the shock, the system would start increasing its complexity back to the value it had before the shock. Examples of this arguably include asteroid impacts, volcanic eruptions, etc., that cause a mass extinction, thereby reducing the complexity of the terrestrial biosphere, after which the biosphere's complexity grows back. Note though that if the shock were big enough to knock the system completely out of the basin of attraction, then the system would "die", and not increase its complexity back to what it was.
The goal then is to investigate how the Hamiltonian of a system determines its high-complexity attractors, and in particular how its thermodynamic properties do so.
To start to address this we need a framework that encompasses both thermodynamics and a formalization of "complexity". A natural choice is Shannon's information theory, since, as Jaynes showed, it can be used to express statistical physics (thermodynamics), and it would seem to provide a possible measure of complexity. However, Shannon entropy is often dismissed as a measure of complexity, since it assigns different values to gases and crystals, while both are typically viewed as non-complex. This would seem to rule out the use of information theory to analyze Schrodinger's paradox.
However, like so much work with entropy, this result is based on using a uniform prior in the definition of entropy. In a recent paper I have shown that if one is careful in specifying the prior distribution in the entropy measure, that measure actually assigns a low complexity to both crystals and gases. This opens up the possibility of using information theory to address Schrodinger's paradox.
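Schematically, the distinction is between entropy defined with an implicit uniform prior and entropy defined relative to an explicit prior distribution q (the particular choice of q argued for in that paper is not reproduced here):

```latex
% Entropy with an implicit uniform prior vs. entropy relative to a prior q.
S_{\mathrm{uniform}}(p) \;=\; -\int dz\; p(z)\,\ln p(z),
\qquad
S_{q}(p) \;=\; -\int dz\; p(z)\,\ln\!\frac{p(z)}{q(z)}\,.
```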
The mathematical underpinnings of reality
Ever since Wheeler introduced his idea of "it from bit", there has been great interest in whether the ultimate nature of reality can be cast in purely information-theoretic terms. Much of the work on this topic has used Shannon information theory. However, as Shannon emphasized, his theory is purely concerned with encoding and decoding a message. It is not concerned with what the message means, i.e., with what it says concerning some physical object or event. This non-semantic nature of Shannon information theory suggests that to fully capture the true physical nature of information we must construct a semantic theory of information.
Most semantic information theories have required a utility function over the event space, with the different utility values for different events providing their semantic content. However, it is not clear what the "utility function" is for the physical universe.
In a series of articles, I have started to investigate a semantic information theory that does not involve utility functions. Preliminary work has uncovered connections between this semantic information theory and the theory of Turing machines. In particular, there are Halting-theorem-like results that limit the semantic information that a set of observation devices can have about external systems and about one another; one of these results can be viewed as a non-quantum-mechanical "uncertainty principle".
We can define a physical reality to be any set of physical systems and observation devices. Accordingly, the impossibility results mentioned above provide (semantic) information-theoretic limits on the nature of physical reality.
This preliminary work is related to Aumann's work on knowledge operators, and to the work of many researchers on the relation between computer science and physics. See my papers and talk on inference devices for early work on this topic. Here is a presentation of this work: http://davidwolpert.weebly.com/uploads/1/7/1/8/17187380/new.inference.devices.8.short.pdf
Evolution of Technology
Any technology can be described by a blueprint specifying its components and their interaction. That blueprint determines an actual device. The operational characteristics of that device that we are interested in constitute its specs. For example, a smartphone’s blueprint determines its specs.
A population of multiple related blueprints evolves stochastically in blueprint–spec space. It does this by combining multiple members of the current population, while modifying some of them (perhaps drastically), to create new blueprints. This evolution is driven to create blueprints whose specs are more “fit”, i.e., more desirable to at least one inventor creating new blueprints. While clearly similar to both population biology and economics, this stochastic process differs from both in major ways.
A preliminary goal is to model (and fit to historical data) the evolution purely as a driven stochastic process in spec space, without considering any underlying blueprint space. ("Performance curves" can be viewed as such models for the simple case where the spec space is one-dimensional.) More ambitiously, the goal is to model the entire blueprint–spec evolutionary process.
See /uploads/1/7/1/8/17187380/presentation.to.strat.latency.workshop.2012.pdf for a high-level overview.
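As a purely illustrative toy (not a model taken from that overview), here is a driven stochastic process of the first, spec-space-only kind: designs are points in a two-dimensional spec space, new designs are made by recombining and perturbing existing ones, and a new design survives only if some "inventor" with their own preferences over the specs finds it better than its parents.

```python
# Minimal toy of a driven stochastic process in a 2-D "spec space" (no
# underlying blueprint space).  The recombination, perturbation, and
# acceptance rules are all assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(4)

population = [np.zeros(2)]                # initial design with specs (0, 0)

for step in range(2000):
    # Recombine two existing designs component-wise, then perturb the result.
    a, b = rng.choice(len(population), size=2)
    mask = rng.integers(0, 2, size=2).astype(bool)
    child = np.where(mask, population[a], population[b])
    child = child + rng.normal(0.0, 0.1, size=2)

    # An "inventor" with random preferences over the two specs accepts the
    # child only if it beats both parents on that inventor's weighted score.
    prefs = rng.dirichlet([1.0, 1.0])
    if prefs @ child > max(prefs @ population[a], prefs @ population[b]):
        population.append(child)

specs = np.array(population)
print("designs accepted:", len(specs))
print("best spec values reached:", specs.max(axis=0))
```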