X-Organisation: Department of Philosophy
University of Amsterdam
Nieuwe Doelenstraat 15
NL-1012 CP Amsterdam
The Netherlands
X-Phone: +31 20 525 4500
X-Fax: +31 20 525 4503
Date: Fri, 22 Dec 1995 15:16:37 +0800
To: ITALLC96@cs.indiana.edu
From: gerbrand@illc.uva.nl (Jelle Gerbrandy)
Subject: ITALLC96 Submission
Please find my submission attached to this message.
_____________________________________________________
Jelle Gerbrandy
ILLC/Department of Philosophy, Nieuwe Doelenstraat 15, 1012 CP, Amsterdam
tel.: +31-20-525 4551, e-mail: gerbrand@illc.uva.nl
\documentstyle[titlepage]{article}
\newtheorem{proposition}{Proposition}[section]
\newenvironment{prop}{\begin{proposition}\rm}{\end{proposition}}
\newtheorem{definition}[proposition]{Definition}
\newenvironment{defn}{\begin{definition}\rm}{\end{definition}}
\begin{document}
\title{\vspace{3cm}
Dynamic Epistemic Logic\\
Abstract
\vspace{2cm} }
\author{Jelle Gerbrandy\\ \\
ILLC/Department of Philosophy\\
Nieuwe Doelenstraat 15\\
1012 CP, Amsterdam\\
tel.: +31-20-525 4551\\
e-mail: {\tt gerbrand@illc.uva.nl}\\}
\date{}
\maketitle
\section{Introduction}
This paper is an attempt to combine two traditions: epistemic logic and
dynamic semantics.
Dynamic semantics is a branch of formal semantics that is concerned with
{\em
change}, and more in particular with change of information.
The motivation for, and applications of, this `paradigm-shift' can be found in
areas such as semantics of programming languages, protocol analysis in
computer
science, default logic, pragmatics of natural language and of man-computer
interaction, anaphora resolution, presupposition theory and discourse
analysis. The main idea is that the meaning of a syntactical unit---be it a
sentence of natural language or a computer program---is best described as
the change it brings about in the information state of a human being or a
computer.
This paper is firmly rooted in this paradigm, but at the same time
it is much influenced by another tradition: that of the analysis of
epistemic logic in terms of multi-modal Kripke models.
Because it is
hard to define operations of information change on the classical
representation in terms of Kripke models, we will provide
an alternative, but equivalent model of information. Then we will add
operators
to the language of classical epistemic logic that express information
change,
and we will provide a dynamic semantics for this language on the basis of
the
new modeling. The new operators express concepts such as `learning that...'
and `it becomes common knowledge that...' The resulting semantics is
compared
with Kripke semantics, with the semantics of the logic of common knowledge
in
terms of knowledge structures of
Fagin, Halpern and Vardi (1991), and with update semantics of
Veltman (1990).
\section{Static Modal Semantics}
The classical language of multi-modal logic is the following:
\begin{defn}
Let ${\cal A}$ be a set of agents and ${\cal P}$ a set of propositional
variables. The language $\cal L$ is the smallest set such that ${\cal
P}\subseteq
\cal L$, and if $\phi, \psi \in \cal L$, then $\phi \wedge
\psi$, $\neg \phi$ and $\Box_a \phi$ for each
$a \in \cal A$ are in $\cal L$.
\end{defn}
One way of providing a semantics for this language is in terms of Kripke
models. A {\em pointed Kripke model} is a
quadruple $\langle W, \{R_a\}_{a \in \cal A}, V, w\rangle$, where $W$ is a
set
of possible worlds, $w$ is a distinguished element of $W$ (the point of
evaluation), $R_a$ is a relation on $W$ for each $a
\in \cal A$, $V$ is a valuation function that assigns a truth-value (either
$0$
or $1$) to each pair of a world $v \in W$ and a propositional variable $p
\in
\cal P$.
Intuitively, given a Kripke model and a world $w$ in it, the information of
an
agent $a$ in
$w$ is represented by the set of worlds that are accessible from $w$ via
$R_a$; these worlds are the worlds compatible with $a$'s information in
$w$.
Kripke models are studied extensively, and they provide a very perspicuous
semantics for the classical language of epistemic logic. Unfortunately, it
turns out that Kripke models are not very suitable structures for
defining
operations that correspond to intuitive notions of information change. The
problem is that possible worlds and relations play a double role. For
example, the same world might be accessible by two different relations,
which
means that it occurs as a possibility in two different information states.
This makes it hard to distinguish the information of different agents in
the
model.
To avoid this problem, we use a different (but equivalent) representation.
\begin{defn} {\em Situations}\\
Let $\cal A$, a set of agents, and $\cal P$, a set of propositional
variables, be given. The class of situations is the largest class such
that:
\begin{itemize}
\item A situation $w$ is a function that assigns to each propositional
variable $p \in \cal P$ a truth value $w(p) \in \{0,1\}$, and to each
agent $a
\in
\cal A$ an information state $w(a)$.
\item
An information state $\sigma$ is a set of situations.
\end{itemize}
\end{defn}
So, a situation $w$ characterizes which propositions are true and which
are
false, and moreover it characterizes what the information is that each of
the
agents has in that situation as an information state $\sigma$, which
consists
of the set of situations the agent considers possible in $w$.
This definition of situations should be read to range over the universe of
non-well-founded sets in the sense of Aczel (1988).\footnote{To be
precise, the underlying set-theory is axiomatized by $ZFC^-$
(the Zermelo-Fraenkel axioms minus the axiom of foundation) plus Aczel's
Anti-Foundation Axiom (AFA).} The form of this
definition, defining a set co-inductively, is a more or less standard form
of
definition in non-well-founded set theory.
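To fix intuitions, here is one finite, well-founded way to render situations as concrete data. This is a sketch only: genuinely circular situations live in the non-well-founded universe and cannot be built by ordinary recursion, and the encoding of a situation as a pair of tuples is our own illustrative convention, not part of the definition.

```python
# A situation as a pair (val, info): val is a tuple of
# (propositional variable, truth-value) pairs, and info a tuple of
# (agent, information state) pairs, an information state being a
# frozenset of situations.  Only well-founded situations are expressible.
u = ((("p", 0),), (("a", frozenset()),))        # p false, a knows nothing
v = ((("p", 1),), (("a", frozenset()),))        # p true, a knows nothing
w = ((("p", 1),), (("a", frozenset({u, v})),))  # p true; a is unsure about p

assert dict(w[1])["a"] == frozenset({u, v})
```

Tuples and frozensets are used (rather than dicts and sets) so that situations are hashable and can themselves be members of information states.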
It turns out that using situations instead of Kripke-models does not
make an essential logical difference.
\begin{defn}
Let ${\cal K} = (W,\{R_a\}_{a \in {\cal A}}, V, w)$ be a pointed Kripke
model.
\begin{itemize}
\item
A {\em labeling} of $\cal K$ is a function $l$ that assigns to
each world $v
\in W$ a function with $\cal P \cup A$ as its domain, such that $l(v)(p) =
V(v)(p)$ for each $p \in \cal P$, and $l(v)(a) = \{l(u) \mid v R_a u \}$
for
each $a \in \cal A$.
\item
If ${\cal K} = (W,\{R_a\}_{a \in {\cal A}}, V, w)$ is a
Kripke model, and $l$ is a labeling for it, we say that $l(w)$ is its
{\em solution}, and that
${\cal K}$ is a {\em picture} of $l(w)$.
\end{itemize}
\end{defn}
A labeling of a Kripke model assigns to each possible world $w$ in that
model a
situation in such a way that this situation assigns the same truth-values
to
the propositional variables, and assigns to each agent $a$ the set of
situations that are labels of worlds accessible from $w$ by $R_a$.
The notions of solution and picture give us a correspondence between
Kripke-models and situations:
\begin{prop} $ $
\begin{itemize}
\item
Each Kripke model has a unique solution, which is a situation.
\item Each situation has a Kripke model as its picture.
\item Two Kripke-models are pictures of the same situation iff they are
bisimilar.
\end{itemize}
\end{prop}
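For a finite Kripke model without cycles, the labeling can be computed by plain recursion; circular models would need the full non-well-founded apparatus. The sketch below (our own illustration; the function and argument names are hypothetical) also shows the third clause of the proposition at work: two bisimilar acyclic models receive the same solution.

```python
# Compute the solution of a finite *acyclic* pointed Kripke model.
# rel maps each agent to its accessibility relation, given as a dict
# world -> set of worlds; val maps each world to a dict {p: 0/1}.
# The resulting situation is a pair (val, info) of sorted tuples.
def solve(world, rel, val):
    info = tuple(sorted(
        (a, frozenset(solve(u, rel, val) for u in r.get(world, ())))
        for a, r in rel.items()))
    return (tuple(sorted(val[world].items())), info)

# Two bisimilar (acyclic) models get the same solution:
rel1 = {"a": {"w": {"u"}}}
val1 = {"w": {"p": 1}, "u": {"p": 1}}
rel2 = {"a": {"w": {"u1", "u2"}}}
val2 = {"w": {"p": 1}, "u1": {"p": 1}, "u2": {"p": 1}}
assert solve("w", rel1, val1) == solve("w", rel2, val2)
```

The collapsing happens automatically: bisimilar worlds are labeled by equal situations, and the `frozenset` identifies them.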
Defining truth of a formula in a Kripke model in the standard way,
and truth in a situation in the obvious analogous way, it holds that:
\begin{prop}
$\phi$ is true in a situation $w$ iff it is true in each picture of $w$.
\end{prop}
So a situation and a picture of it are descriptively equivalent.
This means that one can see situations as representatives of
equivalence classes of Kripke models under bisimulation.
From this point of view,
giving a semantics for modal logic in terms of situations is an improvement
on
the semantics in terms of Kripke models. Since bisimilar models cannot be
distinguished by the sentences of modal logic, we might as well collapse
equivalence classes under bisimulation into a single model, which is
precisely what we have done. The price to pay is the use of a non-standard
set
theory that is not very familiar to most researchers in the field.
\section{Updating}
We will now proceed to define operations on situations that
correspond to some intuitive notions of change in the information state of
an agent.
The kind of information change we will model is that of agents
getting new information. To be able to describe such changes we add a
set of operators $L_a$ to the language, one for each agent $a \in \cal A$.
Intuitively, a sentence $L_a \phi$, `$a$ learns that $\phi$', means that $a$
gets
the information that
$\phi$.
Typically, if an agent gets a certain piece of information, her information
state will change. To model this, we will characterize the `meaning' of a
sentence $L_a \phi$ as a relation between situations. For the sake of
uniformity,
we extend this relational interpretation to all sentences of our language.
We
will see that interpreting classical sentences relationally does not
make any essential logical difference.
So, with each sentence $\phi$ of $\cal L$, we associate a relation $[\phi]$
over situations as its interpretation. Before giving the
definitions for the interpretation function $[\;]$, I introduce a
notational
convention. If $w$ and $v$ are situations and $a \in \cal A$, then
$w[a]v$ expresses that $v$ differs at most from
$w$ in the information state that it assigns to $a$; similarly, for
${\cal B} \subseteq {\cal A}$, $w[{\cal B}]v$ expresses that $v$ differs at
most from $w$ in the information states that it assigns to the agents in
$\cal B$.
The interpretation $[\phi]$ of a sentence $\phi$ is defined as
follows:
\begin{defn}\label{upddef}{\em Relational interpretation}
\begin{eqnarray*}
w[p]v &\mbox{ iff }& w = v \mbox{ and } w(p) = 1\\
w[\phi \wedge \psi]v &\mbox{ iff }& \exists u: w[\phi]u \mbox{ and }
u[\psi]v.\\
w[\neg \phi]v &\mbox{ iff }& w = v \mbox{ and not } w[\phi]v \\
w[\Box_a \phi]v &\mbox{ iff }& w = v \mbox{ and for all } u \in w(a):
u[\phi]u\\
w[L_a \phi]v &\mbox{ iff }& w[a]v \mbox{ and } v(a)= \{ u \mid
\exists u' \in w(a) \mbox{ such that } u'[\phi]u \}
\end{eqnarray*}
\end{defn}
It is easy to see that $[\phi]$ is a partial function for each $\phi$. So
if
there is a situation
$v$ such that $w[\phi]v$, we can write $w[\phi]$ for the situation $v$
(using
so-called postfix notation), and if there is no such $v$, we say that
$w[\phi]$ is not defined. Furthermore,
we call $w[\phi]$ the update of $w$ with $\phi$.
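The clauses above can be prototyped directly for finite, well-founded situations. In the sketch below (all names are our own; situations are pairs of a valuation tuple and an info tuple, formulas are nested tuples such as `("learn", "a", "p")` for $L_a p$), the fact that each $[\phi]$ is a partial function lets us return the unique update when it exists and `None` otherwise.

```python
# A prototype of the relational interpretation on finite, well-founded
# situations.  A situation is a pair (val, info); formulas are nested
# tuples: an atom is a string, ("and", f, g), ("not", f),
# ("box", a, f) for Box_a f, ("learn", a, f) for L_a f.
def update(w, phi):
    """Return the unique v with w[phi]v, or None if none exists."""
    val, info = w
    if isinstance(phi, str):               # atom: a test on the facts
        return w if dict(val)[phi] == 1 else None
    op = phi[0]
    if op == "and":                        # sequential composition
        u = update(w, phi[1])
        return update(u, phi[2]) if u is not None else None
    if op == "not":                        # test that w[phi]w fails
        return w if update(w, phi[1]) != w else None
    if op == "box":                        # test on a's information state
        a, body = phi[1], phi[2]
        return w if all(update(u, body) == u
                        for u in dict(info)[a]) else None
    if op == "learn":                      # L_a: update each situation in w(a)
        a, body = phi[1], phi[2]
        new = frozenset(v for v in (update(u, body) for u in dict(info)[a])
                        if v is not None)
        return (val, tuple((b, new if b == a else s) for b, s in info))
    raise ValueError(phi)

# a is unsure about p; after learning p, only the p-situation survives:
u1 = ((("p", 1),), (("a", frozenset()),))
u0 = ((("p", 0),), (("a", frozenset()),))
w = ((("p", 1),), (("a", frozenset({u1, u0})),))
v = update(w, ("learn", "a", "p"))
assert dict(v[1])["a"] == frozenset({u1})
```

Note that the clause for negation tests whether $w[\phi]w$ fails, not merely whether $w[\phi]$ is undefined; for sentences containing $L$-operators the two come apart.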
The interpretation of an atomic sentence $p$ is the identity relation
restricted to situations in which $p$ is assigned the
value 1. An update with an atomic proposition $p$ either returns the same
situation or is undefined. This reflects a choice we have made. We assume
that the facts in the world do not change; we will only look at change of
information that agents have about a static world.
Conjunction is interpreted as the relational composition of the
updates associated with each conjunct: the corresponding slogan is `first
process the first conjunct, then the second'.
As for the classical fragment of the language (i.e.\ the part without the
$L$-operators), the clauses in the definition are very much like
the clauses in the standard definition of truth in a Kripke model. The
following holds:
\begin{prop}\label{se}
For all $\phi$ in the classical fragment the following three
statements are equivalent:
\begin{itemize}
\item
$w[\phi]$ is defined
\item
$w[\phi]w$
\item
$\phi$ is true in $w$ (defining truth in situations in the obvious way).
\end{itemize}
\end{prop}
The literature on dynamic logic contains several notions of validity, but
the
following seems the most natural one in the present
framework:
\begin{defn}\label{vald} {\em Support and validity} (cf.\ Veltman (1990))
\begin{itemize}
\item A sentence $\phi$ is supported in a situation $w$, $w \models \phi$,
iff
$w[\phi]w$.
\item An argument is valid iff for each situation $w$, if the update of
$w$ with the premises $\phi_1, \ldots, \phi_n$ (in that order) exists, then
in
the resulting situation the conclusion $\psi$ is supported:
\begin{eqnarray*}
\phi_1, \ldots, \phi_n \models \psi &\mbox{ iff }& \forall w\forall
v:\mbox{
if } w[\phi_1]\ldots[\phi_n]v \mbox{ then } v \models \psi.
\end{eqnarray*}
\end{itemize}
\end{defn}
So, a sentence $\phi$ is supported in a situation $w$ just in case the
update
of $w$ with $\phi$ does not change anything. An argument is valid just in
case updating any situation with the premises, in the order they are given,
results in a situation in which the conclusion is supported.
Taking this as our definition of validity, the following result follows
immediately from proposition~\ref{se}.
\begin{prop} For $\phi_1, \ldots, \phi_n, \psi$ in the classical fragment of
the language:
$\phi_1, \ldots, \phi_n \models \psi$ iff $\phi_1, \ldots, \phi_n / \psi$ is a
valid argument in the modal logic $\bf K$.
\end{prop}
So, with regard to the classical fragment of the language, defining
validity in terms of the relational interpretation given above is just an
indirect way of defining the classical notion of validity. The interesting
part, of course, comes with the
$L$-operator: sentences of the form $L_a \phi$ get a non-trivial relational
interpretation, which shows in their behavior.
A sentence of the form $L_a \phi$ is interpreted in such a way that an
update
of a situation with this sentence will in general result in a situation
that
is really different from the initial one.
An update of a situation
$w$ with a sentence $L_a \phi$ results in a situation $v$ that differs
from $w$ only with respect to the information state assigned to $a$. The
new
information state $v(a)$ consists of all situations that result from
updating
each situation in $w(a)$ with
$\phi$. So we interpret `$a$ learns that $\phi$' as updating each situation
that is compatible with $a$'s information with the formula $\phi$.
For example, an update of a situation $w$ with the sentence
$L_a p$ results in a new situation $v$ that differs from $w$ only in the
information state assigned to $a$. In the new situation $v$, $v(a)$
is the set of all and only those situations in $w(a)$ in which
$p$ is true.
For the language as a whole, the logic is not classical. In particular, it
will
make a difference in which order one updates with sentences containing an
$L$-operator. Typically, permutation fails: for example, $L_a p
\wedge \neg
\Box_a p$ is not equivalent to $\neg \Box_a p \wedge L_a p$. Intuitively,
the
difference between the two sentences corresponds to the difference between
first learning that $p$, and after that not having the information that $p$
(which is what the first sentence expresses), and first not knowing that
$p$,
and after that learning that $p$ (which corresponds to the second
sentence).
To see this, consider a situation in which
$a$ does not have any information regarding
$p$, i.e.\ a situation $w$ in which $w(a)$ contains situations in which $p$
is
false, as well as situations in which $p$ is true. Updating $w$ first with
$L_a p$ results in a situation in which $a$ has the information that
$p$: in all situations that $a$ considers possible, $p$ is supported. The
update of this new situation with $\neg
\Box_a p$ is not defined, because this new situation supports $\Box_a p$.
This
means that there is no $v$ such that $w[L_a p \wedge \neg \Box_a p]v$.
On the other hand,
in the situation $w$, $\neg \Box_a p$ is supported, and as we saw it is
very
well possible to update this situation with $L_a p$.
\medskip
The resulting logic and semantics are very much like the system `Multi-agent
Eliminative K' from Groeneveld (1995, p.\ 157 ff.). It suffers from
the same kind of problems, most notably the fact that introspection is not
preserved over $L_a$-updates. The problem is the following. Consider a
situation $w$ in which $a$ has fully introspective information, i.e.\ a
situation in which it holds that for all $\phi$: $w \models \Box_a \phi$
iff $w
\models \Box_a \Box_a \phi$, and $w \models \neg \Box_a \phi$ iff $w
\models
\Box_a \neg \Box_a \phi$. Suppose moreover that $w(a)$ contains
both $p$ and non-$p$ situations, i.e.\ $w \models$
\neg \Box_a p$, and hence $w \models \Box_a \neg \Box_a p$.
Just as in the previous example, an update of $w$ with $L_a p$
results in a situation
$v$ such that $v \models \Box_a p$, because all
non-$p$ situations from $w(a)$ are eliminated. But because each
situation in $v(a)$ also occurred in $w(a)$, each situation in $v(a)$ will
support $\neg \Box_a p$.
So, it holds that $v \models \Box_a p$, while it
also holds that $v \models \Box_a \neg \Box_a p$: $a$'s information is not
introspective anymore in $v$. This is an undesirable result, in particular
if
one wants to do epistemic logic.
To solve this problem, we need a notion of an `iterated update': if $a$
learns
that
$\phi$, we also want this update to involve that $a$ learns that he learns
that $\phi$, and so on. The need for such an iterated update can also be
independently motivated:
\section{Common Knowledge}
Common knowledge is a concept that occurs under different names (mutual
knowledge, common ground) in the literature. As Barwise (1989)
shows, the theory of non-well-founded sets is very useful for modeling
this concept. Barwise
concludes that the right approach to common knowledge is the `fixed point'
account: a sentence
$\phi$ is common knowledge between a group of agents
$\cal B$ iff each $b \in \cal B$ has the information that $\phi$, and
moreover each agent in $\cal B$ has the information that it is common
knowledge
between the agents in $\cal B$ that $\phi$. I will adopt this approach
and define a (`static') notion of $\phi$ being common knowledge. We add
operators of the form
$\Box_{\cal B}$ to the language, one for each (non-empty) set of agents
$\cal
B$. (To be distinguished from the $\Box_a$ operators by the subscript being
a
set.)
\begin{defn} For each $\phi$ and ${\cal B}\subseteq {\cal A}$, $[\Box_{\cal
B}
\phi]$ is the largest relation such that:
\begin{eqnarray*} w[\Box_{\cal B} \phi]v &\mbox{ iff }& w =v \mbox{ and }
\forall b
\in {\cal B} \; \forall u \in w(b): \\ &&
u[\phi]u \mbox{ and } u[\Box_{\cal B} \phi]u
\end{eqnarray*}
\end{defn}
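On finite, well-founded situations the largest relation of this definition can be computed by ordinary recursion, since every descent through information states terminates. Below is a sketch for atomic $\phi$ only; the function name and the restriction to atoms are our own simplifications.

```python
# Test whether w[Box_B p]w holds for an atomic p, on finite well-founded
# situations given as pairs (val, info), info being a tuple of
# (agent, frozenset-of-situations) pairs: p must hold, and be known,
# at every situation reachable through the states of agents in B.
def common_box(w, group, p):
    _, info = w
    return all(dict(u[0])[p] == 1 and common_box(u, group, p)
               for a, state in info if a in group
               for u in state)

# p is common knowledge between a and b in w, but not in w2:
leaf = ((("p", 1),), (("a", frozenset()), ("b", frozenset())))
bad  = ((("p", 0),), (("a", frozenset()), ("b", frozenset())))
w  = ((("p", 1),), (("a", frozenset({leaf})), ("b", frozenset({leaf}))))
w2 = ((("p", 1),), (("a", frozenset({leaf})), ("b", frozenset({bad}))))
assert common_box(w, {"a", "b"}, "p")
assert not common_box(w2, {"a", "b"}, "p")
```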
As far as I know, the most fully worked out semantic theory of common
knowledge
is the one presented in Fagin et al.\ (1991). The authors
develop the notion of a {\em knowledge structure}, which basically is a
variation on the notion of a Kripke-model.
Knowledge structures are mathematically quite complex objects;
repeating the definitions in this abstract would take up too much space. I
just
want to note that comparing knowledge structures with
situations gives results that are analogous to those
obtained when comparing the present semantics with Kripke-semantics. More
precisely, given a suitable notion of bisimulation on knowledge
structures, situations can be seen as representing bisimulation classes of
knowledge structures of length $\omega^2$.
Apart from the fact that one can model
information {\em change} in addition to merely `having the information
that',
I believe that there are several advantages of the present approach over a
semantics in terms of knowledge structures. To mention the most important
one:
one needs knowledge structures of an ordinal depth of
$\omega^2$ to model common knowledge. It is quite counterintuitive that such
mathematically simple concepts as knowledge and common knowledge
need to be represented by structures that make use of such intricate
mathematics. This inelegance is not reflected in our
semantics.\footnote{Note
that the disappearance of this problem does not depend on the dynamic
character of the definitions: they might easily be replaced by classical
truth-definitions.}
In this paper however, we are concerned with change of information. So we
will
model what it means for a certain sentence to {\em become} common
knowledge. To
express this in the object language, we add sentence operators of the form
$C_{\cal B}$ for each subset $\cal B$ of
$\cal A$. The intended interpretation of a sentence of the form $C_{\cal B}
\phi$ is that
$\phi$ becomes common knowledge between the agents in $\cal B$.
The interpretation is defined as follows:
\begin{defn} For each $\phi$ and ${\cal B} \subseteq \cal A$, $[C_{\cal B}
\phi]$ is
the largest relation such that:\footnote{We could have left out the
co-inductive clause here; the relation defined here is unique.}
\begin{eqnarray*}
w[C_{\cal B} \phi] v &\mbox{ iff }& w[{\cal B}]v \mbox{ and } \forall a \in
{\cal B}: \\ &&
v(a)= \{y \mid \exists x \in w(a):\; x[\phi][C_{\cal B} \phi]y \}
\end{eqnarray*}
\end{defn}
So, updating a situation $w$ with a
sentence $C_{\cal B} \phi$ results in a situation $v$ that differs only
from $w$ in that for each $a \in \cal B$, all situations in $w(a)$ are
first updated with $\phi$, and then with $C_{\cal B} \phi$.
Note that the notion of a sentence becoming common knowledge restricted to
a
single agent boils down to the notion of an iterated update that was needed
above: an update with $C_{\{a\}} p$ results in a situation in which all
sentences of the form $\Box_a \ldots \Box_a p$ are supported, just as
desired. In general, we can prove that if $\phi$ is a classical sentence
and $w$ is a situation in which $a$ has fully introspective
information, then $a$ has fully introspective information in
$w[C_{\{a\}}\phi]$.
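For finite, well-founded situations and an atomic sentence, the update with $C_{\cal B}\, p$ can likewise be computed by recursion through the information states. The sketch below is our own simplification to atomic sentences; the names are illustrative.

```python
# w[C_B p] for an atomic p, on finite well-founded situations given as
# pairs (val, info): each agent in the group keeps only the
# p-situations in its state and updates them with C_B p in turn.
def common_update(w, group, p):
    val, info = w
    new_info = tuple(
        (a, frozenset(common_update(x, group, p)
                      for x in state if dict(x[0])[p] == 1))
        if a in group else (a, state)
        for a, state in info)
    return (val, new_info)

# a's state mixes p- and non-p-situations at two depths; after the
# update, p holds throughout a's state *and* throughout the states
# inside it -- the iterated effect a single L_a p update lacks:
leaf1 = ((("p", 1),), (("a", frozenset()),))
leaf0 = ((("p", 0),), (("a", frozenset()),))
mid = ((("p", 1),), (("a", frozenset({leaf1, leaf0})),))
w = ((("p", 1),), (("a", frozenset({mid, leaf0})),))
v = common_update(w, {"a"}, "p")
state = dict(v[1])["a"]
assert all(dict(x[0])["p"] == 1 for x in state)
assert all(dict(y[0])["p"] == 1
           for x in state for y in dict(x[1])["a"])
```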
\section{Update Semantics}
Update semantics, as it is presented in Veltman (1990), has been an
important source of inspiration for this paper. It turns out that update
semantics can be seen as a special case of the present approach: update
semantics can be seen as describing the updates of an information state of
a single agent who has fully introspective knowledge.
In update semantics, sentences are interpreted as functions that operate on
information states. Information states are sets of classical possible
worlds.
The relevant definitions are the following:
\begin{defn} $ $
\begin{itemize}
\item ${\cal L}^{US}$ is the language
built up from a set of atomic sentences
$\cal P$ and the connectives $\neg, \wedge$ and a unary sentence operator
$might$ in the obvious way.\footnote{In Veltman's paper the language is
restricted to those sentences in which $might$ occurs at most as the
outermost operator in a sentence.}
\item
A classical information state $s$ is a set of classical possible worlds,
i.e.\
a set of assignments of truth-values to the propositional variables.
\item
For each sentence $\phi \in {\cal L}^{US}$ and each classical information
state
$s$, the update of $s$ with $\phi$, $s[\phi]$, is defined as:
\begin{eqnarray*}
s [p] & =&\{w \in s \mid w(p) = 1 \}
\mbox{ for } p\in {\cal P}\\
s [\phi \wedge \psi] &=& s [\phi]\cap s[\psi]\\
s [\neg \phi] &=& s \setminus s [\phi]\\
s [might \phi] &=&
s \mbox{ if } s[\phi] \neq \emptyset \\
&=& \emptyset \mbox{ otherwise} \\
\end{eqnarray*}
\end{itemize}
\end{defn}
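The clauses above transcribe almost verbatim into code. In the sketch below (our own; an information state is a frozenset of valuations, each valuation a frozenset of (variable, value) pairs, and the formula syntax is ad hoc), the update with $might \; \phi$ is a test on the state as a whole rather than a world-by-world filter.

```python
# Veltman-style updates: s is an information state (a frozenset of
# classical possible worlds, each a frozenset of (variable, value)
# pairs); formulas are atoms (strings), ("and", f, g), ("not", f),
# or ("might", f).
def us_update(s, phi):
    if isinstance(phi, str):           # atom: keep the p-worlds
        return frozenset(w for w in s if dict(w)[phi] == 1)
    op = phi[0]
    if op == "and":
        return us_update(s, phi[1]) & us_update(s, phi[2])
    if op == "not":
        return s - us_update(s, phi[1])
    if op == "might":                  # global consistency test
        return s if us_update(s, phi[1]) else frozenset()
    raise ValueError(phi)

# `might p' leaves a state untouched as long as some p-world survives:
w1 = frozenset({("p", 1)})
w0 = frozenset({("p", 0)})
s = frozenset({w1, w0})
assert us_update(s, ("might", "p")) == s
assert us_update(us_update(s, ("not", "p")), ("might", "p")) == frozenset()
```

The second assertion illustrates the order-sensitivity of sequences of updates: updating first with $\neg p$ and then with $might \; p$ crashes the state, whereas the reverse order does not.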
Validity in update semantics is defined just as in definition~\ref{vald}.
There is a close correspondence between updates of information states in
update semantics and iterative learning in a situation in which agents
have introspective information. More precisely, we can associate with each
introspective situation $w$ and agent $a$ the classical information state
$w^a$, which consists of the set of classical worlds that correspond to the
situations in $w(a)$. Vice versa, given an arbitrary classical world $w$
and
agent $a$ (a classical information state does not provide us with
any information about which agent we are talking about, or what the `real
world' looks like, so we have to supply these parameters ourselves) we can
associate with each classical information state $s$ a situation
$s_w^a$, which assigns to each propositional variable the same value as $w$
does, and which assigns to $a$ a set containing $s_v^a$ for each $v \in s$.
More formally:
\begin{defn}$ $
\begin{itemize}
\item
If $w$ is a situation and $a$ an agent, then
$w^a = \{\, v \mbox{ restricted to } {\cal P} \mid v \in w(a) \,\}$.
\item If $s$ is a classical information state, $w$ a classical possible
world, and $a$ an agent, then $s_w^a$ is a situation such that $s_w^a(p) =
w(p)$ for each $p \in \cal P$, and $s_w^a(a) = \{s_v^a \mid v \in s\}$.
\end{itemize}
\end{defn}
It is not hard to see that in $s_w^a$, agent $a$ has fully introspective
knowledge. The following proposition expresses how US-updates can be viewed
as
iterative updates of an introspective information state. It also shows that
$might$ can be translated as $\Diamond$, i.e.\ if we view a classical
information state as the information state of a certain agent $a$, updating
such an information state with $might \phi$ in update semantics corresponds
to updating $a$'s information state with $\Diamond_a \phi$, $a$ considers
it
possible that $\phi$, in our semantics.
\begin{prop} For each $\phi \in {\cal L}^{US}$, let $\phi^*$ be just as
$\phi$
but with all occurrences of $might$ replaced by $\neg \Box_a \neg$. Then it
holds that:
\begin{itemize}
\item
For all classical information states $s$ and $t$, each classical possible
world $w$, and each agent $a$:
$s[\phi] = t$ iff $s_w^a [C_{\{a\}} \phi^*]\, t_w^a$.
\item
For all situations $w$ and $v$ and each $a$ such that $a$ has introspective
information in $w$:
$w[C_{\{a\}} \phi^*]v$ iff $w^a[\phi] = v^a$.
\item $\phi_1, \ldots, \phi_n / \psi$ is a valid argument in update semantics
iff
for all introspective $w$: $w[C_{\{a\}}
\phi_1^*]\ldots[C_{\{a\}} \phi_n^*]\models C_{\{a\}} \psi^*$.
\end{itemize}
\end{prop}
What this proposition expresses is that a US-update can be seen as an
iterated
update of the information state of an agent who has fully introspective
information. One of the things that this proposition suggests is that the
interpretation of $might \phi$ in update semantics corresponds better with
``You don't know that $\phi$'' than with ``It might be that $\phi$.''
\section{Applications and further research}
In this paper, I presented a dynamic semantics for the classical language
of multi-modal logic, extended with operators expressing certain kinds of
information change.
One motivation for developing the semantics sketched here was the need for
a
logic that is rich enough in expressive power to develop a basic formal
theory
of pragmatics and discourse in the Gricean tradition. Of course, as a tool
for
pragmatics, the logic is very weak in expressive power (most saliently, it
is
only a propositional logic), but it is nevertheless surprising how far one
can
get using only these very basic notions. The work I have done on this
seems
to be fairly promising. In particular, the dynamic notion of common
knowledge
turns out to be quite useful for describing what happens when a participant
in
a conversation makes an utterance.
Lines of possible further research include
\begin{itemize}
\item Finding a sound and complete deduction system.
\item Extending the framework to a predicate-logical version.
\item Doing formal pragmatics.
\item Investigating whether and how the theory relates to the work done in
theoretical computer science on distributed systems.
\end{itemize}
\section*{References}
\begin{description}
\item[]
Aczel, P. 1988. {\em Non-well-founded Sets}. CSLI Lecture Notes, Stanford.
\item[]
Barwise, J. 1989. On the model theory of common knowledge. {\em The
Situation
in Logic}. CSLI Lecture notes. pp.~201--220.
\item[]
Fagin, R., Halpern, J. and Vardi, M. 1991. A model-theoretic analysis of
knowledge. {\em Journal of the Association for Computing Machinery} {\bf
39}(2),~382--428.
\item[]
Groeneveld, W. 1995. {\em Logical Investigations into Dynamic Semantics}.
PhD
thesis. ILLC dissertation series 18.
\item[]
Veltman, F. 1990. Defaults in update semantics. To appear in {\em Journal of
Philosophical Logic}.
\end{description}
\end{document}