
Asynchronous Games vs. Concurrent Games

Paul-André Melliès∗   Samuel Mimram†

Abstract. Game semantics reveals the dynamic and interactive nature of logic, by interpreting every proof of a formula as a winning strategy of a dialogue game. The purpose of this work is to compare two alternative game semantics of linear logic, based on asynchronous games and on concurrent games. Both game models take account of the concurrent nature of proofs, but in a very different way, and our task here is thus to relate them. We also take this opportunity to clarify how the process of translating an asynchronous strategy into a concurrent strategy is related to the focusing property of linear logic.

Introduction

Game semantics. A healthy exercise in proof theory is to read a cut-free proof (typically written in sequent calculus) starting from its conclusion rather than from its hypotheses. Unravelled in this way, the proof does not really “prove” in the

traditional sense, but rather “deconstructs” the conclusion by removing its connectives one after the other, along the ordering specified by the syntactic tree of the formula. This point of view leads one to understand the formula anew as a kind of playground which the proof gradually explores and decomposes from the root to the leaves, according to the rules of the logic. From that perspective, the formula describes a game whose moves are the logical connectives, and the proof describes a strategy which plays these moves according to a specific (and logically valid) exploration scheduling of the formula. This account of proof theory is the starting point of game semantics, which we understand here in the widest possible sense, including the inaugural work by Novikoff on classical logic [?], by Lorenzen and his school on intuitionistic and classical logic [?], as well as the various notions of ludics defined by Girard and after him [?, ?, ?]. This game-theoretic approach to logic becomes

particularly appealing when one thinks of a proof as a program, along the lines of the Curry–Howard correspondence: the strategy describes in that case the interactive behaviour of the proof/program in front of its refutation/environment. Two players are involved here: the Proponent, which represents the proof, and the Opponent, which represents the environment. Every formula of the logic is thus interpreted as a two-player game, describing the different ways in which Proponent and Opponent may explore the formula together.

∗ Laboratoire PPS (Preuves, Programmes, Systèmes), CNRS and Université Paris Diderot, Case 7014, 75205 Paris Cedex 13, France.
† CEA LIST, Laboratory for the Modelling and Analysis of Interacting Systems, Point Courrier 94, 91191 Gif-sur-Yvette, France. This work has been supported by the ANR Project CHOCO (“Curry Howard pour la Concurrence”, ANR-07-BLAN-0324).

Sequential games. A nice illustration of game semantics is

provided by the pioneering work of Joyal [Joy77], where a category of alternating strategies between sequential games (called Conway games) is constructed. A Conway game is defined as a rooted tree (or more precisely, a directed acyclic graph) in which every edge (called a move of the game) has a polarity indicating whether it is played by Proponent (the proof) or by Opponent (the environment). A play is then defined as a path starting from the root (noted ∗) of the tree. A play is called alternating when Proponent and Opponent strictly alternate – that is, when neither of them plays two moves in a row. A strategy σ is then defined as a set of alternating plays of even length, each starting with an Opponent move, thus of the form

s = m1 · n1 · m2 · n2 · · · mk · nk

where mi is an Opponent move and ni is a Proponent move, for 1 ≤ i ≤ k. A strategy is moreover required to satisfy the following properties:

(1) σ is nonempty: σ contains the empty play;
(2) σ is prefix-closed: σ contains the play s whenever it contains the play s · m · n for an Opponent move m and a Proponent move n;
(3) σ is deterministic: two plays s · m · n1 and s · m · n2 in the strategy σ have the same last Proponent move n1 = n2.

The category of Conway games constructed by Joyal in the late 1970s became very influential in the early 1990s, in the turmoil produced by the discovery of linear logic. In particular, the Conway game model was refined by Abramsky and Jagadeesan [AJ94] in order to characterize the dynamic behaviour of proofs in (multiplicative) linear logic. The key idea is that the tensor product of linear logic, noted ⊗, may be distinguished from its dual, noted `, by enforcing a switching policy on plays – ensuring in particular that a strategy of A ⊗ B reacts to an Opponent move played in the subgame A by playing a Proponent move in the same subgame A. The notion of pointer game was then introduced by Hyland and Ong [HO00], and independently by

Nickau [?], in order to characterize the dynamic behaviour of programs in the programming language PCF – a simply-typed λ-calculus extended with recursion, conditional branching and arithmetical constants. The programs of PCF are characterized dynamically as particular kinds of strategies with partial memory, which are called innocent because they react to Opponent moves according to their own view of the play. This view is itself a play, extracted from the current play by removing all its “invisible” or “inessential” moves. This extraction is performed by induction on the length of the play, using the pointer structure of the play, and the hypothesis that Proponent and Opponent strictly alternate. This seminal work on pointer games led to the first generation of game semantics for programming languages. The research programme – to a large extent conducted by Abramsky and his collaborators – was particularly successful: by relaxing the innocence constraint on strategies,

it suddenly became possible to characterize the interactive behaviour of programs written in PCF (or in a call-by-value variant) extended with imperative features like states, references, etc. However, because Proponent and Opponent strictly alternate in the original definition of pointer games, these game semantics focus on sequential languages like Algol or ML, rather than on concurrent languages.

Concurrent games. This convinced a little community of researchers to work on the foundations of non-alternating games – where Proponent and Opponent are thus allowed to play several moves in a row at any point of the interaction. Abramsky and Melliès [AM99] introduced a new kind of game semantics to that purpose, based on concurrent games – see also [Abr03]. In that setting, games are defined as partial orders (or more precisely, complete lattices) of positions, and strategies as closure operators on these partial orders. Recall that a closure operator

σ on a partial order D is a function σ : D → D satisfying the following properties:

(1) σ is increasing: ∀x ∈ D, x ≤ σ(x);
(2) σ is idempotent: ∀x ∈ D, σ(x) = σ(σ(x));
(3) σ is monotone: ∀x, y ∈ D, x ≤ y ⇒ σ(x) ≤ σ(y).

The order x ≤ y on positions reflects the intuition that the position y contains more information than the position x. Typically, one should think of a position x as a set of moves in a game, or as a set of connectives explored in a formula, and of x ≤ y as set inclusion x ⊆ y between such positions. Now, Property (1) expresses that a strategy σ which transports the position x to the position σ(x) increases the amount of information. Property (2) reflects the intuition that the strategy σ delivers all its information when it transports the position x to the position σ(x), and thus transports the position σ(x) to itself. Property (3) is both fundamental and intuitively right, but also more subtle to justify.
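As a concrete illustration of these three axioms, here is a small Python sketch on the lattice of subsets of a two-move alphabet ordered by inclusion; the particular strategy sigma below is a made-up example chosen for the illustration, not a construction from the paper.

```python
from itertools import chain, combinations

def powerset(moves):
    """All subsets of a finite set of moves, as frozensets ordered by inclusion."""
    s = list(moves)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Illustrative strategy: as soon as the Opponent question "q" is present,
# add the Proponent answer "true" (and otherwise do nothing).
def sigma(x):
    return x | frozenset({"true"}) if "q" in x else x

D = powerset({"q", "true"})
assert all(x <= sigma(x) for x in D)                              # (1) increasing
assert all(sigma(sigma(x)) == sigma(x) for x in D)                # (2) idempotent
assert all(sigma(x) <= sigma(y) for x in D for y in D if x <= y)  # (3) monotone

# The closure operator is determined by its fixpoints (here: {}, {true}, {q, true}).
print([sorted(x) for x in D if sigma(x) == x])
```

The last line anticipates the remark made just below: the strategy is entirely determined by its set of fixpoints.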

An important point to notice is that the interaction induced by such a strategy σ is possibly non-alternating, since the strategy transports the position x to the position σ(x) by “playing in one go” all the moves appearing in σ(x) but not in x. Another interesting point is that a closure operator σ is entirely characterized by the set fix(σ) of its fixpoints, that is, the positions x satisfying the equality x = σ(x).

Asynchronous transition systems. So, a strategy can be expressed either as a set of positions (the set of fixpoints of the closure operator) in concurrent games, or as a set of alternating plays in sequential games (like Conway games or pointer games). In order to understand how the two formulations of strategies are related, one should start from an obvious analogy with concurrency theory: pointer games define an interleaving semantics (based on sequences of transitions) whereas concurrent games define a truly concurrent semantics (based on sets of positions, or states) of proofs

and programs. Now, Mazurkiewicz taught us that a truly concurrent semantics may be regarded as an interleaving semantics (typically a transition system) equipped with asynchronous tiles – represented diagrammatically as 2-dimensional tiles relating the path m · n : x → z going through the position y1 to the path n · m : x → z going through the position y2        (1)

expressing that the two transitions m and n from the state x are independent, and consequently, that their scheduling does not matter from a truly concurrent point of view. This additional structure induces an equivalence relation on transition paths, called the homotopy relation, defined as the smallest congruence relation ∼ identifying the two schedulings m · n and n · m for every tile of the form (1). The word homotopy should be understood mathematically as (directed) homotopy in the topological presentation of asynchronous transition systems as n-cubical sets [Gou00].
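Concretely, the homotopy class of a path can be computed by repeatedly permuting adjacent independent transitions. The Python sketch below assumes, for simplicity, a single global independence relation (as in Mazurkiewicz traces) rather than one attached to each position, and uses made-up transition names.

```python
from collections import deque

def homotopy_class(path, independent):
    """All paths equivalent to `path` modulo permutations of adjacent
    independent transitions (its Mazurkiewicz trace)."""
    start = tuple(path)
    seen, todo = {start}, deque([start])
    while todo:
        p = todo.popleft()
        for i in range(len(p) - 1):
            if independent(p[i], p[i + 1]):
                q = p[:i] + (p[i + 1], p[i]) + p[i + 2:]
                if q not in seen:
                    seen.add(q)
                    todo.append(q)
    return seen

# Toy example: a and b are independent, c depends on both.
independent = lambda m, n: {m, n} == {"a", "b"}
print(homotopy_class(["a", "b", "c"], independent))
# {('a', 'b', 'c'), ('b', 'a', 'c')}: the two schedulings of a and b are identified.
```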

This 2-dimensional refinement of the usual 1-dimensional transition systems makes it possible to express simultaneously the interleaving semantics of a program, as the set of transition paths it generates, and its truly concurrent semantics, as the homotopy classes of these transition paths.

Asynchronous games. Guided by these intuitions, Melliès introduced in [?] the notion of asynchronous game, which unifies in a nice and conceptual way the heterogeneous notions of pointer game and of concurrent game. Asynchronous games are played on asynchronous (2-dimensional) transition systems, where every transition (or move) is equipped with a polarity, expressing whether it is played by Proponent or Opponent. A play is defined as a path starting from the root of the game, and a strategy is defined as a well-behaved set of alternating plays, in the style of traditional sequential games. Now, the difficulty is to understand how (and when) a strategy defined as a set of plays may be reformulated as a set of positions, in the spirit of concurrent games. The first step in the inquiry

is to observe that the asynchronous tiles (1) offer an alternative way to describe justification pointers between moves. For illustration, consider the boolean game B, where Opponent starts by asking a question q, and Proponent answers by playing either true or false. The game is represented by the decision tree

(2)   the tree with root ∗, an Opponent move q from ∗ to the position q, and two Proponent moves true and false from q to the positions V and F respectively,

where ∗ is the root of the game, and the three remaining positions are called q, V (V for “Vrai” in French) and F. At this point, since there is no concurrency involved, the game may be seen either as an asynchronous game, or as a pointer game. Now, the game B ⊗ B is constructed by taking two boolean games “in parallel”. It simulates a very simple computation device, containing two boolean memory cells. In a typical interaction, Opponent starts by asking with qL the value of the left memory cell, and Proponent answers trueL; then, Opponent asks with qR the value of the right memory cell, and Proponent

answers falseR. The play is represented as follows in pointer games:

qL · trueL · qR · falseR

with a justification pointer from trueL back to qL and a justification pointer from falseR back to qR. The play contains two justification pointers, each one represented by an arrow starting from a move and leading to a previous move. Typically, the justification pointer from the move trueL to the move qL indicates that the answer trueL is necessarily played after the question qL. The same situation is described using 2-dimensional tiles in the asynchronous game B ⊗ B:

(3)   the asynchronous graph with positions ∗ ⊗ ∗, q ⊗ ∗, ∗ ⊗ q, q ⊗ q, V ⊗ ∗, ∗ ⊗ F, V ⊗ q, q ⊗ F and V ⊗ F, in which the moves qL, trueL of the left component and qR, falseR of the right component may be interleaved in any order, every square of independent moves being filled by a tile ∼ – except that trueL may only occur after qL, and falseR only after qR.

The justification pointer between the answer trueL and its question qL is replaced here by a dependency relation between the two moves, ensuring that the move trueL cannot be

permuted before the move qL . The dependency itself is expressed by a “topological” obstruction: the lack of a 2-dimensional tile permuting the transition trueL before the transition qL in the asynchronous game B ⊗ B. This basic correspondence between justification pointers and asynchronous tiles enables a reformulation of the original definition of innocent strategy in pointer games (based on views) in the language of asynchronous games. Surprisingly, the reformulation leads to a purely local and diagrammatic definition of innocence in asynchronous games, which does not mention the notion of view any more, see [Mel04] for details. This diagrammatic reformulation leads also to the discovery that innocent strategies are positional in the following sense. Suppose that two alternating plays s, t : ∗ − x with the same target position x are elements of an innocent strategy σ, and that m is an Opponent move from position x. Suppose moreover that the two plays s and t are

equivalent modulo homotopy. Then, the innocent strategy σ extends the play s · m with a Proponent move n if and only if it extends the play t · m with the same Proponent move n. Formally:

s · m · n ∈ σ  and  s ∼ t  and  t ∈ σ   implies   t · m · n ∈ σ.        (4)

From this it follows that every innocent strategy σ is characterized by the set of positions (understood here as homotopy classes of plays) reached in the asynchronous game. This set of positions defines a closure operator, and thus a strategy in the sense of concurrent games. Asynchronous games offer in this way an elegant and unifying point of view on pointer games and concurrent games.

Concurrency in game semantics. There is little doubt that a new generation of game semantics is currently emerging along this foundational work on concurrent games. We see at least three converging lines of research. First, authors trained in game semantics – Ghica, Laird and Murawski – were able to characterize the interactive behaviour of

various concurrent programming languages like Parallel Algol [GM04] or an asynchronous variant of the π-calculus [Lai05], using directly (and possibly too directly) the language of pointer games. Then, authors trained in proof theory and game semantics – Curien and Faggian – relaxed the sequentiality constraints required by Girard on designs in ludics, leading to the notion of L-net [CF05], which lives at the junction of syntax (expressed as proof nets) and game semantics (played on event structures). Finally, and more recently, authors trained in process calculi, true concurrency and game semantics – Varacca and Yoshida – were able to extend Winskel’s truly concurrent semantics of CCS, based on event structures, to a significant fragment of the π-calculus, uncovering along the way a series of nice conceptual properties of confusion-free event structures [VY06]. So, a new generation of game semantics for concurrent programming languages is currently emerging, but their

various computational models are still poorly connected. We would like a regulating theory here, playing the role of Hyland and Ong pointer games in traditional (that is, alternating) game semantics. Asynchronous games are certainly a good candidate, because they combine interleaving semantics and causal semantics in a harmonious way. Unfortunately, they were limited until now to alternating strategies [Mel04]. The main contribution of this paper is to extend the asynchronous framework to non-alternating strategies, and to compare the resulting model with the concurrent framework defined by Abramsky and Melliès. Although the ultimate goal of producing a satisfactory notion of innocent strategy for non-alternating interactions is not reached yet, we believe that the connection to concurrent games is an important step in that direction.

Event structures. A simple recipe to construct an asynchronous game is to start from a partial order of events where, in

addition, every event has a polarity, indicating whether it is played by Proponent or Opponent. This partial order (M, ≤) is then equipped with an incompatibility relation # satisfying a series of suitable properties, defining what Winskel calls an event structure. A position x of the asynchronous game is defined as a set of pairwise compatible events (or moves) closed under the “causality” order:

∀m, n ∈ M,   m ≤ n and n ∈ x   implies   m ∈ x.

Typically, the boolean game B described in (2) is generated by the event structure with three events, where q is an Opponent move below the two incompatible Proponent moves true and false, and with the positions q, V, F defined as q = {q}, V = {q, true} and F = {q, false}. The tensor product B ⊗ B of two boolean games is then generated by putting the two event structures side by side, in the expected way. The resulting asynchronous game looks like a flower with four petals, one of them described in (3). More generally,

every formula of linear logic defines an event structure, which defines in turn an asynchronous game. For instance, the event structure induced by the formula

(B ⊗ B) ⊸ B        (5)

contains the following partial order of compatible events:

q ≤ qL ≤ trueL,   q ≤ qR ≤ falseR,   q ≤ false        (6)

which may be seen alternatively as a (maximal) position in the asynchronous game associated to the formula. This game implements the interaction between a boolean function (the Proponent) of type (5) and its two arguments (the Opponent).

Asynchronous games without alternation. In a typical play of the game (5), Opponent starts by playing the move q asking the value of the boolean output; Proponent reacts by asking with qL the value of the left input, and Opponent answers trueL; then, Proponent asks with qR the value of the right input, and Opponent answers falseR; at this point only, using the knowledge of its two arguments, Proponent answers false to the initial question:

q · qL · trueL · qR · falseR · false        (7)

Of course, Proponent could have explored its two arguments in the other order, from right to left, inducing the play

q · qR · falseR · qL · trueL · false        (8)

The two plays start from the empty position ∗ and reach the same position of the asynchronous game. They may be seen as different linearizations (in the sense of order theory) of the partial order (6) specified by the game. Each of these linearizations may be represented by adding causality (dotted) edges between moves to the original partial order (6): the play (7) corresponds to the total order q ≤ qL ≤ trueL ≤ qR ≤ falseR ≤ false, and the play (8) to the total order q ≤ qR ≤ falseR ≤ qL ≤ trueL ≤ false.        (9)

The play (7) is an element of the strategy representing the left implementation of the strict conjunction, whereas the play (8) is an element of the strategy representing its right implementation. Both of these strategies are alternating.
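The plays (7) and (8) can be recovered mechanically as linear extensions of the partial order. The Python sketch below enumerates the linearizations of the order (6) as read above; the encoding of the order as a set of generating pairs is an assumption made for the example.

```python
def linearizations(leq, events):
    """All linear extensions of a finite partial order, given as a predicate
    leq(x, y) meaning x is below y (generating pairs suffice)."""
    if not events:
        yield []
        return
    for e in events:
        if all(not (leq(f, e) and f != e) for f in events):   # e is minimal
            for rest in linearizations(leq, [f for f in events if f != e]):
                yield [e] + rest

# The order (6): q below qL, qR and false; qL below trueL; qR below falseR.
below = {("q", "qL"), ("q", "qR"), ("q", "false"),
         ("qL", "trueL"), ("qR", "falseR")}
leq = lambda x, y: x == y or (x, y) in below
for play in linearizations(leq, ["q", "qL", "trueL", "qR", "falseR", "false"]):
    print(" . ".join(play))
# The plays (7) and (8) appear among the linearizations; restricting instead to
# the order (10) below keeps only those in which false comes after both answers.
```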

Now, there is also a parallel implementation, where the conjunction asks the value of its two arguments at the same time. The associated strategy is not alternating anymore: it contains the play (7) and the play (8), and moreover all the (possibly non-alternating) linearizations of the following partial order:

q ≤ qL ≤ trueL ≤ false,   q ≤ qR ≤ falseR ≤ false        (10)

This illustrates an important phenomenon of a concurrent nature: a strategy σ of an asynchronous game generally contains several plays with the same target position x, all of them equivalent modulo homotopy. In order to keep in harmony with the spirit of true concurrency, one would like to see these homotopic plays as the linearizations of a “causal order” defined by the strategy σ on the events leading to the position x. This basic intuition leads to the notion of ingenuous strategy σ, which will be introduced in the course of the article, in Section 2. By definition, an ingenuous strategy is positional, which makes it possible to define it

alternatively as a set of plays, or as an asynchronous subgraph Gσ of the asynchronous game G where the interaction occurs. It is interesting to notice that the construction of the “causal order” induced by the positional strategy σ at a position x ∈ Gσ requires refined tools coming from the theory of asynchronous transition systems, in particular the fundamental cube property which makes it possible to relate asynchronous transition graphs to event structures.

Concurrency means courtesy. The next step we perform is to establish a satisfactory relationship between the ingenuous strategies of asynchronous games and the concurrent strategies of concurrent games. This is not entirely obvious, however. Every ingenuous strategy σ induces a set σ◦ of halting positions, defined as the positions x ∈ Gσ such that there exists no Proponent move x → y in the graph Gσ. This set of halting positions is closed under intersection, and thus defines a

concurrent strategy, which consists in transporting every position x to the smallest halting position y of σ above x when it exists, and to the maximal position ⊤ otherwise. Conversely, every concurrent strategy τ = halt(σ) obtained from an ingenuous strategy σ in this way defines an ingenuous strategy τ which contains the plays

m1 · m2 · · · mk : ∗ = x0 → x1 → x2 → · · · → xk

such that the position xi is in the dynamic domain of τ for 0 ≤ i ≤ k. By definition, a position x is in the dynamic domain of τ when the position τ(x) is different from the maximum position ⊤, and the transition path s : x → τ(x) contains only Proponent moves. The correspondence theorem established in Section 4 states that the ingenuous strategy τ is equal to the least ingenuous strategy σ̃ containing the strategy σ and satisfying the following courtesy property: for every play s1 · m · p · s2 of the strategy σ̃ such that m is a Proponent transition or p is an Opponent

transition, and such that the homotopy relation m · p ∼ n · q defines a tile in the game, then the play s1 · n · q · s2 obtained by permuting p before m is also in the strategy σ̃. Moreover, this completion σ̃ of the ingenuous strategy σ has the same halting positions as the strategy σ, and thus induces the same concurrent strategy τ.

Plan of the article. We recall the notion of asynchronous graph (in Section 1) and investigate the relationship with event structures when the graph satisfies the so-called cube property. This leads us to the notion of asynchronous game (in Section 2), together with the definition of an ingenuous strategy. Next, we define an asynchronous model of multiplicative additive linear logic (in Section 3).

1  Asynchronous graphs and event structures

1.1  Asynchronous graphs

Graphs. Recall that a graph G = (V, E, s, t) consists of a set V of vertices (or positions), a set E of edges (or transitions) and two functions s, t : E → V which to every transition

associate a position, called respectively its source and its target. We write m : x → y to indicate that m is a transition with x as source and y as target. Two transitions are coinitial (resp. cofinal) when they have the same source (resp. target). A path t is a sequence of transitions m1 · m2 · · · mk which are consecutive, i.e. such that s(mi+1) = t(mi) for every index i, and we write t : x → y to indicate that t is a path whose source is x and whose target is y. The concatenation of two consecutive paths s : x → y and t : y → z is denoted s · t : x → z, and the empty path on a position x is denoted by εx : x → x.

Asynchronous graphs. An asynchronous graph G = (G, ⋄) is a graph G together with a symmetric tiling relation ⋄, which relates paths of length two with the same source and the same target. If m : x → y1, n : x → y2, p : y1 → z and q : y2 → z are four transitions, we write diagrammatically the square formed by the two paths m · p : x → z (through y1) and n · q : x → z (through y2)        (11)

to indicate that m · p ⋄ n · q. We define ∼ as the smallest congruence (w.r.t. concatenation) containing the tiling relation, and say that a path s : x → y is a prefix of a path t : x → z modulo homotopy when there exists a path s′ : y → z such that s · s′ ∼ t. The relation ∼ is called homotopy because it should be thought of as the possibility, when s and t are two homotopic paths, of deforming the path s “continuously” into t. Homotopy is formalized algebraically here, but the links with geometry and (directed) algebraic topology can be investigated further and are actually an active research area [Gou00]. From the concurrency point of view, a homotopy between two paths indicates that these paths are the same up to reordering of independent events, as in Mazurkiewicz traces [Maz88].

A category of asynchronous graphs. A map (G1, ⋄1) → (G2, ⋄2) between asynchronous graphs is a map f : G1 → G2 between the underlying graphs which respects the tiling relation, i.e. such

that s ⋄1 t implies f(s) ⋄2 f(t) for all coinitial and cofinal paths s = m · p and t = n · q of length two. The category of asynchronous graphs has asynchronous graphs as objects, and maps between them as morphisms. The category of asynchronous graphs is also symmetric monoidal, when one equips it with the following notion of tensor product.

Asynchronous product. Given two asynchronous graphs G and H, we define their asynchronous product G‖H as the following asynchronous graph. Its positions are VG‖H = VG × VH and its transitions are EG‖H = EG × VH + VG × EH (by abuse of language we say that a transition is “in G” when it is in the first component of the sum, and “in H” otherwise). The source and target of a transition (m, x) ∈ EG × VH are respectively (sG(m), x) and (tG(m), x), and similarly those of a transition (x, m) ∈ VG × EH are respectively (x, sH(m)) and (x, tH(m)). There are three kinds of tiles: the tiles (m, y) · (p, y) ⋄ (n, y) · (q, y) coming from a tile m · p ⋄ n · q of G, the tiles (y, m) · (y, p) ⋄ (y, n) · (y, q) coming from a tile of H, and the “interchange” tiles (m, y) · (x′, n) ⋄ (x, n) · (m, y′) permuting a transition m : x → x′ in G with a transition n : y → y′ in H. Note that every path s in a product G‖H is the interleaving of a path sG in G and a path sH in H. Moreover, the path s is equivalent to any such interleaving of sG and sH modulo homotopy in G‖H. The path sG is called the projection of the path s on the component G, and similarly for sH. The unit of the asynchronous product is the graph with one position and no transition.

1.2  Residuals

We suppose from now on that, in every asynchronous graph, a 2-dimensional tile (11) is entirely characterized by the data of the pair of transitions (m, p), of the pair of transitions (m, n), or of the pair of transitions (p, q). This

means that:
– given a path m · p, there exists at most one path n · q forming a tile (11) with it;
– given a pair of coinitial transitions m and n, there exists at most one pair of cofinal transitions p and q inducing a tile (11);
– given a pair of cofinal transitions p and q, there exists at most one pair of coinitial transitions m and n inducing a tile (11).

Residual after a transition. A transition q : y2 → z is a residual of a transition m : x → y1 after a coinitial transition n : x → y2 when there exists a transition p : y1 → z forming an asynchronous tile m · p ⋄ n · q. Note that the transition p is then also the residual of n after m, because the tile relation ⋄ is symmetric. Moreover, our hypothesis on asynchronous graphs implies that a transition m admits at most one residual after a coinitial transition n. This residual is denoted m/n when it exists.

Compatible paths. Two coinitial paths s : x → y1 and t : x → y2 are called compatible when there exists a

pair of cofinal paths s′ : y1 → z and t′ : y2 → z such that s · s′ ∼ t · t′.

Residual after a path. Given two compatible paths s : x → y1 and t : x → y2, we define the residual s/t of the path s after the path t by induction:

(s1 · s2)/t = (s1/t) · (s2/(t/s1))
s/(t1 · t2) = (s/t1)/t2
s/s = ε

Intuitively, the path s/t is what “remains” of the path s once the path t has been performed. It can be checked that the residual path s/t exists (and is uniquely defined) when the underlying asynchronous graph satisfies the cube property described below; see the second author’s PhD thesis [Mim08] for details.

1.3  Event structures

Recall that an event structure (M, ≤, #) is a partially ordered set (M, ≤), whose elements are called events and whose order relation is called causality, together with a symmetric irreflexive relation # on events, called incompatibility, satisfying the two following properties.

1. Finite causality: every event e ∈ M defines a

downward-closed set of events e↓ = { e′ ∈ M | e′ ≤ e } which is finite.

2. Hereditary incompatibility: for all events e1, e2 and e3, e1 # e2 and e2 ≤ e3 implies e1 # e3.

Two events e1 and e2 are incompatible when e1 # e2, and compatible otherwise. A configuration of an event structure is a downward-closed set of compatible events. Every event structure (M, ≤, #) induces an asynchronous graph of configurations, in the following way:
– the positions of the asynchronous graph are the configurations of the event structure,
– its transitions m : x → y are the triples (m, x, y) where m is an event and y = x ⊎ {m}, where ⊎ denotes disjoint union,
– every two coinitial and cofinal paths of length 2 are related by a tile.

For example, the event structure on three events a, b, c with empty incompatibility relation, whose underlying poset places a below both b and c, induces the asynchronous graph of configurations with positions ∅, {a}, {a, b}, {a, c} and {a, b, c}, in which the two paths from {a} to {a, b, c} (adding b then c, or c then b) are related by a tile ∼.
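The passage from an event structure to its asynchronous graph of configurations is easy to program. The following Python sketch, with the causality and conflict relations passed as predicates (an encoding chosen here for convenience), rebuilds the three-event example just described.

```python
from itertools import combinations

def configurations(events, leq, conflict):
    """Configurations of a finite event structure: downward-closed, conflict-free
    sets of events.  leq(x, y) is the causality order x <= y, conflict the relation #."""
    configs = []
    for r in range(len(events) + 1):
        for xs in combinations(events, r):
            x = set(xs)
            down = all(f in x for e in x for f in events if leq(f, e))
            ok = all(not conflict(e, f) for e in x for f in x)
            if down and ok:
                configs.append(frozenset(x))
    return configs

def transitions(configs):
    """Transitions x --m--> y of the induced asynchronous graph: y = x + {m}."""
    return [(x, next(iter(y - x)), y) for x in configs for y in configs
            if x < y and len(y - x) == 1]

# The three-event example of the text: a below b and below c, no incompatibility.
events = ["a", "b", "c"]
leq = lambda x, y: x == y or (x == "a" and y in ("b", "c"))
conflict = lambda x, y: False
G = configurations(events, leq, conflict)
print(len(G), len(transitions(G)))   # 5 positions and 5 transitions, as in the text
```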

1.4  The cube and the coherence properties

We would like to characterize the class of asynchronous graphs generated by an event structure. The two models of computation are intrinsically different: computations are described as sequences of transitions modulo permutation in asynchronous graphs, whereas they are described as sets of events ordered by a causality order in event structures. As we will see, the situation is the same in game semantics when we associate an asynchronous game to a formula, which can be seen as an event structure with additional structure; or when we try to extract the causality structure from a strategy defined as an asynchronous subgraph of the asynchronous game. This explains why we are so much interested in clarifying the relationship between asynchronous graphs and event structures, starting from the cube property for asynchronous graphs, which

plays a fundamental role in the characterization. The cube property expresses a fundamental causality principle in the diagrammatic language of asynchronous transition systems [Bed88, Shi85, WN95]. It is related to stability in the sense of Berry [Ber79]. It was first noticed by Nielsen, Plotkin and Winskel in [NPW81], then reappeared in [PSS90] and [GLM92, Mel98, Mel05b], and was studied thoroughly by Kuske in his PhD thesis; see [DK02] for a survey. The characterization is based on the observation that every asynchronous graph of configurations generated by an event structure satisfies the following properties.

Definition 1 (Cube property). An asynchronous graph G satisfies the cube property when a hexagonal diagram in G induced by two coinitial and cofinal paths m · n · o : x → y and p · q · r : x → y is filled by 2-dimensional tiles in one of the two possible ways if and only if it is filled by 2-dimensional tiles in the other way. (The two original diagrams, omitted here, show the two tilings of the hexagon by three tiles, corresponding to the two visible halves of a cube.)

Definition 2 (Coherence property). An asynchronous graph G satisfies the coherence property when a hexagonal diagram in G is filled by 2-dimensional tiles as pictured in the left-hand side of the corresponding figure if and only if it is filled by 2-dimensional tiles as pictured in its right-hand side. (Diagrams omitted.)

Definition 3 (Rooted graph). A position ∗ of an asynchronous graph G is called a root when there exists a path ∗ → x to every position x of the graph. An asynchronous graph is rooted when it has a root.

Definition 4 (Simply connected graph). An asynchronous graph G is simply connected when every pair s, t : x → y of coinitial and cofinal paths are equivalent modulo homotopy.

Conversely, we establish that these properties are strong enough to characterize the asynchronous graphs G generated by an event structure. The idea is to reconstruct the asynchronous graph G from the following notion of events.

Asynchronous events. Given a transition m : x → y in an asynchronous graph G, we write [m] for the equivalence class of m under the residuation relation: this set [m] is called the asynchronous event (or the move) associated to the transition, and conversely we say that m is an instance of [m].

Event structure associated to an asynchronous graph. The event structure E(G) associated to an asynchronous graph G is defined as follows:
– its events are the asynchronous events of the asynchronous graph,
– e1 <G e2 when for every transition m2 : x2 → y2 in the event e2, there exists a transition m1 : x1 → y1

and a path y1 → x2,
– e1 #G e2 when every transition m1 : x1 → y1 in the event e1 and every transition m2 : x2 → y2 in the event e2 have incompatible target positions y1 and y2, in the sense that there exists no position z such that s1 : y1 → z and s2 : y2 → z for some paths s1 and s2.

This leads to the following proposition.

Proposition 5. Every rooted and simply connected asynchronous graph G satisfying the cube property and the coherence property coincides with the asynchronous graph of configurations generated by the event structure E(G).

It follows immediately that in a rooted and simply connected asynchronous graph G satisfying the cube property and the coherence property, one has:

Corollary 6. The set of paths homotopy equivalent to a given path s : x → y is the set of linearizations of the partial order ≤G on the events appearing in the transition path s.

It should be mentioned that the corollary does not need the

coherence property to hold. The cube property is typically satisfied by every asynchronous transition system and every transition system with independence in the sense of [WN95, SNW96]. The correspondence between homotopy classes and sets of linearizations of a partial order adapts to our setting a standard result on pomsets and asynchronous transition systems with dynamic independence due to Bracho, Droste and Kuske [BDK97]. The proposition is best illustrated by an example. Consider the following asynchronous graph, in which every two maximal paths are homotopic:

(12)  [figure omitted: a rooted asynchronous graph with root ∗, positions x1, …, x5 and y, and eight transitions a, b, c, d, e, f, g, h, in which every square is filled by a tile ∼]

The asynchronous graph G is rooted, simply connected, and satisfies the cube property as well as the coherence property. In that case, the proposition states that the asynchronous graph G is generated by an event structure E(G) on the asynchronous events of the asynchronous graph. It is not difficult to

see that the asynchronous events of the graph (12) are e1 = {a}, e2 = {b, e, h}, e3 = {c, d} and e4 = {f, g}. Now, the partial order on these events is defined by stating that ei ≤ ej whenever every element of ej appears after an element of ei in a maximal path from the root ∗. The induced partial order places e1 below e2 and e3, and e4 above e3, and Proposition 5 states that the asynchronous graph (12) is (isomorphic to) the asynchronous graph of configurations generated by this partial order. Let us explain briefly how the proof of Proposition 5 is established. First of all, the hypothesis that the asynchronous graph G = (V, E, s, t, ⋄) is simply connected makes it possible to define a partial order ≤ on the set of positions, defined by reachability: x ≤ y precisely when there exists a path x → y. Define then [x, y] as the interval between two positions x and y such that x ≤ y, equipped with the reachability order ≤:

[x, y] = { z ∈ V | x ≤ z ≤ y }.

The proof of Proposition 5 then reduces to showing that, in every simply connected asynchronous graph satisfying the cube property:

Lemma 7. The interval [x, y] is a finite distributive lattice, for every pair of positions x, y satisfying x ≤ y.

The statement of Proposition 5 then follows by applying Birkhoff’s representation theorem [Bir33] and a careful reconstruction of the paths using the coherence property; see the second author’s PhD thesis [Mim08] for details.

2  Asynchronous games

2.1  Asynchronous games

An asynchronous game G = (G, ∗, λ) consists of an asynchronous graph G = (V, E, s, t) (which is simply connected and satisfies the cube property) together with a distinguished initial position ∗ ∈ V and a function λ : E → {O, P} which to every transition associates a polarity: either O for Opponent or P for Proponent. A transition is supposed to have the same polarity as its residuals; polarity is therefore well-defined on moves. We also suppose that every

position x is reachable from the initial position, i.e. that there exists a path ∗ → x. A morphism of asynchronous games is a morphism between the underlying asynchronous graphs which respects the initial position and the polarities. An embedding ι : G ↪ H of a game G into a game H is a monomorphism ι between them. Given a game G, we write G∗ for the game G with polarities inverted; we also write ˆG for the game obtained from G by adding a new position ∗, which becomes the initial position, and a new Opponent transition n : ∗ → ∗G. The dual operation is denoted ´G = (ˆ(G∗))∗. Given two games G and H, we define their asynchronous product G‖H as the game whose underlying asynchronous graph is the asynchronous tensor product G‖H of the underlying asynchronous graphs of G and H, whose polarity function is the one induced by those of G and H, and whose initial position is (∗G, ∗H). We also define their sum G + H as the disjoint union of the two games in which the

two initial positions are identified.

2.2  Ingenuous strategies

Properties characterizing definable strategies were studied in the case of alternating strategies (where Opponent and Proponent moves strictly alternate in plays) in [Mel05a], and generalized to the non-alternating setting that we are considering here in [MM07, Mim08]. We recall here the basic properties of definable strategies. An interesting feature of these properties is that they allow one to reason about strategies in a purely local and diagrammatic fashion. It is easy to show that every definable family σ^χ of strategies is uniform, and moreover that the strategies σ constituting it satisfy the following properties.

Positionality. A strategy σ is positional when for every three paths s, t : ∗ → x and u : x → y,

s · u ∈ σ  and  s ∼ t  and  t ∈ σ   implies   t · u ∈ σ.

This property essentially means that a strategy is a subgraph of the game: a strategy σ induces a subgraph Gσ of the game which consists of all the positions and transitions contained in at least one play in σ; conversely, every play in this subgraph belongs

of all the positions and transitions contained in at least one play in σ, conversely every play in this subgraph belongs 14 Source: http://www.doksinet to the strategy when the strategy is positional. In fact, this graph Gσ may be seen itself as an asynchronous graph by equipping it with the tiling relation induced by the underlying game: the fact that games and strategies are mathematical structures of comparable nature is conceptually appealing and technically useful in this setting. We sometimes say that a position x or a path s : x − y is in a positional strategy σ to mean that it is in the induced graph Gσ . Cube property. A positional strategy σ : G satisfies the cube property when the subgraph Gσ it induces satisfies the cube property (Definition 1). Preservation of compatibility. A positional strategy σ : G preserves forward compatibility when every asynchronous tile of the shape (??) in the game G belongs to the subgraph Gσ of the strategy σ when its two

coinitial transitions m : x − y1 and n : x − y2 are transitions in the subgraph Gσ . Diagrammatically, p =za q y2 y1 aC ∼ CC {= CC {{{ σ3m { n∈σ x = z aC {{ CCCq∈σ { C {{ y1 aC ∼ y2 CC {= CC {{{ σ3m { n∈σ x σ3p implies where the dotted edges indicate edges in G. Dually, it preserves backward compatibility when every asynchronous tile of the shape (??) in the game G belongs to the subgraph Gσ of the strategy σ when its two cofinal transitions p : y1 − z and q : y2 − z are transitions in the subgraph Gσ . Diagrammatically, z {= aCCCq∈σ CC {{ { { y1 a ∼ = y2 m z {= aCCCq∈σ CC {{ { { y2 y1 aC ∼ CC {= CC {{{ σ3m { n∈σ x σ3p σ3p implies n x These conditions ensure that the strategy is closed under residuals, and therefore the set of plays of the strategy σ reaching the same position x is regulated by a “causality order” on the moves occurring in these plays which refines the “justification order” on moves (in the sense of

pointer games) specified by the asynchronous game. Determinism. If the graph Gσ of the strategy contains a transition n : x − y2 and a Proponent transition m : x − y1 then there exists a residual of m along n, which is moreover contains in the strategy σ, this defining a tile of the form (??). Graphically, = z aC {{ CCCq∈σ { C {{ y1 aC ∼ y2 CC {= { C { σ3m C {{ n∈σ x σ3p y1 ` > y2 BB BB }} } σ3m B }} n∈σ x implies Our concurrent notion of determinism is not exactly the same as the usual notion of determinism in sequential games: in particular, a strategy may play several Proponent moves from a given position, as long as it converges later, in other words deterministic strategies are closed under residual of paths after paths containing only Proponent moves. 15 Source: http://www.doksinet Definition 8. A positional strategy which satisfies the three conditions above (cube property, preservation of compatibility, determinism) is called ingenuous. It is

worth mentioning that the notion of ingenuous strategy may alternatively be formulated on strategies σ described as sets of plays, rather than as an asynchronous subgraph Gσ of the game; see the alternative definition in [MM07] and in the second author’s PhD thesis [Mim08].

2.3  Composition of ingenuous strategies

Every pair of strategies σ : A∗‖B and τ : B∗‖C induces by interaction a strategy σ ÷ τ of the game A‖B‖C, defined as

σ ÷ τ = { s ∈ A‖B‖C | s↾A∗,B ∈ σ  and  s↾B∗,C ∈ τ }

(we implicitly adjust the polarities of the moves). The composite σ; τ is the strategy of A∗‖C defined by hiding the moves in B from the interaction between σ and τ:

σ; τ = { s↾A∗,C | s ∈ σ ÷ τ }        (13)

We thus build a category Ging whose objects are games and whose morphisms from a game A to a game B are the ingenuous strategies σ : A∗‖B. The composite of σ : A → B and τ : B → C is the strategy σ; τ : A → C defined in (13).
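Schematically, definition (13) composes strategies by synchronizing them on the middle game and then hiding it. In the Python sketch below, plays are sequences of moves tagged with the component they belong to, and the candidate interaction sequences are supplied explicitly, since the games themselves (and prefix-closure) are left out; all names are illustrative.

```python
def project(play, components):
    """Restriction of an interaction sequence to the moves of the given components."""
    return tuple(m for m in play if m[0] in components)

def interact(sigma, tau, interactions):
    """sigma / tau interaction: sequences over A, B, C restricting to sigma on A, B
    and to tau on B, C."""
    return {s for s in interactions
            if project(s, "AB") in sigma and project(s, "BC") in tau}

def compose(sigma, tau, interactions):
    """sigma; tau, as in (13): hide the moves of the middle game B."""
    return {project(s, "AC") for s in interact(sigma, tau, interactions)}

# Tiny example: sigma forwards a request from A to B, tau forwards it from B to C.
sigma = {(("A", "req"), ("B", "req"))}
tau = {(("B", "req"), ("C", "req"))}
interactions = {(("A", "req"), ("B", "req"), ("C", "req"))}
print(compose(sigma, tau, interactions))   # {(('A', 'req'), ('C', 'req'))}
```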

The identity idA on the game A is the copycat strategy. The fact that the composite of two ingenuous strategies is also ingenuous is not entirely obvious. It is based on a subtle confluence property, stated below.

Lemma 9. Suppose that σ : A → B and τ : B → C are two ingenuous strategies and that u : ∗ → (x, z) is a play in the composite strategy σ; τ. Suppose also that there exist two plays

s1 : ∗ → (x, y1)   and   s2 : ∗ → (x, y2)   in σ

and two plays

t1 : ∗ → (y1, z)   and   t2 : ∗ → (y2, z)   in τ

such that (s1)A = (s2)A, (s1)B = (t1)B, (s2)B = (t2)B and (t1)C = (t2)C. Then there exist a position y and two plays

s : ∗ → (x, y)   and   t : ∗ → (y, z)

respectively in σ and in τ, such that sA = (s1)A = (s2)A, sB = tB, tC = (t1)C = (t2)C, and both s1 and s2 (resp. t1 and t2) are prefixes of s (resp. of t) modulo homotopy. (The diagram summarizing these plays and their matching projections is omitted here.)

Proof.

The proof is done by induction on the length of the paths s1 , s2 , t1 et t2 It is trivial when one of these paths is empty (s2 for example) since the position y = y1 suits. Suppose that the path s1 begins with a transition in A; we write this transition m : (x0 , y0 ) − (x1 , y0 ) (the case where s2 begins with a transition in A is similar). By hypothesis, we have (s1 )A = (s2 )A The path s2 is thus of the form s2 = s02 · m · s002 , where s02 contains only transitions in the game B. By definition of the game A∗ kB, the positions (x1 , y0 ) and (x, y2 ) are compatible and therefore the path m · (s2 /m) : (x0 , y0 ) − (x, y2 ) exists and is in the strategy σ since it preserves forward compatibility. We can therefore replace the path s2 by the path m · (s2 /m) in our proof and suppose that the paths s1 and s2 both begin with the same transition m in A. The induction hypothesis can be applied to the paths s1 /m : (x1 , y0 ) − (x, y1 ), s2 /m : (x1 , y0 ) − (x, y2 ), t1

et t2 , from which we can deduce that there exist two positions x and y and two paths s : (x1 , y0 ) − (x, y) and t : (y0 , z0 ) − (y, z) which satisfy the property. The paths m · s and t enable us to conclude The case where on of the paths t1 or t2 begins with a transition in C is similar. Otherwise, all the paths s1 , s2 , t1 and t2 begin with a transition in B. We write m : y0 − y10 for the transition in B at the beginning of the paths s1 and t1 , and n : y0 − y20 the transition in B at the beginning of the paths s2 and t2 . If the transitions m and n are the same, then we can conclude as previously, using the induction hypothesis. Otherwise, we will show that we can come down to this case Suppose that m is a Proponent transition in B. Since the strategy σ is deterministic, the positions y10 and y2 are compatible, and the residual path m · (s2 /m) exists and is in the strategy σ since it preserves forward compatibility. We can therefore replace the path s2 by m · (s2

/m) and t2 by m · (t2 /m) in our proof and suppose that the four paths s1 , s2 , t1 and t2 begin with the same transition, from which we can conclude as previously. The case where n is a Proponent transition is similar If both m and n are Opponent transitions, they are Proponent transitions in B ∗ kC and the paths t1 and t2 begin with the same Proponent transition. We can therefore conclude by exchanging the roles of the strategies σ and τ in the proof above. This property states that given two interactions between the strategies σ and τ leading to the same play u in the composite strategy σ; τ , their union (wrt. the prefix order modulo homotopy) can be reached by interaction. From this property of “maximal witness”, which extends in our asynchronous setting the well-known “zipping lemma” of sequential game semantics, one can deduce that Proposition 10. The composite σ; τ : A C of two ingenuous strategies σ : A B and τ : B C is an ingenuous strategy. Proof. The

fact that closure under prefix and positionality are preserved by composition is simple The proofs that the other properties are preserved by composition are all similar, we only show here the preservation of determinism. Suppose that we are given a path u : ∗ − (x, z), and two Proponent transitions m : (x, z) − (x, z1 ) and n : (x, z) − (x, z2 ) in the component C such that the plays u · m and u · n are played by the composite strategy σ; τ . The play u · m (resp. u·n) results from the interaction of two plays s1 ∈ σ and t1 ·m ∈ τ (resp s2 ∈ σ and t2 · n ∈ τ ). Lemma 9 ensures that the two plays u · m and u · n result from the interaction of essentially the same pair of plays: s ∈ σ and t · m ∈ τ for u · m, s ∈ σ and t · n ∈ τ for u · n. Since the strategy τ is deterministic, the transitions m and n form a tile (??) and the plays t · m · p and t · n · q are in τ , from which we deduce that the paths m · p and n · q are also in the

strategy σ; τ.

3  An asynchronous model of linear logic

We consider formulas of the multiplicative and additive fragment of linear logic (MALL), which are generated by the grammar

A ::= A ` A | A ⊗ A | A & A | A ⊕ A | X | X∗

where X is a variable (for brevity, we don’t consider constants). The ` and & connectives (resp. ⊗ and ⊕) are sometimes called negative or asynchronous (resp. positive or synchronous). A position is a term generated by the following grammar

x ::= † | x ` x | x ⊗ x | &L x | &R x | ⊕L x | ⊕R x

The de Morgan dual A∗ of a formula is defined as usual – (A ⊗ B)∗ = A∗ ` B∗ for example – and the dual of a position is defined similarly. Given a formula A, we write pos(A) for the set of valid positions of the formula, which are defined inductively by: † ∈ pos(A); and if x ∈ pos(A) and y ∈ pos(B) then x ` y ∈ pos(A ` B), x ⊗ y ∈ pos(A ⊗ B), &L x ∈ pos(A & B), &R y ∈ pos(A & B), ⊕L x ∈ pos(A ⊕ B) and ⊕R y ∈ pos(A ⊕ B).
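The set pos(A) is straightforward to compute from this inductive definition. In the Python sketch below, formulas are encoded as nested tuples and "*" stands for †; the encoding is an assumption made for the example, not a notation of the paper.

```python
def positions(A):
    """pos(A): valid positions of a MALL formula.  Formulas are either a string
    (a variable) or a tuple (op, left, right) with op among "par", "tensor",
    "with", "plus"; the symbol "*" stands for the unexplored region."""
    pos = {"*"}
    if isinstance(A, tuple):
        op, left, right = A
        ls, rs = positions(left), positions(right)
        if op in ("par", "tensor"):          # multiplicatives explore both sides
            pos |= {(op, x, y) for x in ls for y in rs}
        else:                                # additives explore one side only
            pos |= {(op + "L", x) for x in ls} | {(op + "R", y) for y in rs}
    return pos

A = ("with", ("tensor", "X", "Xdual"), "Y")   # the formula (X ⊗ X*) & Y
print(sorted(map(str, positions(A))))
# four formula-level positions: *, withL *, withL (* tensor *), withR *
```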

A valid position x of a formula A thus corresponds to a “prefix”, or a “partial exploration”, of this formula, the position † denoting a region of the formula that has not been explored.

3.1  Interpretation of formulas

We write ~X for a list (Xi)1≤i≤n of variables. Suppose that A and B are formulas with ~X as free variables, and that χ is a fixed function, called a context of the formula, which to every variable X ∈ ~X associates a game GX. The game GχA associated to A in the context χ is defined inductively as follows (we sometimes simply write GA instead of GχA when the context is clear). The interpretation of variables is given by the context:

GχX = χ(X)

The game GA`B is obtained from the game GA‖GB by replacing every pair of positions (x, y) by x ` y, and adding a position † and an Opponent transition † → † ` †: we therefore have GA`B ≅ ˆ(GA‖GB). The game GA&B is obtained from the disjoint union GA ⊎ GB

The game GA&B is obtained from the disjoint union GA ] GB GA`B = by replacing every position x of GA (resp. GB ) by &L x (resp &R x), and adding a position † and two Opponent transitions † − &L † and † − &R †: we therefore have GA&B ∼ = (ˆGA ) + (ˆGB ). The games associated to the other formulas are deduced by de Morgan duality: GA∗ = G∗A For example, if we interpret the variable X (resp. Y ) as the game with two positions † and x (resp † and y) and one transition between them, the interpretation of the formula (X ⊗ X ∗ ) & Y is &L (x ⊗ x∗ ) hQQQ n6 n QQQ nnn QQ n n n ∼ &L (x ⊗ †) &L († ⊗ x∗ ) hPPP m6 m PPP m mm PP m m m &L († ⊗ †) O &L † hQQ QQQ QQQ QQQ Q † &RO y &R † pp8 p p pp ppp We have made explicit the positions of the games in order to underline the fact that they correspond to partial explorations of formulas, but the naming of a position won’t play any role

in the definition of asynchronous strategies (contrary for 18 Source: http://www.doksinet example to ludics [Gir01] where the definition of games starts from logic). The interpretation of formulas can be extended to sequents S = A1 , , Am ` B1 , , Bn by GχS = (ki GχA∗ )k(ki GχBi ). i 3.2 Interpretation of proofs A strategy σ on a game G is a non-empty prefix-closed set of plays, which are paths whose source is the initial position of the game. We write σ : G to indicate that σ is a strategy on a game G. To every proof π of a formula A, we associate a family σ χ , indexed by contexts χ, of strategies on the game Gχ interpreting the sequent S in the context χ. This family of strategies describe the explorations of formulas that are “allowed by the proof” and are formally defined by induction of the proofs which are generated by the following sequent rules: ` X, X ` Γ, A ` ∆, A∗ [cut] ` Γ, ∆ ∗ [ax] ` Γ, A, B [`] ` Γ, A ` B ` Γ, A ` ∆, B

[⊗] ` Γ, ∆, A ⊗ B ` Γ, A ` Γ, B [&] ` Γ, A & B ` Γ, A [⊕L ] ` Γ, A ⊕ B ` Γ, B [⊕R ] ` Γ, A ⊕ B Every axiom ` X ∗ , X is interpreted as the family of copycat strategies σ χ on the game χ(X), such a strategy σ being defined as the smallest strategy containing the empty play ε and such that for every play s : ∗ − (x, x) in σ such that there exists an Opponent move m : x − y or a Proponent move n : x − y in χ(X), the plays s · (x, m) · (m, y) : ∗ − (y, y) or s · (n, x) · (y, n) : ∗ − (y, y) are also in σ. Suppose fixed an environment χ. – A play in the interpretation of a proof whose first rule (from bottom up) is [`] s with π as premise is either empty or of the form † − (†`†) − (x`y) where s is a play in the interpretation of π. – A play in the interpretation of a proof beginning with a [⊗] rule with π1 and π2 s as premises is either empty or of the form † − († ⊗ †) − (x ⊗ y) where s is

a play such that sΓ,A is a play in the interpretation of π1 and s∆,B is a play in the interpretation of π2 . – A play in the interpretation of a proof beginning with a [&] rule with π1 and π2 s as premises is either empty or of the form † − (&L †) − (&L x) where s is a s play in the interpretation of π1 or of the form † − (&R †) − (&R x) where s is a play in the interpretation of π2 . – A play in the interpretation of a proof beginning with a [⊕L ] with π as premise s is either empty or of the form † − (⊕L †) − (⊕L x) where s is a play in the interpretation of π. – A play in the interpretation of a proof beginning with a [⊕R ] with π as premise s is either empty or of the form † − (⊕R †) − (⊕R x) where s is a play in the interpretation of π. Suppose that χ defines n variables, and χ0 is another context such that there exists a family (ιi ) of n embeddings ιi : χ(Xi ) , χ0 (Xi ). In an obvious

way, by a coproduct 19 Source: http://www.doksinet 0 construction, this family induces an embedding ι : Gχ , Gχ , where Gχ is the game associated to a given formula A in the context χ. It is the easy to show that a play t 0 of ι(Gχ ) is in σ χ if and only if it is the image by ι of a play s in σ χ . A family of strategy is said to be uniform when this property is satisfied for every such pair of contexts χ and χ0 . 3.3 Focusing in linear logic In linear logic, a proof of the form depicted on the left-hand side of π1 π2 ` A, B, C `D [⊗] ` A, B, C ⊗ D [`] ` A ` B, C ⊗ D π1 ` A, B, C π2 [`] ` A ` B, C `D [⊗] ` A ` B, C ⊗ D can always be reorganized into the proof depicted on the right-hand side. This proof transformation can be seen as “permuting” the introduction of ⊗ after the introduction of ` (when looking at proofs from bottom-up). From the point of view of the strategies associated to the proofs, the game corresponding to the proven sequent

contains an asynchronous tile m · p ∼ n · q between the position (†, †) and the position († ` †, † ⊗ †), one of the two paths going through the intermediate position († ` †, †) and the other through (†, † ⊗ †), and the transformation corresponds to replacing the path m · p by the path n · q in the strategy associated to the proof. More generally, the introduction rules of two negative connectives can always be permuted, as can the introduction rules of two positive connectives, and the introduction rule of a positive connective can always be permuted after the introduction rule of a negative one. Informally, a negative (resp. positive) connective can always be “done earlier” (resp. “postponed”). We write π ≺ π′ when a proof π′ can be obtained from a proof π by a series of such permutations of rules. These permutations of rules are at the heart of Andreoli’s work [And92], which reveals that if a formula is provable then a proof of it can be found using a focusing proof search, which satisfies the following discipline: if the sequent contains a negative formula then a

negative formula should be decomposed (negative phase), otherwise a positive formula should be chosen and decomposed repeatedly until a (necessarily unique) formula is produced (positive phase) – this can be formalized using a variant of the usual sequent rules for linear logic. From the point of view of game semantics, this says informally that every strategy can be reorganized into one playing alternately a “bunch” of Opponent moves and a “bunch” of Proponent moves (which correspond to the interpretation of synthetic connectives). All this suggests that proofs in sequent calculus are too sequential: they contain inessential information about the ordering of rules, and we would like to work with proofs modulo the congruence generated by the ≺ relation. Semantically, this can be expressed as follows.

Definition 11 (courtesy). A strategy σ is courteous when for every play s = s1 · m · p · s2

relation m · p ∼ n · q defines a tile (??) in the game, and such that m is a Proponent transition or p is an Opponent transition, the play s′ = s1 · n · q · s2 is also in the strategy σ. We write σ̃ for the smallest courteous strategy containing σ. Courteous strategies are less sequential than usual strategies: suppose that σ is the strategy interpreting a proof π of a formula A; then a play s is in σ̃ if and only if it is a play in the strategy interpreting some proof π′ such that π ≺ π′. The courtesy property ensures that a strategy σ which accepts an Opponent move n after playing an independent Proponent move m is ready to delay its own action, and to accept the move n before playing the move m. Therefore, the "causality order" on moves induced by such a strategy refines the underlying "justification order" of the game by adding only order dependencies m ⪯ n where m is an Opponent move. This adapts to the non-alternating setting the fact that, in

alternating games, the causality order p ⪯ q provided by the view of an innocent strategy coincides with the justification order when p is Proponent and q is Opponent. It should be noted that the composite σ; τ : A → C of two courteous and ingenuous strategies σ : A → B and τ : B → C is itself a courteous strategy (this can be shown in the same way as in Proposition 10). In fact, courteous ingenuous strategies define a category Gcourt of 2-player games and courteous ingenuous strategies. Note that this category is not a subcategory of the category Ging of ingenuous strategies. Indeed, the courteous identity on a game A is the buffer strategy buf A : A → A defined as the smallest courteous strategy containing the identity idA in Ging. On the other hand, an ingenuous strategy σ : A → B is courteous if and only if it is invariant under composition with the buffer strategy, in the sense that the equalities buf A ; σ = σ = σ; buf B hold, thus adapting in our setting the notion of

buffered asynchronous agent introduced by Selinger [Sel97]. In that respect, the category of courteous strategies may be deduced conceptually from the category Ging, by restricting to the morphisms f : A → B such that buf A ; f ; buf B = f, in a construction similar to the Karoubi envelope.

4 Concurrent games

We recall here briefly the concurrent game model defined by Abramsky and Melliès [AM99]. A concurrent strategy ς on a complete lattice (D, ≤) is a continuous closure operator on this lattice. A closure operator is a function ς : D → D which is
1. increasing: ∀x ∈ D, x ≤ ς(x)
2. idempotent: ∀x ∈ D, ς ◦ ς(x) = ς(x)
3. monotone: ∀x, y ∈ D, x ≤ y ⇒ ς(x) ≤ ς(y)
Such a function is continuous when it preserves joins of directed subsets. Informally, an order relation x ≤ y means that the position y contains more information than x. With this intuition in mind, the first property expresses the fact that playing a strategy increases the amount of

information in the game, the second that a strategy gives all the information it can given its knowledge in the current position (so that if it is asked to play again it does not have anything to add), and the third that the more information the strategy has about the current position, the more information it has to deliver when playing. Every such concurrent strategy ς induces a set of fixpoints defined as the set
fix(ς) = { x ∈ D | ς(x) = x }
This set can easily be shown to satisfy the two following properties:
(M) it is closed under arbitrary meets,
(J) it is closed under joins of directed subsets.
The first property means that the set fix(ς) is a Moore family on the complete lattice D. Conversely, it is well-known that any Moore family X on D, that is, any subset of D closed under arbitrary meets, induces a closure operator X• defined by
X•(x) = ⋀ { y ∈ X | x ≤ y }
whose set of fixpoints is precisely X. This closure operator is moreover continuous when X also satisfies property (J), and is thus a concurrent strategy.
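As a concrete illustration of this correspondence, here is a small programming sketch, not part of the paper's formal development: it works over a toy finite lattice (the powerset of {1, 2, 3} ordered by inclusion, standing in for the complete lattice (D, ≤)), with an arbitrary hypothetical Moore family X, and the helper names meet and closure are ad hoc to the example. It builds the operator X•(x) = ⋀{ y ∈ X | x ≤ y } and checks the three closure-operator axioms together with the fact that its fixpoints are exactly X.

```python
from itertools import chain, combinations

# Toy complete lattice D: the powerset of {1, 2, 3} ordered by inclusion.
ELEMS = {1, 2, 3}
D = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(ELEMS), k) for k in range(len(ELEMS) + 1))]

def meet(family):
    """Meet (here: intersection) of a family of elements; the empty meet is the top element."""
    family = list(family)
    return frozenset(ELEMS) if not family else frozenset.intersection(*family)

# A hypothetical Moore family X: a subset of D closed under arbitrary meets
# (including the empty meet, so the top element belongs to X).
X = {frozenset(ELEMS), frozenset({1, 2}), frozenset({1}), frozenset()}

def closure(x):
    """The closure operator induced by X: closure(x) = meet of { y in X | x <= y }."""
    return meet(y for y in X if x <= y)

# The three closure-operator axioms, and the fact that fix(closure) = X.
assert all(x <= closure(x) for x in D)                                 # increasing
assert all(closure(closure(x)) == closure(x) for x in D)               # idempotent
assert all(closure(x) <= closure(y) for x in D for y in D if x <= y)   # monotone
assert {x for x in D if closure(x) == x} == X                          # fixpoints are exactly X
print("X induces a closure operator whose fixpoints are exactly X")
```

On a finite lattice every directed subset has a greatest element, so any monotone operator automatically preserves directed joins; the interest of property (J) therefore only shows up for infinite games.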

We now explain how to construct a closure operator from a strategy in Gcourt and vice versa. Suppose that G is a game which is finite, meaning that there is no infinite path m1 · m2 · · · starting from the initial position of the game. The game G induces a partial order on its set of positions, defined by x ≤ y iff there exists a path x − y, which can be completed into a complete lattice D by formally adding a top element ⊤. Notice that since the asynchronous graph G satisfies the cube property and is simply connected, Lemma 7 implies that two positions x and y are compatible in G if and only if x ∨ y ≠ ⊤. A position x of a strategy σ is halting when there is no Proponent move m : x − y in σ: in such a position, the strategy is either done or is waiting for its Opponent to play. We write σ° for the set of halting positions of a strategy σ. Conversely, to every set X of positions of G, we can associate a strategy X↑ containing all the plays in G whose

intermediate positions x are such that x ≤P y for some position y ∈ X. Here, we write x ≤P y when there exists a path s : x − y containing only Proponent moves. Interestingly, these two operations form a section-retraction pair: a courteous ingenuous strategy σ is characterized by its set of halting positions. This is shown in Proposition 13. We first need a technical lemma in order to show this.
Lemma 12. For every position x of a courteous ingenuous strategy σ on a finite game G, there exists a halting position y of σ such that x ≤P y. If moreover x ≤ z for some halting position z of σ then y ≤ z.
Proof. Consider a maximal path s : x − y in σ containing only Proponent moves. Such a maximal path exists because the game is supposed to be finite. The position y it reaches is then necessarily halting. Suppose moreover that there exists a halting position z of σ such that x ≤ z. There exist two plays t : ∗ − x and u : ∗ − z in σ. Since the positions x and

z are compatible (since x ≤ z), the residual u/t : x − z exists in G and it is in σ by preservation of forward compatibility. By determinism of σ, since the path s contains only Proponent moves, the positions y and z are compatible and the residual s/(u/t) exists. Since this residual contains only Proponent moves, it is necessarily empty, because z is a halting position. Graphically, the plays t : ∗ − x and u : ∗ − z, the path s : x − y and the residuals u/t : x − z and (u/t)/s : y − z assemble into a diagram over the initial position ∗. Therefore, the path (u/t)/s : y − z shows that y ≤ z.
We can now proceed with the characterization of strategies by their halting positions.
Proposition 13. For every courteous ingenuous strategy σ on a finite game G, we have σ = (σ°)↑.
Proof. The equality is proved by showing inclusions in both directions. Suppose that s : ∗ − y is a play in σ. For every position x visited by s, by Lemma 12, there exists a halting position x′ such that x ≤P x′ and therefore the play s is in

(σ°)↑. Conversely, we show that every play s : ∗ − y in (σ°)↑ is in σ by induction on the length of s. If s is the empty play then it is in σ, which is supposed to be non-empty and closed under prefix. Otherwise, the play s can be written as the concatenation of a play t : ∗ − x with a transition m : x − y. By induction hypothesis, the play t is in σ. Moreover, there exists a halting position z and a path t′ : y − z containing only Proponent moves. The position z being a halting position of σ, there exists a play u : ∗ − z in σ. Since x ≤ z, the positions x and z are compatible: the residual u/t : x − z exists and is in σ by preservation of forward compatibility. Graphically, the plays t and u, the transition m : x − y, the path t′ : y − z and the residual u/t : x − z form a diagram in which m · t′ ∼ u/t. The path u/t : x − z is in σ, m · t′ ∼ u/t, and t′ contains only Proponent moves. Using the fact that σ is courteous, it can then be shown that the path m · ((u/t)/m) : x − z is in the strategy σ (informally, the

transition m can be pulled down in u/t since the other transitions are all Proponent). We finally conclude that the play s = t · m is in σ.
The sets of halting positions generated by courteous ingenuous strategies can moreover be characterized as follows, generalizing Proposition 12 of [Mel04] to our non-alternating setting.
Proposition 14. Given a set of positions X in a finite game G, there exists a courteous and ingenuous strategy σ such that X = σ° if and only if X satisfies the following properties.
1. X is closed under intersection,
2. every path x − w where x ∈ X and w ∈ X factors as an Opponent transition m : x − y followed by a path y − w,
3. every path consisting of an Opponent transition m : x − y followed by a path y − w, where x ∈ X and w ∈ X, factors as the transition m : x − y, followed by a path s : y − z containing only Proponent transitions with z ∈ X, followed by a path z − w,
4. for every two paths consisting of an Opponent transition m1 : x − y1 followed by a path y1 − w1, and of an Opponent transition m2 : x − y2 followed by a path y2 − w2, where x, w1, w2 ∈ X and m1, m2 are Opponent

transitions involved in a homotopy tile (that is, m1 : x − y1 and m2 : x − y2 close into a common position z, with m1 · (m2/m1) ∼ m2 · (m1/m2) : x − z), there exists a path z − w such that w ∈ X,
5. initial condition: there exists a position x ∈ X such that ∗ ≤P x.
1. The set X is closed under intersection: for all x, y ∈ X, x ∧ y ∈ X.
2. Preservation of compatibility: for every two positions x and y of X which are compatible in G, x and y are also compatible in X.
3. For every position x ∈ X such that there exists a position z ∈ X for which x ≤ z, either x = z or there exists an Opponent transition m : x − x′ and a position y ∈ X such that x′ ≤P y ≤ z.
4. There exists a position x ∈ X such that ∗ ≤P x.
5. Receptivity: for every position x ∈ X such that there exists an Opponent transition m : x − y, the position y is dominated by a position of X.
6. Totality: for every position x ∈ X, either x is maximal in G or there exists an Opponent transition m : x − y.
Proof. Suppose that σ : G is a courteous

ingenuous strategy whose halting positions are X = σ°. The strategy σ being ingenuous, the graph Gσ it induces satisfies the cube property (Definition 1) and therefore, by Lemma 7, the set of its positions is closed under intersection and bounded union. We now show that X satisfies the properties above.
1. Suppose given two positions x and y in X. Since x, y ∈ σ and σ satisfies the cube property, by Lemma 7, x ∧ y is also in σ. By Lemma 12, there exists a halting position z ∈ X such that x ∧ y ≤P z, z ≤ x and z ≤ y. By the defining property of the meet, necessarily z = x ∧ y and therefore x ∧ y ∈ X.
2. Suppose that x and y are two halting positions of σ which are compatible in G. There exist two plays s : ∗ − x and t : ∗ − y in σ. The positions being compatible, the residuals t/s : x − x ∨ y and s/t : y − x ∨ y exist in G and, by preservation of forward compatibility, these residuals are in σ: the position x ∨ y is in σ. Therefore there exists

a halting position z in σ such that x ∨ y ≤P z, from which we can conclude.
3. Suppose that x ∈ X is a position such that there exists a position z ∈ X for which x ≤ z. There exist two plays s : ∗ − x and t : ∗ − z. Since x ≤ z, the positions x and z are compatible: the residual t/s : x − z exists and is in the strategy σ by preservation of forward compatibility. This path is either empty, in which case x = z, or it necessarily begins with an Opponent transition m : x − x′ because x is halting in σ. Since x′ ≤ z, by Lemma 12, there exists a halting position y such that x′ ≤P y ≤ z.
4. Since the strategy σ is supposed to be non-empty, the initial position ∗ is in the strategy and we can conclude by Lemma 12.
5. Suppose that there exists an Opponent transition m : x − y such that x ∈ X. By receptivity of the strategy, the position y is also in σ and we conclude by Lemma 12.
6. Immediate since σ is supposed to

be total.
Conversely, suppose that X is a set of positions satisfying the properties. This set induces a strategy σ = X↑. We show here that this strategy is courteous, ingenuous and such that σ° = X. By definition of X↑, the strategy σ can easily be shown to be positional and closed under prefix. We now show that σ satisfies the other properties required to be a courteous ingenuous strategy.
– Determinism. Suppose that m : x − y1 and n : x − y2 are two coinitial compatible transitions in σ, with m being Proponent, and write y = y1 ∨ y2. By definition of X↑, there exist two positions z1 and z2 in X such that y1 ≤P z1 and y2 ≤P z2; in particular, there exists a path s1 : y1 − z1 containing only Proponent moves. The set X being closed under intersections, the position z1 ∧ z2 is also in X. Since the path m · s1 contains only Proponent moves, by axiom 3 we necessarily have z1 ∧ z2 = z1. Therefore the residuals m/n and n/m exist and they are in the strategy σ by

definition of X↑.
– Preservation of forward compatibility. Suppose that m : x − y1 and n : x − y2 are two coinitial compatible distinct transitions in σ. If either m or n is Proponent then we can conclude by determinism. Otherwise, m and n are both Opponent transitions. In this case, we can conclude that the residuals m/n and n/m are in the strategy σ by axiom 6.
– Preservation of backward compatibility. Easily deduced from the closure of X under intersections.
– Cube property. Since the game G is supposed to satisfy the cube property and the strategy σ preserves both forward and backward compatibility, it can easily be shown that the strategy σ satisfies the cube property.
– Receptivity. Immediate consequence of axiom 5.
– Totality. Immediate consequence of axiom 6.
– Courtesy. Easy by definition of X↑.
Finally, remark that (X↑)° = X. Namely, suppose that x ∈ X is such that there exists a transition m : x − y. By definition of X↑, there exists a position z ∈ X such

that y ≤P z and, by the third property satisfied by X, the transition m is necessarily Opponent. Therefore x ∈ σ° and X ⊆ σ°. Conversely, suppose that x ∈ σ°. By definition of X↑, there exists a position y ∈ X such that x ≤P y. Since x is halting, we necessarily have x = y.
So, a courteous ingenuous strategy σ is characterized by its set σ° of halting positions; this set satisfies property (M), and property (J) is trivially satisfied since the game is finite. This set of halting positions can therefore also be characterized as the set of fixpoints of a concurrent strategy on the positions of the game. In this sense, concurrent strategies are close to the intuition of focused proofs: given a position x, they play at once many Proponent moves in order to reach the position which is the image of x. In order to work for infinite games, this definition can be extended to infinite positions by ideal completion: an

infinite halting position x̂ is defined as an infinite downward-closed directed set of finite halting positions. The previous characterizations can then be extended to infinite games by considering the set σ° of finite or infinite halting positions of a strategy σ, which can then be shown to satisfy (J). This generalization is important, for example, in order to be able to handle exponentials. However, this is outside the scope of this paper.

5 Towards full completeness

Receptivity. If σ contains a play s : ∗ − x and there exists an Opponent move m : x − y in the game, then the play s · m : ∗ − y is also in σ.
Totality. If σ contains a play s : ∗ − x and there is no Opponent transition m : x − y in the game, then either the position x is terminal (there is no transition with x as source in the game) or there exists a Proponent transition m : x − y such that s · m is also in σ.

6 Scheduled strategies

The operation F which associates to every game G its

set F(G) = VG of vertices is a function from games to sets. Moreover, given a courteous ingenuous strategy σ : G → H, the set F(σ) = σ° is included in VG‖H ≅ F(G) × F(H) and can therefore be seen as a relation from F(G) to F(H). One would expect the operation F to be a functor from the category of games to the category of relations; however this is not the case because F does not preserve composition (and similarly, the operation which associates to an asynchronous strategy the corresponding concurrent strategy, described in the previous section, is not functorial). This comes essentially from the fact that the category is compact closed, which means that it has the same strategies on the games interpreting the formulas A ⊗ B and A ⅋ B (up to the polarity of the first move). For example, if X (resp. Y) is interpreted by the game with one Proponent transition † − x (resp. † − y), the interpretations of X* ⅋ Y* and X ⊗ Y are respectively

        x* ⅋ y*                          x ⊗ y
       ↗       ↖                       ↗      ↖ n
   x* ⅋ †  ∼  † ⅋ y*      and      x ⊗ †  ∼  † ⊗ y
       ↖       ↗                       ↖      ↗ m
        † ⅋ †                           † ⊗ †
          ↑                               ↑
          †                               †

Now, consider the strategy σ : X* ⅋ Y* which contains only the prefixes of the path † − († ⅋ †) − (x* ⅋ †) − (x* ⅋ y*) and the strategy τ : X ⊗ Y which contains only the prefixes of the path † − († ⊗ †) − († ⊗ y) − (x ⊗ y). The fixpoints of the corresponding concurrent strategies are respectively σ° = { †, † ⅋ †, x* ⅋ †, x* ⅋ y* } and τ° = { x ⊗ y }. From the point of view of asynchronous strategies, the only positions of X ⊗ Y reachable by both strategies are † and † ⊗ †. However, from the point of view of the associated concurrent strategies, they admit the position x ⊗ y as a common position in X ⊗ Y. From this observation, it is easy to build two strategies σ : A → B and τ : B → C such that ((τ ◦ σ)°)• ≠ (τ°)• ◦ (σ°)• (we refer the reader to [AM99] for the

definition of the composition of closure operators). In the example above, the strategy τ is the culprit: as mentioned in the introduction, the two strategies on X and Y should be independent in a proof of X ⊗ Y, whereas here the strategy τ makes the move n depend on the move m. Formally, this dependence expresses the fact that the move n occurs after the move m in every play of the strategy τ. In [MM07], we have introduced a scheduling criterion which dynamically enforces this independence between the components of a tensor: a strategy satisfying this criterion is essentially a strategy such that, in a sub-strategy on a formula of the form A ⊗ B, no move of A depends on a move of B and vice versa. We recall here briefly the properties of scheduled strategies (which satisfy this criterion); detailed proofs can be found in [Mim08]. Every definable strategy satisfies this criterion and moreover:
Theorem 15. Strategies satisfying the scheduling criterion form a subcategory of the

category of ingenuous strategies, and the operation σ ↦ (σ°)• extends into a functor from this category to the category of concurrent games. This property enables us to recover more precisely the focusing property directly at the level of strategies, as follows. Suppose that σ is an ingenuous strategy interpreting a proof π of a sequent ⊢ Γ. Suppose moreover that s : x − y is a maximal play in σ. By receptivity and courtesy of the strategy, this play is homotopic in the graph Gσ to the concatenation of a path s1 : x − x1 containing only Opponent moves, where x1 is a position such that there exists no Opponent transition m : x1 − x′1, and a path s2 : x1 − y. Similarly, by totality of the strategy, the path s2 is homotopic to the concatenation of a path s′2 : x1 − y1 containing only Proponent moves, where y1 is a position which is either terminal or such that there exists an Opponent transition m : y1 − y′1, and a path s″2 : y1 − y. The path s′2 consists

in the partial exploration of positive formulas, one of them being explored until a negative subformula is reached. By courtesy of the strategy, Proponent moves permute in a strategy and we can suppose that s′2 consists only of such a maximal exploration of one of the formulas available at the position x. If at some point a branch of a tensor formula is explored, then by the scheduling criterion it must also be able to explore the other branch of the formula. By repeating this construction on the play s″2, every play of σ can be transformed into one which alternately explores all the negative formulas and explores one positive formula until negative formulas are reached. By formalizing this reasoning further, one can therefore show:
Theorem 16. In every asynchronous strategy interpreting a proof in MALL is included a strategy interpreting a focusing proof of the same sequent.
A motivation for introducing concurrent games was to solve the well-known Blass problem, which reveals

that the "obvious" game model for the multiplicative fragment of linear logic has a non-associative composition. Abramsky explains in [Abr03] that there are two ways to solve this: either syntactically by considering a focused proof system or semantically by using concurrent games. Thanks to asynchronous games,

focus can be done on multiple positives formulas in the positive phase of the focused proof-search. Those multi-focused proofs were shown to provide canonical representatives of focused proofs (the interpretation of the canonical multi-focused proof can be recovered by generalizing Theorem 16). We are currently investigating a generalization of this result by finding canonical representatives for concurrent strategies. 7 Conclusion Ingenuous is not innocent. Opens a question: what is innocence? Scheduling Several approaches? Acknowledgments. The authors thank Pierre-Louis Curien and Martin Hyland for lively and inspiring discussions on the topic of that article. References [Abr03] S. Abramsky Sequentiality vs concurrency in games and logic Mathematical Structures in Computer Science, 13(04):531–565, 2003 [AJ94] S. Abramsky and R Jagadeesan Games and Full Completeness for Multiplicative Linear Logic The Journal of Symbolic Logic, 59(2):543–574, 1994 [AM99] S. Abramsky and

P.-A. Melliès. Concurrent games and full completeness. In LICS, volume 99, pages 431–442, 1999.
[And92] J.-M. Andreoli. Logic Programming with Focusing Proofs in Linear Logic. Journal of Logic and Computation, 2(3):297–347, 1992.
[BDK97] F. Bracho, M. Droste, and D. Kuske. Representation of computations in concurrent automata by dependence orders. Theoretical Computer Science, 174(1-2):67–96, 1997.
[Bed88] M. A. Bednarczyk. Categories of asynchronous systems. PhD thesis, University of Sussex, 1988.
[Ber79] G. Berry. Modèles complètement adéquats et stables des lambda-calculs typés. Thèse de Doctorat d'État, Université Paris VII, 1979.
[Bir33] G. Birkhoff. On the combination of subalgebras. In Proceedings of the Cambridge Philosophical Society, volume 29, pages 441–464, 1933.
[CF05] P.-L. Curien and C. Faggian. L-Nets, Strategies and Proof-Nets. In CSL, pages 167–183. Springer, 2005.
[CMS08] K. Chaudhuri, D. Miller, and A. Saurin. Canonical Sequent

Proofs via Multi-Focusing. In International Conference on Theoretical Computer Science, 2008.
[DK02] M. Droste and D. Kuske. Automata with concurrency relations – a survey. Advances in Logic, Artificial Intelligence and Robotics, pages 152–172, 2002.
[Gir01] J.-Y. Girard. Locus solum: from the rules of logic to the logic of rules. Mathematical Structures in Computer Science, 11(3):301–506, 2001.
[GLM92] G. Gonthier, J.-J. Lévy, and P.-A. Melliès. An abstract standardisation theorem. In Proc. of the 8th Annual Symposium on Logic in Computer Science, pages 72–81, 1992.
[GM04] D. R. Ghica and A. S. Murawski. Angelic Semantics of Fine-Grained Concurrency. In FoSSaCS, pages 211–225, 2004.
[Gou00] É. Goubault. Geometry and Concurrency: A User's Guide. Mathematical Structures in Computer Science, 10(4):411–425, 2000.
[HO00] M. Hyland and L. Ong. On Full Abstraction for PCF: I, II and III. Information and Computation, 163(2):285–408, December 2000.
[Joy77] A. Joyal. Remarques sur la théorie

des jeux à deux personnes. Gazette des Sciences Mathématiques du Québec, 1(4):46–52, 1977.
[Lai05] J. Laird. A game semantics of the asynchronous π-calculus. Proceedings of 16th CONCUR, pages 51–65, 2005.
[Maz88] A. W. Mazurkiewicz. Basic notions of trace theory. Lecture Notes in Computer Science, vol. 354, pages 285–363, 1988.
[Mel98] P.-A. Melliès. Axiomatic rewriting 4: A stability theorem in rewriting theory. Logic in Computer Science, pages 287–298, 1998.
[Mel04] P.-A. Melliès. Asynchronous games 2: the true concurrency of innocence. In Proceedings of the 15th CONCUR, number 3170 in LNCS, pages 448–465. Springer Verlag, 2004.
[Mel05a] P.-A. Melliès. Asynchronous games 4: a fully complete model of propositional linear logic. Proceedings of 20th LICS, pages 386–395, 2005.
[Mel05b] P.-A. Melliès. Axiomatic Rewriting 1: A diagrammatic standardization theorem. Lecture Notes in Computer Science, 3838:554–638, 2005.
[Mim08] S. Mimram. Sémantique des jeux asynchrones et

réécriture 2-dimensionnelle. PhD thesis, PPS, CNRS – Université Paris Diderot, 2008.
[MM07] P.-A. Melliès and S. Mimram. Asynchronous Games: Innocence without Alternation. In Proceedings of the 18th International Conference on Concurrency Theory CONCUR, volume 4703 of Lecture Notes in Computer Science, pages 395–411. Springer, Heidelberg, 2007.
[NPW81] M. Nielsen, G. Plotkin, and G. Winskel. Petri Nets, Event Structures and Domains, Part I. Theoretical Computer Science, 13:85–108, 1981.
[PSS90] P. Panangaden, V. Shanbhogue, and E. W. Stark. Stability and Sequentiality in Dataflow Networks. In International Conference on Automata, Languages and Programming, volume 443 of LNCS, pages 253–264. Springer Verlag, 1990.
[Sel97] P. Selinger. First-order axioms for asynchrony. In CONCUR '97, number 1243 in Lecture Notes in Computer Science, pages 376–390. Springer, 1997.
[Shi85] M. W. Shields. Concurrent Machines. The Computer Journal, 28(5):449–465,

1985.
[SNW96] V. Sassone, M. Nielsen, and G. Winskel. Models for concurrency: Towards a classification. Theoretical Computer Science, 170(1):297–348, 1996.
[VY06] D. Varacca and N. Yoshida. Typed Event Structures and the π-calculus. MFPS, pages 373–398. Elsevier, 2006.
[WN95] G. Winskel and M. Nielsen. Models for concurrency. In Handbook of Logic in Computer Science, volume 3, pages 1–148. Oxford University Press, 1995.