Com S 541 Lecture  -*- Outline -*-

* Discussion (4.8)

Q: What are the advantages of declarative programming?
   compositional or modular

Q: Can we do everything efficiently with the declarative model?

** efficiency (4.8.1)

Basic problem: CPUs want to manipulate data in place,
   so hard to implement...
So may have to rewrite the program...
   - incremental modifications of large data:
       have to be careful to "single thread" it
       (never access old state)
   - memoization:
       requires a change in interface
   - may be more complex, due to lack of expressiveness:
       see transitive closure algorithm in 6.8.1
So can't be both efficient and natural simultaneously

** modularity (4.8.2)

Q: What does it mean to be modular?
   That a change to one part can be done
   without changing the rest.

Q: What problems are hard to modularize in the declarative model?
   - memoization, since the accumulator affects
     interfaces of other code
   - instrumenting programs (counters for performance, etc.)

Q: Why not use a compiler/preprocessor to translate
   the stateful model to the declarative model?
   inefficient, due to all the argument passing

** nondeterminism (4.8.3)

Q: Is the declarative model always deterministic?
   Yes

Q: Might we sometimes want nondeterminism?
   yes, "Components that are truly independent behave
   nondeterministically w.r.t. each other"
   Example: merging streams from different clients
   See figure 4.33

Q: Can two clients send to the same server without being coordinated?
   No

      fun {Server InS1 InS2}
         ...
      end

   Which one is read first?

   Solutions: nondeterministic wait (WaitTwo),
   weak state (IsDet), or state

Another problem (4.8.3.2): a video display application

** real world (4.8.4)

"The real world is not declarative."
It has state and concurrency.  This leads to:
 - interfacing problems
 - specification problems

** picking the right model (4.8.5)

What to do about the limits of the declarative model?
------------------------------------------
Rule of least expressiveness:

   For each component, use the least expressive
   computation model that results in a "natural"
   program.
------------------------------------------

Q: What is "natural"?

** extended models (4.8.6)

------------------------------------------
EXTENDED MODELS

Declarative sequential:
 - functional programming, partial values
 - algebraic equational reasoning

Declarative concurrent:
 - declarative + threads + by-need
 - algebraic equational reasoning
   + reasoning about flow control

Declarative with exceptions:
 - no longer declarative
 - reasoning by operational semantics

Message-passing concurrency:
 - declarative concurrency + ports
 - allows nondeterminism
 - can restrict nondeterminism to
   small parts of the program

Stateful:
 - normal sequential programming
 - reasoning based on histories (states)

Shared-state concurrency:
 - stateful + threads
 - reasoning is hard

Relational:
 - declarative model + search
 - reasoning based on logic(?)
------------------------------------------

** Using different models together (4.8.7)

Putting together modules with impedance matching:
   use a module written in a small model
   as part of a larger model's system

Examples:
 - using a sequential component in a concurrent model,
   via a serializer (monitor)
 - using a declarative component in a stateful model,
   via a storage manager that passes context to the
   declarative module, and keeps it for next time
 - using a centralized component in a distributed model,
   via a collector that accepts requests and passes
   them to the component
 - using a component intended for a secure model in an
   insecure model, via a protector that insulates the
   computation, verifying requests
 - etc.

** Advanced topics (4.9)

*** declarative concurrent model with exceptions (4.9.1)

Q: What situations raise exceptions?
   Binding two distinct values to a variable;
   executing an operation outside its domain
   (such as dividing by zero)

Q: Is the declarative model with exceptions still declarative?
No, for example:

   declare X
   thread try X = 1 catch _ then skip end end
   thread try X = 2 catch _ then skip end end

Q: What happens if the execution of a by-need trigger
   cannot complete normally?

   X = {ByNeed fun {$} A=foo(1) B=foo(2) in A=B A end}

   The function throws an exception when X is needed,
   so what value should X get?
   Can't be foo(1) or foo(2).
   But ByNeed promised a value!

What to do?
   Oz gives an unhandled exception in the thread
   spawned by ByNeed, so the whole program fails.

Could we make that more robust?
   Yes, we could raise an exception in the thread
   that needs the value.
   How to make sure it's raised in that thread?
   Use a special value...

------------------------------------------
FAILED VALUES

Syntax:

   <statement> ::= ... | {FailedValue X Y}

Sugars:

   Y = {FailedValue X}  ==>  {FailedValue X Y}

Recall: operational semantics of by-need triggers

[trigger creation]
({({ByNeed X Y},E)|Rest} + MST, s)
--> ({Rest} + MST, s')
where unbound(s, E(Y))
and s' = addTrigger(s, trig(E(X),E(Y)))

[ByNeed done]
({({ByNeed X Y},E)|Rest} + MST, s)
--> ({({X Y},E)|nil} + {Rest} + MST, s')
where determined(s, E(Y))

[trigger activation]
({ST} + MST, s)
--> ({({X Y},{X->x,Y->y})|nil} + {ST} + MST, s')
where needs(s)(ST,y) and hasTrigger(s,trig(x,y))
and s' = remTrigger(s,trig(x,y))
------------------------------------------

Q: How would we make the operational semantics use failed values?
   Can't be at time of trigger creation,
   since there is no problem then.
   Certainly not for [ByNeed done].
   So it must be at trigger activation, but how?

[trigger activation revised]
({ST} + MST, s)
--> ({(try {X Y} catch Z then {FailedValue Z Y} end,
       {X->x,Y->y})|nil} + {ST} + MST, s')
where needs(s)(ST,y) and hasTrigger(s,trig(x,y))
and s' = remTrigger(s,trig(x,y))

[FailedValue]
({({FailedValue X Y},E)|Rest} + MST, s)
--> ({Rest} + MST, s')
where unbound(s,E(Y))
and s' = bind(s)(s(E(Y)), failed(E(X)))

Q: How to get the rules to work with these failed values?
Have to modify [value creation to unbound] and
[value unification] to be sure the value is not a
failed value, and add 2 new rules for failed values
to make them throw the exception.

*** lazy execution (4.9.2)

**** language design

Q: Should an imperative language be lazy by default?
   No, too hard to understand.

Q: Should a declarative language be lazy by default?
   Many programs require more than one model
   - in the small (algorithms):
       eagerness is important for controlling
         worst-case execution time
       laziness is important for persistent
         data structures
   - in the large:
       both eagerness and laziness are useful
       for interfacing components
         push style = eager execution
         pull style = lazy execution
   The authors think it best to have eager as the
   default, with lazy available.

Can encode lazy evaluation explicitly
(if we have cells + closures).

Can you encode eager evaluation using only lazy evaluation?
   Yes, if we can make demands.

**** reduction orders

------------------------------------------
REDUCTION ORDERS

def: normal order reduction =
       evaluating arguments only when needed,
       in leftmost outermost order

def: applicative order reduction =
       evaluate all arguments before application

Example:

   fun {OhNo} {OhNo} end  % loops forever
   fun {Three X} 3 end

   {Three {OhNo}}

def: nonstrict evaluation order =
       any evaluation order that terminates
       when normal order does

def: lazy order =
       an evaluation order that does the minimum
       number of reduction steps

   local X={F 4} in X+X end
------------------------------------------

Church-Rosser (confluence property):
   if A --> B and A --> B', then there is some C
   such that B -->* C and B' -->* C.

Q: Why is a nonstrict language hard to extend with explicit state?
   Can't predict when or how often side effects
   will happen.

**** parallelism

Q: What is speculative execution?
   executing expressions before it is known
   that their value is needed

Q: Why can we do that with the declarative model?
   Because such computations don't change the
   world's state, so they can be dropped without
   compensating.
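The two points above, encoding lazy evaluation with cells + closures and the {Three {OhNo}} reduction-order example, can be sketched in Python (the lecture's code is in Oz; the names `lazy`, `oh_no`, and `three` are mine, not from the book):

```python
def lazy(f):
    """Call-by-need encoded with one mutable cell and a closure:
    f runs at most once, on first demand."""
    cell = {"forced": False, "value": None}
    def force():
        if not cell["forced"]:
            cell["value"] = f()
            cell["forced"] = True
        return cell["value"]
    return force

def oh_no():                  # like fun {OhNo} {OhNo} end: diverges if evaluated
    while True:
        pass

def three(x):                 # like fun {Three X} 3 end: never needs X
    return 3

# Applicative order would evaluate oh_no() before the call and loop
# forever; passing an unforced thunk mimics normal order.
result = three(lazy(oh_no))   # oh_no is never run

calls = []
x = lazy(lambda: calls.append("eval") or (4 + 4))
v = x() + x()                 # like local X={F 4} in X+X end: one evaluation
```

Forcing `x` twice evaluates the thunk only once, which is the "minimum number of reduction steps" property that distinguishes lazy order from mere nonstrictness.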
*** dataflow variables as communication channels (4.9.3)

Q: How is a dataflow variable like a communication channel?
   binding, like asynchronous send
   waiting (suspension) for the variable to be bound,
     like synchronous receive

Q: Can you do asynchronous receive and synchronous send?
   yes:
   use the variable, without generating a need,
     like asynchronous receive
   create a ByNeed to bind the variable, then wait,
     like synchronous send

Q: What does "nonblocking" mean?
   returns immediately; hard to do for receive
   can implement using IsDet:
     {IsDet X} is true when X is bound,
     false otherwise

*** synchronization (4.9.4)

Waiting synchronizes the thread doing the waiting
and the one generating the result.

------------------------------------------
SYNCHRONIZATION (4.9.4)

def: a synchronization point links two steps B and A
     iff in every interleaving B occurs after A
------------------------------------------

approaches:
   implicit (e.g., dataflow)
   explicit (e.g., demand-driven streams)

*** utility of dataflow variables (4.9.5)

Q: What are dataflow variables good for?
 - primitive for concurrent programming
 - remove order dependencies by replacing
   static ones with dynamic ones
 - some advantages of state (difference lists)
 - distributed programming
 - can do declarative calculations with
   partial information
 - can do logic programming

------------------------------------------
FUTURES AND I-STRUCTURES

def: A future is a placeholder for a computation.

   fun {Future E}
      X in
      thread X = {E} end
      !!X
   end

def: An I-structure is an array of
     single-assignment variables.
------------------------------------------
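The Future function in the box above can be transliterated to Python threads, along with a minimal I-structure; this is my sketch under those definitions, not the book's code (`future` and `IStructure` are assumed names, and a real I-structure read would suspend rather than raise):

```python
import threading

def future(f):
    """A placeholder for a computation: start f in its own thread;
    reading the future waits, dataflow-style, until it is bound."""
    slot = {}
    done = threading.Event()
    def run():
        slot["value"] = f()
        done.set()               # binding = asynchronous send
    threading.Thread(target=run).start()
    def read():                  # needing the value = synchronous receive
        done.wait()
        return slot["value"]
    return read

class IStructure:
    """An array of single-assignment variables."""
    def __init__(self, n):
        self._slots = [None] * n
        self._bound = [False] * n
    def bind(self, i, v):
        if self._bound[i]:
            raise ValueError("already bound")   # single assignment
        self._slots[i] = v
        self._bound[i] = True
    def get(self, i):
        if not self._bound[i]:
            raise ValueError("unbound")         # a real one would block here
        return self._slots[i]

x = future(lambda: sum(range(10)))
```

Calling `x()` from the main thread blocks only until the spawned computation binds the result, mirroring how touching `!!X` suspends until the thread inside `{Future E}` binds `X`.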