A Model of Tacit Knowledge and Action

Citation: Gal, Y. et al. "A Model of Tacit Knowledge and Action." Computational Science and Engineering, 2009. CSE '09. International Conference on. 2009. 463-468. © 2009 IEEE
As Published: http://dx.doi.org/10.1109/CSE.2009.479
Publisher: Institute of Electrical and Electronics Engineers
Version: Final published version
Accessed: Tue Sep 06 11:28:28 EDT 2011
Citable Link: http://hdl.handle.net/1721.1/59323
Terms of Use: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.

Ya'akov Gal (MIT CSAIL and Harvard University SEAS), Rajesh Kasturirangan (National Institute of Advanced Studies, Bangalore, India, and MIT CSAIL), Avi Pfeffer (Harvard University SEAS), Whitman Richards (MIT CSAIL)

I. ABSTRACT

Natural intelligence is based not only on conscious procedural and declarative knowledge, but also on knowledge that is inferred from observing the actions of others. This knowledge is tacit, in that the process of its acquisition remains unspecified. However, tacit knowledge is an accepted guide of behavior, especially in unfamiliar contexts. In situations where knowledge is lacking, animals act on these beliefs without explicitly reasoning about the world or fully considering the consequences of their actions. This paper provides a computational model of behavior in which tacit knowledge plays a crucial role. We model how knowledge arises from observing different types of agents, each of whom reacts differently to the behaviors of others in an unfamiliar context. Agents' interaction in this context is described using directed graphs. We show how a set of observations guides agents' knowledge and behavior given different states of the world.

II. INTRODUCTION

How do we infer the proper behavior in unfamiliar or sensitive contexts? Suppose you are in a line, waiting to greet a foreign dignitary. Do you bow, or do you shake hands? What is the appropriate protocol? Similarly, at a formal dinner, what would be the proper etiquette, especially in an unfamiliar culture? Even the act of extended eye contact may be a faux pas if you are facing a hostile gang or a malevolent dictator. Studies show that humans (and presumably other animals as well) act appropriately in the world without an explicit representation of all of its features (Brooks, 1991; Gibson, 1977; Bonabeau, Dorigo, & Theraulaz, 1999).

We believe appropriate action must be based on tacit knowledge about how the world works. By tacit knowledge, we mean implicit (though not necessarily unconscious) assumptions about the world that guide action. The mark of tacit knowledge is the lack of deliberation. Tacit knowledge is knowledge whose veracity we do not evaluate using deliberate thought. Our definition of tacit knowledge differs from other uses of this term, for example, by Polanyi (1966), who requires that tacit knowledge be unconscious (perceptual knowledge can be conscious without being evaluable), and by Crimmins (1992), who requires that tacit knowledge involve implicit commitment to beliefs about which one does not have a current opinion. In this paper, we follow Janik (1990) in defining tacit knowledge as knowledge that in principle can be made conscious and explicit, and evaluable when done so, but is currently implicit.
A simple example clarifies our use of this term. When we choose to run a red light (common in Bangalore, uncommon in Stockholm) we make decisions based on an implicit knowledge of the context and actions appropriate to that context. Such decisions rarely involve deliberation or self-reflexivity (which is the mark of explicit knowledge systems). However, if forced to do so, we may be able to state explicitly the trade-offs involved in running a red light (e.g., the fear of being caught by a policeman vs. the need to get to the destination as quickly as possible) and evaluate those trade-offs. When running a red light, as in many other daily-life situations, we act tacitly though we could be fully conscious and deliberative in principle.

Tacit knowledge, in combination with information from other sensory modalities, is often invoked in new contexts. For example, one general class of tacit knowledge revolves around judgments of typicality and mimicry, i.e., "copy someone who looks like he knows what he is doing". When waiting in line to meet the foreign dignitary, one possible strategy is to follow the actions of someone whose dress and mannerisms convey familiarity with the proper etiquette. Here typicality is assessed by looking at dress and mannerisms, both of which are provided by perception. Mimicry and typicality judgments are tacit knowledge since they are not stated formally. We are guided mostly by what other people do, not by rigorous analysis and reasoning (Minsky, 1998).

Tacit knowledge allows agents to acquire knowledge and infer accepted modes of behavior in new contexts without fully reasoning about all the possible consequences of their actions. People use tacit knowledge as a surrogate for what is true in the world. Suppose that a driver who is unfamiliar with the traffic laws in a particular country is waiting to turn right behind a line of cars and can only see a school bus in front. If the school bus turns right on a red light, the driver of the car behind it may choose to follow its actions and do the same. However, this driver may choose not to follow a beat-up sedan. This is because the driver assumes the school bus to be following the local rules, while the driver of the beat-up sedan is believed to be reckless.

This paper presents work towards a computational theory of the way tacit knowledge affects the way agents interact with each other over time. Our approach is inspired by canonical models of perception and language (Marr, 1982; Agre & Rosenschein, 1996; Chomsky, 1965) that have laid down a computational theory (by Marr for perception and Chomsky for language), i.e., a formal statement of the constraints and conditions regulating a visual or linguistic process.

We consider a setting in which multiple agents need to make decisions, and interact with other agents as defined by a graphical network. We incorporate several underlying principles of tacit knowledge into our model. First, agents use their own actions as a tool for conveying knowledge, and use others' actions as a guide for their own behavior. Second, agents make decisions in a bounded way that depends, under certain conditions, on the actions of their neighbors. We show how these assumptions facilitate the propagation of knowledge and action in the network, and provide certain guarantees about agents' behavior for different network topologies.
Lastly, we use inference mechanisms in the model that are simple in computational complexity, but are sufficient to describe a variety of ways in which agents accrue knowledge and act in the world. We do not mean to suggest that this model can explain people's performance but rather that it describes, in a principled, clear way, the competence that underlies people's capacity for tacit knowledge.

III. A BASIC MODEL

Initially we will consider a single action with values denoted + (e.g., turning right on red) and −. We use the notation R+ and R− to denote that the action is legal or illegal, respectively. We also introduce a knowledge operator K_i^t(R+), specifying that action + is known to be legal by agent i at time t, or ¬K_i^t(R+), specifying that + is not known to be legal at time t (and similarly for R−). We drop these super- and subscripts when they are clear from context.

Events are assumed to be consistent, such that R+ and R− cannot both hold simultaneously. Knowledge in the model is assumed to be correct, such that K(R+) → R+. It follows that knowledge is consistent, such that K(R+) → ¬K(R−) (and similarly for R−). We also assume that for each agent, one of the following must hold: R+ is known to be true (K(R+)), R− is known to be true (K(R−)), or nothing is known (¬K(R+), ¬K(R−)). Consider the action of turning right on a red light. This means that if an agent knows it is legal to turn right on red, it cannot be the case that the agent knows this action to be illegal. Note that the converse does not hold. For example, ¬K(R−) does not imply that R− holds. Intuitively, not knowing whether it is illegal to turn right on red does not imply that this action is illegal.

To capture the way different agents make decisions, we introduce the notion of a type. In our model, a type essentially refers to an agent's strategy, and this strategy is specific to each type of agent. Our use of types in this work is distinguished from the traditional usage of this term as representing agents' private information in game theory. Agents' knowledge of the types of agents they interact with is the way our model captures tacit knowledge, as we will soon see.

We now introduce the following four types of agents, representing four different strategies that can occur in this example.

• t1 (conservative). Choose action −. (Never turn right on red.)
• t2 (law abiding). Choose action + if K(R+). (Turn right on red only if you know it is legal.)
• t3 (risk taker). Choose action + if ¬K(R−). (Turn right as long as you don't know it is illegal.)
• t4 (reckless). Choose action +. (Always turn right on red.)

A. Interaction Graphs

The relationship between multiple agents' knowledge and their actions is defined by a directed network called an interaction graph. Each node in the network represents an agent; an edge (i, j) determines that agent j knows the type of agent i and that agent j can observe the action of agent i. The first clause refers to one of the tenets of tacit knowledge, that of knowing how others behave in various situations, though the manner in which this knowledge is constructed is not explicitly specified. The second clause is commonly used to define interaction within networks.
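As a concrete illustration of the basic model, the following minimal Python sketch (our own illustration, not code from the paper) encodes the three possible knowledge states for the single rule and the four types as strategies from knowledge to an action. The names Knowledge and TYPE_STRATEGY are invented for this example.

```python
from enum import Enum

class Knowledge(Enum):
    KNOWN_LEGAL = 1     # K(R+): the action is known to be legal
    KNOWN_ILLEGAL = 2   # K(R-): the action is known to be illegal
    UNKNOWN = 3         # neither K(R+) nor K(R-) holds

# Each type is a strategy mapping a knowledge state to an action '+' or '-'.
TYPE_STRATEGY = {
    "t1": lambda k: "-",                                           # conservative: never turn
    "t2": lambda k: "+" if k is Knowledge.KNOWN_LEGAL else "-",    # law abiding
    "t3": lambda k: "-" if k is Knowledge.KNOWN_ILLEGAL else "+",  # risk taker
    "t4": lambda k: "+",                                           # reckless: always turn
}

if __name__ == "__main__":
    for name, strategy in TYPE_STRATEGY.items():
        print(name, {k.name: strategy(k) for k in Knowledge})
```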
We will denote nodes in the graph by their agent types when it is clear from context. In the traffic example, the following network represents a possible interaction graph describing a line of cars waiting to turn right, in which an agent of type t3 is waiting behind an agent of type t1, who is waiting behind an agent of type t2, and so on:

t4 → t2 → t1 → t3

To formally describe the interaction between tacit knowledge and action, we first detail how knowledge is conveyed for each type of agent. We say that an agent conveys knowledge if its actions provide information about rules in the world, or about the knowledge of other agents about rules in the world.

• Types t1 and t4 never convey knowledge, because the strategies of these types do not depend on their knowledge.
• Type t3 conveys K_i(R−) when it is observed to do −, and conveys ¬K_i(R−) when it is observed to do +.
• Type t2 conveys K_i(R+) when it is observed to do +, and conveys ¬K_i(R+) when it is observed to do −.

Consider for example an agent i of type t3 (risk taker) and an agent j of type t2 (law abiding). Suppose there is an edge (t3, t2) in the interaction graph, and that at time step 1 we have that K_i^1(R+), ¬K_j^1(R+), ¬K_j^1(R−) (agent i knows that it is legal to do +, and agent j does not know whether it is legal or illegal). Following their type specifications, at time step 1, agent i will do action + and agent j will do action −. Now, agent j cannot infer from the actions of agent i that R+ holds, because a t3 agent that does + does not convey any knowledge. Therefore at time step 2 we still have that ¬K_j^2(R+), ¬K_j^2(R−), and following its type specification, agent j will choose to do − again.

We now summarize the rules governing the behavior of any agent in the graph given its directly observed neighbors. Table I lists the actions for a row agent j and column agent i, given an edge (i, j) in the interaction graph. Each item in the table is a tuple, in which the left entry states the action taken by j given that i chooses to do +, and the right entry (in parentheses) states the action taken by j given that i chooses to do −. A ∅ symbol denotes a counterfactual event, an action that cannot be chosen by a given type under the circumstances. For example, it cannot be the case that an agent of type t1 (conservative) is observed to do action +.

TABLE I
ACTION TAKEN BY ROW AGENT j ON OBSERVING THE ACTION OF COLUMN AGENT i

      t1       t2       t3       t4
t1    ∅ (−)    − (−)    − (−)    − (∅)
t2    ∅ (−)    + (−)    − (−)    − (∅)
t3    ∅ (+)    + (+)    + (−)    + (∅)
t4    ∅ (+)    + (+)    + (+)    + (∅)

For example, according to the entry in row t2, column t3 in Table I, when the t3 agent does action +, the t2 agent will do action −, as we have shown above.

B. From Knowledge Conditions to Actions

We now show how this knowledge informs agents' decisions in the graph. We state the following theorem, which specifies the criteria by which knowledge and action interact in the graph.

Theorem 1. Let C be an interaction graph. A law-abiding agent j of type t2 in C will choose to do + at time step t + l if and only if the following hold:
• There is an agent i in C of type t2 such that K_i^t(R+) holds.
• There is a directed path in C from i to j of length ≤ l that passes solely through agents of type t2.
Similarly, an agent j of type t3 will choose action − at time t + l if and only if there is an agent i of type t3 such that K_i^t(R−) holds and there is a path from i to j of length ≤ l that passes solely through agents of type t3.

Using the theorem, we can induce a mapping from any agent i and interaction graph C to a knowledge condition for j, stating that j knows the action is legal (K_j^{t+l}(R+)), that j knows the action is illegal (K_j^{t+l}(R−)), or that j does not know whether the action is legal or illegal (¬K_j^{t+l}(R+), ¬K_j^{t+l}(R−)).

This theorem is easy to prove. Take for example a path from an agent for which K(R+) holds to an agent of type t2. Any agent along this path that is not of type t2 will not convey knowledge of R+ to the t2 agent, and therefore the t2 agent will choose action −, as specified by Table I. A corollary to this theorem is that at a given time t, any knowledge that is conveyed by different paths in the interaction graph is consistent. That is, any path of reasoning in the graph will always yield the same action for an agent, according to the theorem.

In our model, although agents can observe their neighbors' types in the graph, they may not be able to infer what their neighbors know. Consider for example two agents i and j, both of type t3 (risk takers). Suppose there is an edge (i, j) in the interaction graph, and that at time t we have that K_i^t(R+), ¬K_j^t(R+), ¬K_j^t(R−) (agent i knows that it is legal to do +, and agent j does not know whether it is legal to do +). At time t, agent i will do +. Although agent j will also do action + at time t + 1, it will not have learned the rule R+, because this knowledge cannot be conveyed by the actions of a t3-type agent.

We now show that the relationships between types and actions can be induced from the knowledge conditions that hold for each agent. The possible knowledge conditions in our example are as follows:

1) {K(R+), ¬K(R−)}
2) {K(R−), ¬K(R+)}
3) {¬K(R+), ¬K(R−)}

Note that the set {K_i(R+), K_i(R−)} cannot occur because knowledge is correct. We begin by defining an order over knowledge conditions:

(2) ≻ (3) ≻ (1)

Intuitively, this order represents a degree of severity: knowing that it is illegal to turn right on red is considered to be more severe than knowing that it is legal. Similarly, not knowing whether it is illegal to turn right on red is more severe than knowing that this action is legal.

This allows us to reformulate agents' types as a mapping from knowledge conditions to actions. Thus, an agent of type t3 (risk taker) chooses action + if (1) or (3) holds, while an agent of type t2 (law abiding) chooses action + solely if (1) holds. Types t1 and t4 choose action − and action +, respectively, for all possible sets of knowledge predicates. This is shown in Table II.

TABLE II
REFORMULATING TYPES USING KNOWLEDGE CONDITIONS

Type    (2)    (3)    (1)
t1       −      −      −
t2       −      −      +
t3       −      +      +
t4       +      +      +

As can be seen in the table, once an agent type chooses action + for a given knowledge condition, this agent will never choose action − for a knowledge condition that is less severe. For any of the types shown above, it holds that if a knowledge condition K1 is more severe than a knowledge condition K2, then if an action is known to be legal in K1, it is also known to be legal in the other knowledge condition. We call the types whose rules of conduct meet these conditions monotonic.
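The conveyance rules behind Table I and the propagation claim of Theorem 1 can be exercised with a small simulation. The sketch below is our own illustration under the definitions above, not the authors' code: it reuses the Knowledge and TYPE_STRATEGY names from the earlier sketch, and the chain of cars is a hypothetical example rather than the one from the text. Knowledge of R+ spreads along consecutive t2 agents and is blocked by a t1 agent, which conveys nothing.

```python
# Our own illustrative simulation of knowledge propagation in a line of cars.
# Assumes the Knowledge and TYPE_STRATEGY definitions from the earlier sketch.

def convey(observed_type, observed_action, prior):
    """What an observer can conclude about the rule from one observation."""
    if observed_type == "t2" and observed_action == "+":
        return Knowledge.KNOWN_LEGAL      # a law-abiding agent does + only if K(R+)
    if observed_type == "t3" and observed_action == "-":
        return Knowledge.KNOWN_ILLEGAL    # a risk taker does - only if K(R-)
    return prior                          # other observations convey nothing about the rule

def step(chain, knowledge):
    """One synchronous time step: every agent acts, then each observer updates."""
    actions = [TYPE_STRATEGY[t](k) for t, k in zip(chain, knowledge)]
    updated = list(knowledge)
    for pos in range(1, len(chain)):      # the agent at pos observes the car in front of it
        updated[pos] = convey(chain[pos - 1], actions[pos - 1], knowledge[pos])
    return actions, updated

chain = ["t2", "t2", "t1", "t2"]          # hypothetical line, front of the line to back
knowledge = [Knowledge.KNOWN_LEGAL] + [Knowledge.UNKNOWN] * 3
for t in range(3):
    actions, knowledge = step(chain, knowledge)
    print(t, actions, [k.name for k in knowledge])
# K(R+) reaches the second t2 agent after one step, but the t1 agent conveys
# nothing, so the last t2 agent never learns R+ and keeps choosing '-',
# as the path condition in Theorem 1 predicts.
```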
As an example of a non-monotonic type, consider a "malicious" agent that chooses action + solely under knowledge condition (2). This agent chooses action + when it knows R−, and chooses action − when it knows R+.

We use the idea of monotonic types to characterize a set of "sensible" types in our domain that facilitate the way knowledge is propagated in the interaction graph. They also serve to limit the space of possible types to consider when performing inference. In general, if there are n binary rules, there are three possible sets of knowledge predicates for each rule, and the total number of possible sets is thus 3^n. A type is a mapping from each of these sets to an action + or −, so the number of possible types is 2^(3^n), which is doubly exponential in the number of rules. By only considering sensible types, we can reduce this space considerably. By using Theorem 1, one can determine the knowledge and the actions of particular agents without having to enumerate all types.

IV. MULTIPLE ACTIONS

In this section we extend the basic model above to handle multiple actions. Let A be a set of actions. A value for an action a ∈ A specifies whether it is legal (T), illegal (F), or unknown (U). For each agent i, we define a knowledge condition K_i^t to be a function from A to action values. A knowledge condition in this model specifies a value for each action in the domain. For example, if K_i^t(a) = F, this means that i knows action a to be illegal at time t. (As before, we drop the subscript when the identity of the agent is clear from context.) As before, a type is a mapping from knowledge conditions to actions, specifying what actions different agents do given their knowledge about rules in the world.

In the basic traffic example there was a sole action, and thus a knowledge condition described a complete mental state for an agent about the domain. The number of possible mental states is generally exponential in the number of actions, but we can generalize the ideas we introduced in the basic example such that values of particular actions will inform values of other actions. To this end, we first impose a complete ordering over action values that prefers legal actions to unknown actions, and unknown actions to illegal actions. For any action a ∈ A we have that

(a = T) ≻ (a = U) ≻ (a = F)

We can then define an ordering over actions, such that action a1 dominates action a2 if, for any agent i at time t, knowledge that a2 is legal also implies that a1 is legal. (Note that the direction of the ordering on both sides of the rule is reversed.)

a1 ≽ a2 ⟹ K_i^t(a1) ≽ K_i^t(a2)    (1)

We are now ready to define an ordering over knowledge conditions. Let K_i^1, K_i^2 be two knowledge conditions for agent i at two different points in time.

Definition 2. We say that K_i^1 is at least as severe as knowledge condition K_i^2 (denoted K_i^1 ≽ K_i^2) if the action value that i knows for each action in K_i^1 dominates the value of the same action in K_i^2:

K_i^1 ≽ K_i^2 ≡ ∀a ∈ A. K_i^1(a) ≽ K_i^2(a)

As in the basic traffic example, an order over knowledge conditions implies a notion of severity, but Definition 2 extends this notion to multiple actions. Intuitively, if K_i^1 is more severe than K_i^2, then if an action a is known to be legal in K_i^2, it must be the case that a is known to be legal in K_i^1.
In addition, if a is unknown in K_i^2, then it cannot be the case that a is known to be illegal in K_i^1.

We illustrate with an extension of the former example in which the speed of the car is a discrete variable. Suppose that the minimal driving speed is 10 MPH, and that speeds increase at intervals of 10 MPH. Define an ordering over any two speeds a1, a2 such that a1 ≻ a2 if and only if a1 > a2. We will define an ordering over actions such that driving at a slower speed always dominates driving at a faster speed. Consider two speeds, 50 and 40 MPH. It follows from Equation 1 that if K_i(50 = T) holds for an agent i, then it must be the case that K_i(40 = T). (If 50 MPH is known to be a legal speed, it follows that 40 MPH is also legal.) If K_i(50 = U) holds for an agent, then it must be the case that K_i(40 = U) or K_i(40 = T). (If it is unknown whether 50 MPH is a legal driving speed, then it cannot be the case that 40 MPH is an illegal driving speed.) Lastly, if K_i(50 = F) holds, then any value for K_i(40) is possible. (If 50 MPH is known to be an illegal speed, it may be legal or illegal to drive at 40 MPH.) In general, for any two actions a1, a2 such that a1 ≻ a2, the following knowledge conditions cannot hold according to Definition 1:

(K_i(a1 = U), K_i(a2 = F)), (K_i(a1 = T), K_i(a2 = U)), (K_i(a1 = T), K_i(a2 = F))

Referring to our example, consider an agent i that at time step 1 knows that the maximal driving speed is 40 MPH, and at time step 2 learns that the maximal driving speed is 30 MPH. According to Definition 2, we have that K_i^1 is more severe than K_i^2. To see this, consider any possible driving speed a. If a ≤ 30, then it holds for i that K_i^1(a = T) and K_i^2(a = T). If a > 40, then it holds for i that both K_i^1(a = U) and K_i^2(a = U). If a = 40, then it holds for i that K_i^1(a = T) and K_i^2(a = F). In all of these cases, we have that K_i^1(a) ≽ K_i^2(a), as required by Definition 2. Thus, according to our model, knowing that a higher speed is legal is strictly more severe than knowing that a lower speed is legal.

We can use the mechanism above to provide a more general definition of a monotonic type. Recall that a type is a mapping from knowledge conditions to actions.

Definition 3. A type T of agent i is monotonic if for any two knowledge conditions K_i^1 and K_i^2 at two different points in time, the following must hold:

K_i^1 ≽ K_i^2 ⟹ T(K_i^1) ≽ T(K_i^2)

The definition of a monotonic type resembles that of a monotonic function. The action prescribed by a type for a more severe knowledge condition must dominate the action prescribed by that type for a less severe knowledge condition. In effect, this means that a monotonic type will not decrease its speed as it learns more severe information. For example, if an agent type for i prescribes driving at 30 when K_i^1(50 = T) holds, then it cannot prescribe driving at 20 when K_i^2(60 = T) holds, because
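To ground Definition 2, the following sketch (again our own illustration with invented helper names such as speed_limit_condition, not the authors' code) represents a knowledge condition as a map from discrete speeds to the values T, U, and F, and checks the severity comparison for the 40 MPH versus 30 MPH example above. The horizon parameter mirrors the example's assumption that speeds above 40 MPH remain unknown in both conditions.

```python
# Our own sketch of Definition 2: a knowledge condition maps each discrete
# speed to 'T' (legal), 'U' (unknown), or 'F' (illegal), and one condition is
# at least as severe as another if it dominates it value by value under the
# ordering T > U > F.

VALUE_ORDER = {"T": 2, "U": 1, "F": 0}    # legal > unknown > illegal

def at_least_as_severe(k1, k2):
    """Definition 2: K1 >= K2 iff K1(a) dominates K2(a) for every action a."""
    return all(VALUE_ORDER[k1[a]] >= VALUE_ORDER[k2[a]] for a in k1)

def speed_limit_condition(limit, speeds, horizon=40):
    """Knowledge after learning that `limit` is the maximal legal speed.
    The `horizon` parameter is an illustrative device: speeds above it stay
    unknown, mirroring the example where speeds above 40 MPH are U."""
    return {s: ("T" if s <= limit else "F" if s <= horizon else "U") for s in speeds}

speeds = [10, 20, 30, 40, 50, 60]
k1 = speed_limit_condition(40, speeds)    # time step 1: 40 MPH known to be the maximum
k2 = speed_limit_condition(30, speeds)    # time step 2: 30 MPH known to be the maximum
print(at_least_as_severe(k1, k2))         # True: K1 is more severe, per Definition 2
print(at_least_as_severe(k2, k1))         # False
```

A similar pointwise comparison over the actions a type prescribes for two such conditions would check the monotonicity requirement of Definition 3.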