The cognitive and the social agent--论文代写范文精选
2016-04-05 Source: 51due tutor group Category: Essay sample
A reflexive agent responds to the environment with an immediate action, and this functionality is often hardwired in its brain. No deliberation takes place, and the design of such a model is usually purely behaviour based; structure and a more stable control mechanism are added to these behaviours so that the agent can adjust itself. The essay sample below elaborates on this.
Abstract
Based on the presence (or absence) of certain components, agents can be classified into different types. Many scientists give examples of classifications of agents (Genesereth & Nilsson, 1987; Jorna, 2002; Russell & Norvig, 2003; Wooldridge, 2002; Franklin & Graesser, 1996). Agents are often constructed from functional components, each component fulfilling a specific function, e.g. vision, memory, motor control etc. In this section, we apply a classification commonly used in cognitive science and AI that builds up an agent from simple mechanisms towards complex physically and socially situated mechanisms.
Davis (2001) and Sloman (1993, 2001) have created such a classification of agent types: the reflexive/reactive, the deliberative and the reflective agent, comparable to the layers of agency shown in figure 2.3. This classification forms a basis under which all other cognitive approaches can easily find a home. Two approaches to agent classification are discussed in this section. The first is the cognitive approach (sections 2.3.1 to 2.3.3), which ranges from reflexive to reflective. The second approach (section 2.3.4) is the social agent. The cognitive agent is mainly concerned with the internal mechanisms of the agent, while the social agent is concerned with the influence of the environment on its behaviour and covers aspects such as autonomy, interaction with other agents, and normative and social behaviour.
The reflexive and reactive agent
Reflexive agents respond to the environment with an immediate action, whose functionality is often hardwired in the brain. Deliberation processes do not take place and the design of the model is often purely behaviour based, i.e. only a restricted set of known stimulus-response behaviours is implemented. The reactive agent is slightly more complicated. It gives structure to these behaviours and provides a bit more flexibility and control, with the help of mechanisms that allow it to adjust itself to a wider range of tasks. These agents do not have mechanisms that involve explicit deliberation or making inferences (a symbol system). They lack the ability to represent, evaluate and compare possible actions, or possible future consequences of actions (Sloman, 2001). For the reflexive agent, if necessary, a (static) goal can be set stating that a goal/behaviour, e.g. `searching for food', becomes active as soon as the amount of energy in the body drops to too low a level. In the case of the reactive agent, the subsumption architecture of Brooks (1986) is an example: the agent is physically situated and responds only to the current situation it is involved in.
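To make the contrast with the deliberative agent concrete, the sketch below shows what such a purely behaviour-based agent could look like: a fixed, priority-ordered set of condition-action rules in the spirit of a subsumption-style design, with no internal model or deliberation. All names (Percept, avoid_obstacle, the energy threshold) are illustrative assumptions, not part of Brooks' original architecture.

```python
# A minimal sketch of a reactive agent: hard-wired condition-action rules,
# ordered by priority, and no internal deliberation. All names are
# illustrative assumptions, not part of any cited architecture.
from dataclasses import dataclass

@dataclass
class Percept:
    obstacle_ahead: bool
    energy: float          # 0.0 (empty) .. 1.0 (full)
    food_visible: bool

def avoid_obstacle(p: Percept):
    return "turn_left" if p.obstacle_ahead else None

def search_for_food(p: Percept):
    # Static goal: becomes active only when energy is too low.
    if p.energy < 0.2:
        return "move_to_food" if p.food_visible else "wander"
    return None

def default_behaviour(p: Percept):
    return "move_forward"

# Higher-priority behaviours subsume (override) lower ones:
# the first rule that fires wins.
BEHAVIOURS = [avoid_obstacle, search_for_food, default_behaviour]

def react(p: Percept) -> str:
    for behaviour in BEHAVIOURS:
        action = behaviour(p)
        if action is not None:
            return action
    return "idle"

if __name__ == "__main__":
    print(react(Percept(obstacle_ahead=False, energy=0.1, food_visible=True)))  # move_to_food
```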
The reasoning agent
The reasoning or deliberative agent adds deliberative processing to the mechanisms used by the reflexive and reactive agent. The deliberative agent has many components in common with classical cognitive architectures (GOFAI) and includes a representation of the environment, a memory, a workspace, a planning unit, management of goals and many other components that make the deliberation process possible. According to Sloman (2001), a deliberative agent has a set of context-dependent and flexible processes, e.g. plan and goal generation, comparison, evaluation and execution, providing the basis for a number of capabilities that cannot be handled by a purely reactive architecture.
Interaction and feedback, which can be compared with past representations stored in memory, give the deliberative agent the opportunity to build up expectations of what effect certain actions will have on its environment and its own wellbeing. The ability to build up representations of its environment gives the agent the possibility to adapt and survive based on a repertoire of past experience.
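As an illustration of this idea, the following sketch shows one deliberative cycle under simple assumptions: the agent uses a small forward model to predict the effect of each candidate action, scores the predicted outcome against its own wellbeing, and stores the experience in memory for later comparison. The WorldModel class, the wellbeing function and the state keys are all hypothetical and only serve to make the loop tangible.

```python
# A minimal sketch of one deliberative cycle: predict the effect of each
# candidate action with a forward model, score the predicted outcome against
# the agent's wellbeing, and keep the experience in memory.
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]

class WorldModel:
    """Very small forward model: action -> expected change of state."""
    def __init__(self, effects: Dict[str, Dict[str, float]]):
        self.effects = effects

    def predict(self, state: State, action: str) -> State:
        new_state = dict(state)
        for key, delta in self.effects.get(action, {}).items():
            new_state[key] = new_state.get(key, 0.0) + delta
        return new_state

def deliberate(state: State,
               actions: List[str],
               model: WorldModel,
               wellbeing: Callable[[State], float],
               memory: List[Tuple[State, str, State]]) -> str:
    # Choose the action whose predicted outcome maximises expected wellbeing.
    best_action = max(actions, key=lambda a: wellbeing(model.predict(state, a)))
    # Store the experience (state, action, expected outcome) for later comparison.
    memory.append((state, best_action, model.predict(state, best_action)))
    return best_action

if __name__ == "__main__":
    model = WorldModel({"eat": {"energy": +0.5}, "work": {"energy": -0.2, "wealth": +1.0}})
    memory: List[Tuple[State, str, State]] = []
    action = deliberate({"energy": 0.1, "wealth": 0.0}, ["eat", "work"], model,
                        wellbeing=lambda s: s.get("energy", 0.0), memory=memory)
    print(action)  # eat
```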
We can distinguish three types of agents that are commonly associated with the reasoning agent: the deductive reasoning agent, the practical reasoning agent (Wooldridge, 2002) and the cognitively plausible agent. The deductive and practical reasoning agents often have a system that maintains a symbolic (logical) representation of the agent's desired behaviour and a system that can manipulate this representation. Whereas the deductive reasoning agent works with deduction and theorem proving, the practical reasoning agent specialises in reasoning directed towards the future, in which selection between conflicting considerations is guided by the agent's desires/values/cares and by what the agent believes (Bratman, 1990).
A well-known implementation of this agent is the beliefs/desires/intentions (BDI) architecture or Procedural Reasoning System (PRS) (Wooldridge, 2000). The weakness of these agents is that although they form representations, such as beliefs, desires and intentions, they only operate at the intentional level (Dennett, 1987), and therefore developers of these agents are not concerned with how these agents could be symbolically grounded. Or, as Wooldridge states: ". . . I will not be concerned with how beliefs and the like are represented. . . the assumption that beliefs, desires, and intentions are symbolically represented is by no means necessary for [the modelling of BDI agents]" (Wooldridge, 2000, p. 69).
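A heavily simplified sketch of a BDI-style deliberation cycle is given below. In line with the remark above, it stays at the intentional level: beliefs, desires and intentions are plain data structures, and no claim is made about how they would be symbolically grounded. The function and its arguments are illustrative assumptions, not the actual PRS implementation.

```python
# A much-simplified sketch of a BDI-style deliberation cycle. Beliefs, desires
# and intentions are plain data; names are illustrative assumptions only.
from typing import Dict, List, Optional

def bdi_cycle(beliefs: Dict[str, bool],
              desires: List[str],
              plans: Dict[str, List[str]],
              intention: Optional[str]) -> Optional[str]:
    """One cycle: revise the intention if needed, then return the next action."""
    # Drop the current intention if beliefs say it is already satisfied.
    if intention is not None and beliefs.get(intention, False):
        intention = None
    # Deliberate: commit to the first still-unsatisfied desire we can plan for.
    if intention is None:
        for desire in desires:
            if not beliefs.get(desire, False) and desire in plans:
                intention = desire
                break
    # Means-end reasoning: execute the first step of the plan for the intention.
    if intention is not None and plans[intention]:
        return plans[intention].pop(0)
    return None

if __name__ == "__main__":
    beliefs = {"thesis_written": False}
    plans = {"thesis_written": ["read_literature", "run_experiments", "write_up"]}
    print(bdi_cycle(beliefs, ["thesis_written"], plans, intention=None))  # read_literature
```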
Production systems (Newell, 1973), introduced as systems that can implement theories of cognition, have been adopted by many researchers in cognitive science. A production system is a physical symbol system that consists of a set of productions (condition-action patterns) and a set of data structures. A set of goals in combination with means-end reasoning allows the agent to explore problem spaces and exhibit intelligent action (Newell & Simon, 1972). SOAR (Newell, 1990) and ACT-R (Anderson & Lebiere, 1998) are well-known examples of agents with a cognitive architecture. These systems can store and manipulate representations and contain subsystems and mechanisms that enable the actor to adapt (e.g. sub-symbolic learning in ACT-R) and learn (e.g. chunking in SOAR). For more details, we refer to chapter 4, which discusses cognitive theories and gives an elaborate description of a cognitively plausible agent.
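The condition-action idea behind production systems can be illustrated in a few lines of code. The sketch below is a toy production system, a working memory of facts plus productions that fire when their conditions match; it is not meant to reproduce SOAR or ACT-R, and all facts and rules are invented for the example.

```python
# A minimal sketch of a production system: a working memory of facts and a set
# of condition-action productions that fire when their conditions match.
from typing import Callable, List, Set, Tuple

Fact = str
Production = Tuple[Callable[[Set[Fact]], bool],       # condition over working memory
                   Callable[[Set[Fact]], Set[Fact]]]  # action producing new facts

def run(working_memory: Set[Fact], productions: List[Production], max_cycles: int = 10) -> Set[Fact]:
    for _ in range(max_cycles):
        fired = False
        for condition, action in productions:
            if condition(working_memory):
                new_facts = action(working_memory) - working_memory
                if new_facts:              # only count a firing that adds something
                    working_memory |= new_facts
                    fired = True
        if not fired:                      # quiescence: no production added new facts
            break
    return working_memory

if __name__ == "__main__":
    productions: List[Production] = [
        (lambda wm: "goal:tea" in wm and "water:boiled" not in wm,
         lambda wm: {"water:boiled"}),
        (lambda wm: "water:boiled" in wm,
         lambda wm: {"tea:made"}),
    ]
    print(run({"goal:tea"}, productions))  # {'goal:tea', 'water:boiled', 'tea:made'}
```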
The reflective agent
The reflective agent builds on top of the previous agents. It is equipped with a meta-management module that observes and monitors its own cognitive processes in order to improve performance on the goal or objective the agent has set itself. For instance, the reflective agent can use the following operations for monitoring its own cognitive processes (Sloman, 2001): (1) the ability to think about and answer questions about one's own thoughts and experiences, (2) the ability to notice and report circularity in one's thinking, and (3) the ability to notice opportunities for changing one's thinking.
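The sketch below illustrates, under invented assumptions, what a small meta-management layer could look like: it keeps a trace of the agent's own reasoning states and monitors it for circularity (ability 2) and lack of progress, advising a change of strategy when it notices either (ability 3). Names and thresholds are hypothetical.

```python
# A minimal sketch of a meta-management layer: the reflective agent records a
# trace of its own reasoning states and watches for circularity or lack of
# progress, switching strategy when it notices either.
from typing import List

class MetaManager:
    def __init__(self, patience: int = 5):
        self.trace: List[str] = []     # record of the agent's own reasoning states
        self.patience = patience

    def observe(self, reasoning_state: str) -> None:
        self.trace.append(reasoning_state)

    def circular(self) -> bool:
        # Ability (2): notice circularity in one's own thinking.
        return len(self.trace) != len(set(self.trace))

    def stuck(self) -> bool:
        # No new reasoning state for `patience` steps counts as lack of progress.
        return len(self.trace) >= self.patience and len(set(self.trace[-self.patience:])) == 1

    def advise(self) -> str:
        # Ability (3): notice an opportunity for changing one's own thinking.
        if self.circular() or self.stuck():
            return "switch_strategy"
        return "continue"

if __name__ == "__main__":
    meta = MetaManager()
    for state in ["plan_A", "plan_B", "plan_A"]:   # the agent revisits plan_A
        meta.observe(state)
    print(meta.advise())  # switch_strategy
```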
Another example of reflection is the introduction of emotional aspects that can drive the agent to change its behaviour radically. Such changes can be caused by properties that say something about the current status of the system, e.g. the agent feels bored, angry or afraid. The focus of this dissertation is not on the reflective mechanisms of the individual, but rather on reflection through interaction with the outside world and other actors. In the next chapter we introduce the social construct, which operates at the normative level of an agent and can influence the production system in its (performance) outcomes (e.g. allowing or prohibiting certain outcomes).
