Intelligent encoding and economical communication
2016-01-16  Source: 51due tutor group  Category: More sample essays
In our view, the problem of encoding information in neocortical sensory processing areas is not detachable from the concept of intelligence (CoI). We assume that cortical sensory processing areas, shaped by evolution, form an integrated whole. The intriguing issue is that although CoI is meaningful to us, and in practice seems measurable, it has escaped a general mathematical definition. The essay sample below elaborates.
Abstract
The theory of computational complexity is used to underpin a recent model of neocortical sensory processing. We argue that encoding into reconstruction networks is appealing for communicating agents using Hebbian learning and working on hard combinatorial problems, which are easy to verify. A computational definition of the concept of intelligence is provided. Simulations illustrate the idea.
Introduction
A recent model of neocortical information processing developed a hierarchy of reconstruction networks subject to local constraints (Lőrincz et al., 2002). The mapping to the entorhinal-hippocampal loop has been worked out in detail (Lőrincz and Buzsáki, 2000). Straightforward, falsifiable predictions of the model concern the temporal properties of the internal representation and the counteraction of delays in the reconstruction process. These predictions have recently gained independent experimental support (Egorov et al., 2002; Henze et al., 2002). The contribution of the present work is to underpin the model with the theory of computational complexity (TCC) and to use TCC to ground the concept of intelligence (CoI). We shall treat the resolution of the homunculus fallacy (Searle, 1992) to highlight the concepts of the approach.
Theoretical considerations
In our view, the problem of encoding information in the neocortical sensory processing areas may not be detachable from CoI. We assume that the wiring of neocortical sensory processing areas developed by evolution forms an ensemble of economical intelligent agents and we pose the question: What needs to be communicated between intelligent computational agents? The intriguing issue is that although (i) CoI has meaning for us and (ii) this meaning seems to be measurable in practice, nevertheless, (iii) CoI has escaped mathematical definition. In turn, our task is twofold: we are to provide a model of neocortical processing of sensory information and a computational definition of intelligence.
According to one view, intelligent agents learn by developing categories (Harnad, 2003). For example, mushroom-categories could be learned in two different ways: (1) by 'sensorimotor toil', that is, by trial-and-error learning with feedback from the consequences of errors, or (2) by communication, called 'linguistic theft', that is, by learning from overhearing the category described. Our point is that case (2) requires mental verification: without mental verification, trial-and-error learning remains a necessity. In our model, verification shall play a central role in constructing the subsystems, our agents.
To build a network model, verification is identified as the opposite of encoding. Verification of an encoded quantity means (i) decoding, i.e., reconstructing the input from the communicated encoded quantities, and (ii) comparing the reconstructed input with the real input. In turn, a top-down model of neocortical processing of sensory information can make use of generative models equipped with comparators, in which the distributed hierarchical decoding process is to be reinforced by comparisons of the input and the decoded quantities. This shall be our computational model for CoI.
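The decode-and-compare notion of verification can be sketched with a toy linear reconstruction network. The weight matrices, dimensions, and tolerance below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear reconstruction network (weights and sizes are
# illustrative, not taken from the paper).
n_input, n_hidden = 8, 3
W_dec = rng.standard_normal((n_input, n_hidden))  # generative (decoding) weights
W_enc = np.linalg.pinv(W_dec)                     # encoder as pseudo-inverse

def verify(x, tol=1e-6):
    """Verification = (i) decode the encoded quantity, (ii) compare with the input."""
    h = W_enc @ x                           # encode into the internal representation
    x_hat = W_dec @ h                       # decode: reconstruct the input from h
    return np.linalg.norm(x - x_hat) < tol  # comparator

x_familiar = W_dec @ rng.standard_normal(n_hidden)  # input generable from some h
x_novel = rng.standard_normal(n_input)              # input outside the model

print(verify(x_familiar), verify(x_novel))  # familiar input passes, novel fails
```

An input that the generative weights can produce is verified; a novel input leaves a large reconstruction error and fails, which is the comparator's signal that further learning is needed.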
Discussion
The reconstruction (also called generative) network concept provides a straightforward resolution to the homunculus fallacy (see, e.g., (Searle, 1992)). The fallacy says that no internal representation is meaningful without an interpreter, ‘who’ could ‘make sense’ of the representation. Unfortunately, all levels of abstraction require at least one further level to become the corresponding interpreter. Thus, interpretation is just a new transformation and we are trapped in an endless regression.
Reconstruction networks turn the fallacy upside down by changing the roles (Lőrincz, 1997): not the internal representation but the input 'makes sense', if the same (or similar) inputs have been experienced before and if the input can be derived/generated by means of the internal representation. In reconstruction networks, infinite regression occurs in a finite loop and progresses through iterative corrections, which converge. Then the fallacy disappears. In our wording, (i) the internal representation interprets the input by (re)constructing components of the input using the components of the internal representation, and (ii) component-based generative pattern completion 'makes sense' of the input. We shall illustrate the idea by reviewing computational simulations on a combinatorial problem depicted in Fig. 2 (Lőrincz et al., 2002).
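The finite-loop correction process can be sketched numerically. The matrix names, sizes, and step size here are assumptions for illustration, not the paper's implementation; the loop repeatedly corrects the hidden representation with the back-projected reconstruction error until the generated input matches the real input:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative iterative correction loop (assumed setup, not the paper's code).
n_input, n_hidden = 6, 3
Q = rng.standard_normal((n_input, n_hidden))  # generative weights
x = Q @ rng.standard_normal(n_hidden)         # an input the network can generate

h = np.zeros(n_hidden)
alpha = 1.0 / np.linalg.norm(Q, 2) ** 2       # step size small enough to converge
for _ in range(10000):
    error = x - Q @ h                         # comparator output
    if np.linalg.norm(error) < 1e-9:          # converged: the input 'makes sense'
        break
    h += alpha * Q.T @ error                  # iterative correction step

print(np.linalg.norm(x - Q @ h))              # near zero: reconstruction succeeded
```

Each pass through the loop is one round of "interpretation"; convergence of the error to zero is what replaces the endless regression of interpreters.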
Connections to neuroscience
It has been demonstrated (Lőrincz et al., 2002) that maximization of information transfer is an emerging constraint in reconstruction networks. Here, we note that the model provides a straightforward explanation for the differences found between neurons of the deep and superficial layers of the entorhinal cortex (i.e., that deep layer neurons have sustained responses, whereas superficial layer neurons do not) (Egorov et al., 2002), which is a consequence of the sustained activities in the hidden layer. The model also explains the long and adaptive delays found recently in the dentate gyrus (Henze et al., 2002), which – according to the model – should be there but are not necessary anywhere else along the visual processing stream. Last but not least, the model makes falsifiable predictions about the feedback connections between visual processing areas, which – according to the mapping to neocortical regions – correspond to the long-term memory of the model.
Conclusions
Our goal was to underpin a recent model of neocortical information processing (Lőrincz et al., 2002) by means of the theory of computational complexity. We have argued that sensory processing areas developed by evolution can be viewed as intelligent agents using economical communication. According to our argument, the agents encode solutions to combinatorial problems (NP-hard problems, or 'components' in terms of psychology (Biederman, 1987)), communicate the encoded information and decode the communicated information. We have argued that reconstruction networks equipped with Hebbian learning are appealing for 'linguistic theft' of solutions of such problems. We have reviewed computational experiments where a combinatorial component-search problem was (1) solved and (2) the communicated encoded information was decoded and used to improve pattern completion. The novelty of the present work is in the reinterpretation of a recent model of neocortical information processing in terms of communicating agents, who (i) communicate encoded information about combinatorial learning tasks and (ii) cooperate in input verification by decoding the received and encoded quantities. We have also described the straightforward predictions of the model that have relevance to visual neuroscience.