Complexity over Uncertainty in Theory -- Selected Essay Sample

2015-12-29  Source: 51due tutor team  Category: Essay sample

An essay sample selected by 51Due: "Complexity over Uncertainty in Theory". What is information? Researchers have used the construct of information to refer to pertinent forms of domain-specific knowledge, yet relatively few have attempted to generalize and standardize the construct. Shannon and Weaver (1949) offered the best-known quantitative attempt, but that idea, although useful, does not capture the role that complexity plays in the process of understanding an event as informative. The essay below discusses the limitations and futility of any generalization of information that is not grounded in the way agents extract patterns from their environment. More specifically, it argues that it is not the communication of states of uncertainty that lies at the heart of generalized information.

Rather, the best way to characterize information is via the relative gain or loss in concept complexity experienced when a set of known entities changes (regardless of their origin), and Representational Information Theory captures precisely this crucial aspect of information. The full essay sample follows.

Abstract
What is information? Although researchers have used the construct of information liberally to refer to pertinent forms of domain-specific knowledge, relatively few have attempted to generalize and standardize the construct. Shannon and Weaver (1949) offered the best-known attempt at a quantitative generalization in terms of the number of discriminable symbols required to communicate the state of an uncertain event. This idea, although useful, does not capture the role that structural context and complexity play in the process of understanding an event as being informative. In what follows, we discuss the limitations and futility of any generalization (and particularly, Shannon’s) that is not based on the way that agents extract patterns from their environment. More specifically, we shall argue that agent concept acquisition, and not the communication of states of uncertainty, lies at the heart of generalized information, and that the best way of characterizing information is via the relative gain or loss in concept complexity that is experienced when a set of known entities (regardless of their nature or domain of origin) changes. We show that Representational Information Theory perfectly captures this crucial aspect of information and conclude with the first generalization of Representational Information Theory (RIT) to continuous domains.

Introduction
What is information? Why is it a useful construct in science? What is the best way to measure its quantity and quality? Although these questions continue to stir profound debate in the scientific community [1–4], they have not deterred researchers from using the term “information” liberally in their work. This attitude may be explained by the common sense intuition that most humans possess about information: namely, that for an entity to be informative it must increase our knowledge about itself and, likely, about related entities. Accordingly, the greater the knowledge increase, the more informative is the entity that stimulates the increase. This common view of information, referred to here as naïve informationalism, suggests that information is partly subjective in nature. 

In other words, it requires both a “knower” and an external system providing the raw material to be known. Under naïve informationalism, virtually every entity (e.g., object, feature, or event) that exists and that is perceivable may be construed as informative if it is novel and of interest to the perceiver (also known as the “receiver”). If the entity is of only tangential interest to the perceiver, it will likely not result in a significant increase in knowledge (a point that we shall revisit under Section 3 and that we refer to as information relevancy). Likewise, the more familiar the entity is to the perceiver, the less likely it is that the perceiver will experience a knowledge increase. Naïve informationalism offers a tenable explanation as to why scientists from a wide range of disciplines, from Physics to Psychology and from Biology to Computer Science, use the term “information” to refer to specific types of knowledge that characterize their particular domain of research. 

For example, a data analyst may be interested in the way that data may be stored in a computing device, but has no interest in the molecular interactions of a physical system. Such molecular activity is not relevant to the problems and questions of interest in the field. One could say that, to the data analyst, the entities of interest are data. Accordingly, in the field of data analysis, the terms “information” and “data” are often used interchangeably to refer to the kinds of things that a computing device is capable of storing and operating on. Similarly, for some types of physicists, the quantum states of a physical system during a certain time window comprise information. On the other hand, a behavioral psychologist may be interested in the behaviors of rats in a maze. Indeed, to a behavioral psychologist the objects of information are these behaviors. 

In contrast, a geneticist may find such behaviors quite tangential to his discipline. Instead, to the geneticist, knowledge about the genome of the rat is considered far more fundamental, and symbol sequences (e.g., of nucleotides) may be a more useful way of generalizing and thinking about the basic objects of information. All of these examples support the idea that there are as many types of information as there are domains of human knowledge [1]. In spite of these domain-specific notions of information, some scientists of the late 19th and early 20th centuries attempted to provide more general definitions of information. These attempts were often motivated, again, by developments in domain-specific knowledge. For example, in the field of electrical engineering, the invention of electrical technologies such as the telegraph, telephone, and radar set the stage for a key definition of information that has influenced nearly all that have come after it. 

The electrical engineer Ralph Hartley proposed that information could be understood as a principle of individuation [5]. In other words, information could be operationalized in non-psychological terms, which is to say, not in terms of what increases knowledge, but as an abstract measure of the size of the message necessary to discriminate among the discriminable entities in any set. Now, this approach seemed to make perfect sense because one of the main properties possessed by all types of entities, whether sets of records (data) or sequences of nucleotides, is that they can be discriminated from other entities. However, Hartley himself succumbed to a weak form of naïve informationalism by choosing as his domain of entities strings of symbols. We say “weak” because symbols are more general and abstract constructs than many other objects studied in specialized domains, such as cells and nucleotides. 

Also, this was a natural choice given that Hartley’s motivation behind such characterization, by most accounts, was the transmission of messages via telegraph and other electronic means. Accordingly, in his formal framework, the amount of information transmitted could be measured in terms of the length of the message that it would take to identify any one of the elements of a set of known entities (e.g., the set of words in the English language). Henceforth, we shall refer to Hartley’s proposal as HIT (Hartley’s Information Theory).
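
To make the idea concrete, the sketch below is a minimal, hypothetical illustration in Python (the helper name hartley_information is ours, not Hartley's or the essay's). It computes Hartley's measure as the logarithm of the number of discriminable entities in a set, i.e., the length of the message, in symbols of a given alphabet, needed to single out any one element.

```python
import math

def hartley_information(num_entities: int, alphabet_size: int = 2) -> float:
    """Hartley's measure: log_b of the number of discriminable entities,
    i.e., how many symbols from an alphabet of size b are needed (in the
    limit) to identify any single element of the set."""
    if num_entities < 1:
        raise ValueError("the set must contain at least one entity")
    return math.log(num_entities, alphabet_size)

# Example: picking out one of the 26 letters of the English alphabet with
# binary symbols requires log2(26) ≈ 4.70 bits, so a 5-symbol binary
# message always suffices.
print(hartley_information(26))             # ≈ 4.700
print(math.ceil(hartley_information(26)))  # 5
```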

From Sets of Entities to Probability of Events 
A second way of interpreting Hartley’s measure assumes an alternative notion of information based on the uncertainty of an event. More specifically, if we sample an element from the finite set S uniformly at random, the information revealed after we know the selection is given by the same Equation (1) above, as long as we modify it slightly to include a negative sign before the logarithm function. 

This modification is necessary because the probability of any one item being chosen is the fraction 1/|S|, which yields a negative quantity after its logarithm is taken; but negative information is not allowed in Hartley’s information theory. Again, this probabilistic interpretation is only valid when uniform random sampling is assumed. Nonetheless, it was this kind of simple insight that contributed to the generalization of information proposed by Shannon and later by Shannon and Weaver in their famous mathematical treatise on information [6,7]. Henceforth, we shall refer to their framework as SWIT (Shannon–Weaver Information Theory) and to their basic measure of information as SIM. In these two seminal papers it was suggested that by construing the carriers of information as the degrees of uncertainty of events (and not sets of objects), Hartley’s measure could be generalized to non-uniform distributions. 
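
Assuming the standard reading of this passage, the snippet below illustrates the equivalence numerically: under uniform random sampling, the self-information -log2(1/|S|) of the sampled element is exactly Hartley's log2|S| (the set contents are arbitrary placeholders).

```python
import math

# Uniform sampling from a finite set S: each element has probability 1/|S|,
# so the negative logarithm of that probability recovers Hartley's measure.
S = {"a", "b", "c", "d", "e", "f", "g", "h"}
p = 1 / len(S)

print(-math.log2(p))      # 3.0 bits
print(math.log2(len(S)))  # 3.0 bits -- the same quantity
```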

That is, by taking the negative logarithm of the probability of a value of a random variable, one could quantify information as a function of a measure of uncertainty. Shannon’s information measure appeals to our psychological intuitions about the nature of information if interpreted as meaning that the more improbable an event is, the more informative it is, because its occurrence is more surprising. To explain, let x be a discrete random variable. Shannon’s measure assumes that if a highly probable value for x is detected, then the receiver has gained very little information. Accordingly, if a highly improbable value is detected, the receiver has gained a great amount of information. In other words, the amount of information received from x depends on its probability p(x). SIM is then defined as a monotonic function (i.e., the log function to some base b, usually base 2) of the probability of x, as shown in Equation (2): I(x) = -log_b p(x).

For example, if the event is the outcome of a single coin toss, the amount of information conveyed is the negative logarithm of the probability of the random variable x when it assumes a particular value representative of an outcome (1 for tails or 0 for heads). If the coin is equally likely to land on either side, then it has a uniform probability mass function, p(x = 1) = ½, and the amount of information transmitted by x = 1 is -log2(½) = 1 bit.
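
As a worked illustration of SIM (a minimal sketch; the helper self_information is our own name, not part of the essay), the code below computes -log_b p(x) for the fair coin and for a biased coin, showing how the non-uniform case that Hartley's measure cannot express is handled.

```python
import math

def self_information(p: float, base: float = 2) -> float:
    """Shannon's measure of a single outcome with probability p: -log_b p."""
    if not 0 < p <= 1:
        raise ValueError("p must lie in the interval (0, 1]")
    return -math.log(p, base)

# Fair coin: p(x = 1) = 0.5, so observing tails conveys exactly 1 bit.
print(self_information(0.5))  # 1.0

# Biased coin with p(tails) = 0.9: the likely outcome carries little
# information, the unlikely one much more -- the non-uniform case that
# goes beyond Hartley's measure.
print(self_information(0.9))  # ≈ 0.152 bits
print(self_information(0.1))  # ≈ 3.322 bits
```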
