Talk:Symbol grounding problem: Difference between revisions
Revision as of 10:57, 9 December 2012 (Mr. Stradivarius (talk | contribs): Tag for WikiProject Linguistics using AWB)

Philosophy: Mind / Language (Start-class, Mid-importance)
Linguistics: Philosophy of language (Unassessed)
Who is Stevan Harnad?
And did he write this entire article himself?
- It seems that this Harnad wrote the article by himself and extensively cited his own work. In addition, the article is a little hard to read, and its view depends too heavily on one particular viewpoint, in my opinion. But that is better than no article at all. It would be great if someone could improve it. --
“Searle rightly rejects the Systems Reply”
- To be honest, I swear the only reason nobody's changing this is that nobody has any better ideas; it's a pretty obscure topic. As it is, I think it gives a really good introduction to the subject, but it should cite and incorporate more of other people's ideas. —Preceding unsigned comment added by 79.66.136.13 (talk) 14:38, 27 October 2009 (UTC)
- Stevan Harnad is currently Professor of Electronics and Computer Science at the University of Southampton, and Canada Research Chair in Cognitive Science at Université du Québec à Montréal. He is the founder and editor of Behavioral and Brain Sciences, a past president of the Society for Philosophy and Psychology, and the author of or a contributor to over 300 publications. I did PhD research in artificial intelligence, and I regard his symbol grounding hypothesis as a profoundly important and insightful contribution. This article deserves its place here. And no, unfortunately, I do not have anything approaching the time required to update it. By the way, I have recast the question in neutral terms. The anonymous individual who asked the original, extremely rude and inappropriate question needs to learn some basic courtesy. Just because you haven't heard of somebody doesn't entitle you to try to diminish them. Rubywine (talk) 17:51, 31 August 2010 (UTC)
Recommending a little rework
In case I'm not the only one of this opinion, I want to mention that the article currently seems a little too biased toward suggesting that properly identifying consciousness in others is theoretically impossible, and that this impossibility is relevant. I don't want to contest here the full implications of whatever might be precisely meant by the phrase containing "proper", but I think we can recognize that if consciousness were slightly reconceptualized, we'd have a higher-quality article.
'Meaning' is itself still a somewhat problematic notion in philosophy, so invoking consciousness as its cooperating associate probably contributes little to our understanding of symbol grounding. Consciousness only seems as important as it does (not that it isn't important) because those of us who claim to have it have never experienced other cognitive agents consistently making exact, seemingly uncanny predictions about our behaviors, let alone our fates. In terms of our potential mercies or decision expectations, we're usually in full control of our robots, from their start states to their end states, and therefore we don't tend to attribute volition to them. But suppose for a moment that we attribute volition to them anyway. Then, in terms of volition, the only significant difference we might notice between them and us is that we perceive their range of actions as strictly delimited, whereas we don't perceive ours that way. Theoretical engineering barriers notwithstanding, we would likely re-evaluate our current view of ourselves as 'non-robotic, with a mysterious consciousness' if other cognitive agents suddenly began exhibiting behaviors that compelled us to believe they could predict us as reliably as we can predict that a microwave will stop when its programmed duration elapses.
One possible objection to such hypothetical cognitive agents, I imagine, is that we can simply choose a set of acts different from the set we are told we are predicted to perform. Crucially, however, no ultimate law requires that a puppeteer always tell us exactly what it knows at exactly the appropriate times; it could still know, without error, which acts we will perform while telling us something else. Perhaps early-adopter atheists sometimes overlook this possibility, so a demonstration involving another human would need to do the compelling. One human subject is given the deviant role of whimsically either doing X or not doing X when told it will do X, while a second subject (preferably a skeptic), kept apart from the first, witnesses the actual predictions Y made by the super-agent. Of course, the super-agent cannot rely on the mock predictions X being error-free, especially since the first subject probably wants very much to be "free" and never practically deterministic. (Hopefully, uncanny demonstrations will never turn threatening, since then the so-called super-agents would be no more interesting than human conspirators.) Appeals to the problem of consciousness, especially as it is associated with humans, can go only so far in explaining the problem of symbol grounding, and hence they are inadequate from the present perspective.
I will leave such a reworking for a later time, unless someone else arrives who is interested in doing it, or until it becomes clear that such a modification won't cause much controversy. In what I hope was a contribution I made earlier today, I included the term 'metamodel', whose unpacking may serve as a heuristic in the proposed direction of moving away from consciousness concerns and perhaps becoming "more technical". Vbrayne 22:35, 6 April 2007 (UTC)
- Simply put: if one accepts that there are varying degrees of consciousness, with human-level sentience occupying one particular segment of those degrees, then when we say that the problem of meaning is related to the problem of consciousness, by "the problem of consciousness" we are not really referring to human-level sentience but to a broader class of sentience. Some changes were made to achieve coherence. I studied Stevan's papers more thoroughly, then aimed to stay consistent with them and not contradict him. Valeria B. Rayne 21:59, 9 April 2007 (UTC)
- I find your comments rambling, obscure and convoluted. You haven't attempted to address the content of the article. There is no evidence here that you have grasped the symbol grounding problem. I don't think there's any argument here to be answered. Rubywine (talk) 00:47, 1 September 2010 (UTC)
- I wrote that several years ago. I was in a writing mood that day and perhaps made the post more ornate than it should have been. The part of the problem I addressed was the way, and the extent to which, Harnad (or the article at the time) mystified consciousness. I gave indications of how it could be mystified less, without being eliminativist. An earlier form of the problem, as expressed in this article, seemed more strongly concerned that, while meaning is associated with symbol grounding, consciousness is associated with meaning. ValRayne (talk) 15:27, 5 April 2011 (UTC)
The Non-Definition of Symbol
The definition of symbol given in the Formulation of the Symbol Grounding Problem section is circular. We are told that a "symbol is any object that is part of a symbol system", and that a "symbol system is a set of symbols and syntactic rules for manipulating them...". Perhaps it would be better not to mention the 'definition' and simply to delete that paragraph. —Preceding unsigned comment added by Spaecious (talk • contribs) 01:33, 21 May 2008 (UTC)
- The non-definition of symbol is at the root of the symbol-grounding problem -- look it up in any dictionary and you will be sent in circles upon circles as to what a 'symbol' means, or what 'meaning' means. But you don't need a dictionary to understand what 'meaning' means, right? -- Right? --Quetzilla (talk) 06:34, 26 January 2010 (UTC)
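The point both comments circle around can be made concrete: a symbol system can be specified purely syntactically, as opaque tokens plus shape-based rewrite rules, with nothing in the specification saying what any token means. A minimal illustrative sketch (the tokens and rules here are invented for the example, not taken from Harnad):

```python
# A purely syntactic "symbol system": symbols are opaque tokens,
# and rules rewrite token sequences based on shape alone.
# Nothing here says what any token means -- which is exactly
# the gap the symbol grounding problem points at.

RULES = {
    ("A", "B"): ("B", "A"),   # swap an adjacent A B
    ("B", "B"): ("C",),       # contract B B to a single C
}

def rewrite_once(seq):
    """Apply the first matching rule at the leftmost position, if any."""
    for i in range(len(seq) - 1):
        pair = (seq[i], seq[i + 1])
        if pair in RULES:
            return seq[:i] + list(RULES[pair]) + seq[i + 2:]
    return seq  # no rule applies: the sequence is in normal form

def rewrite(seq, max_steps=100):
    """Rewrite until no rule applies or the step budget runs out."""
    for _ in range(max_steps):
        nxt = rewrite_once(seq)
        if nxt == seq:
            return seq
        seq = nxt
    return seq

print(rewrite(["A", "B", "B"]))  # -> ['C', 'A']
```

The manipulation is entirely well-defined, yet asking what "A" or "C" means sends you back to the rules, and the rules only mention other tokens: the circularity the comment above describes.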
Let it be whatever it may connote
It may be that the tag "Tony Blair" indeed denotes, or explicates, Tony Blair (1), and also connotes, or implicates, the UK's former Prime Minister (2) and Cherie Blair's husband (3), among many others, as needed.
Likewise, the tag "Cherie Blair's husband" indeed denotes, or explicates, Cherie Blair's husband (3), and also connotes, or implicates, Tony Blair (1) and the UK's former Prime Minister (2), among many others, as needed.
Let "Mark Twain" tag, or denote, Mark Twain indeed, and tug, or connote, Samuel Clemens, the sixth child of John Clemens, the husband of Olivia Clemens, the creator of Tom Sawyer, an admirer of Helen Keller, the coiner of "miracle worker" for Anne Sullivan, or whatever he was, as needed.
Regardless of what the meaning of words or names ought to be, such implication is part of our inborn associative intelligence, which even artificial intelligence can easily imitate. It may be that a tag or name stands for its bearer as a whole, hence to maximal effect. It may be that meanings are, in any case, a kind of brainstorming in the head. This may be why the matter has more to do with psychologism than with literalism. This may be too simple to be realized!
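The associative picture sketched above really is trivial to imitate in software: a tag stands for its bearer, and a lookup fans out over whatever associations have been stored. A toy sketch, assuming nothing beyond the examples already given in this thread (the data structure itself is an illustrative choice, not anyone's proposal):

```python
# Toy associative memory: each tag denotes its bearer (here, simply
# the tag itself, standing for the bearer as a whole), and connotes
# whatever other descriptions have been associated with that bearer.

ASSOCIATIONS = {
    "Tony Blair": {"UK's former Prime Minister", "Cherie Blair's husband"},
    "Mark Twain": {"Samuel Clemens", "the husband of Olivia Clemens",
                   "the creator of Tom Sawyer"},
}

def denote(tag):
    """A tag denotes its bearer as a whole."""
    return tag

def connote(tag):
    """A tag connotes the stored associates of its bearer, as needed."""
    return ASSOCIATIONS.get(tag, set())

print(denote("Mark Twain"))                        # -> Mark Twain
print("Samuel Clemens" in connote("Mark Twain"))   # -> True
```

Of course, nothing in this table grounds any of the tags; it only relays association, which is the point at issue in this article.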
Judging from this sense of the words denote and connote, John Stuart Mill's claim that proper names such as "Tony Blair" have no connotation but only denotation is problematic. Gottlob Frege also found Mill's view problematic.
Thus, Frege argues in effect that the sense (Sinn) of "Hesperus" is Hesperus itself, while its reference (Bedeutung) is Hesperus in itself, that is, Venus! But then what is the reference of "Venus" in turn? If it is Venus itself, then where, by contrast, is Venus in itself to be found? This question suggests an endless regress from reference to reference.
To be straightforward, why not tag (or denote) Venus with "Venus", and Hesperus with "Hesperus"? Why not let what is tagged "Venus" be Venus and also the brightest star, and what is tagged "Hesperus" be Hesperus and also Venus seen in the evening (hence, the evening star), and so forth, as far as one's knowledge goes? Tagging and tugging may be too diffuse to be confused.