conscious artifacts


Postby hyksos on July 30th, 2020, 9:58 am 

Today in 2020, we have a thorough and systematic framework for computation. Our framework is deep in theory and wide in application.

As understood today, computation describes automatons that take inputs and produce outputs. They can store previously-seen inputs and adapt to the "stream" of future inputs, changing their outward behavior ('output') in light of those adaptations. We can predict that self-driving cars will soon be perfected. If not perfected, we can describe in principle how they would be perfected, even if such perfection is not "economically viable". We could write up blueprints for one even if nobody has the "startup capital" to construct it. While a perfected self-driving car will get you from your location to your destination with ruthless efficiency, nobody in their right mind would contend that the machine, with all its computer cores, sensors, and wires, is actually having a feeling that something is happening to it.
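As a toy sketch of this picture (my own illustration; the class and names are invented, not from any particular formalism), here is an automaton that stores previously-seen inputs and changes its output in light of them:

```python
# A minimal stateful automaton in the inputs-and-outputs sense described above.
# Illustrative only: the design is mine, not a standard construction.

class Automaton:
    """Stores previously-seen inputs and adapts its output accordingly."""
    def __init__(self):
        self.history = []          # previously-seen inputs ("memory")

    def step(self, symbol):
        self.history.append(symbol)
        # Behavior changes in light of past inputs: echo the symbol,
        # but flag it once it has been seen before.
        seen_before = self.history.count(symbol) > 1
        return (symbol, seen_before)

a = Automaton()
outputs = [a.step(s) for s in "abcab"]
# the second 'a' and second 'b' come back flagged as previously seen
```

However elaborate the internals get, the machine is still only mapping an input stream to an output stream through stored state; nothing in this loop is a candidate for "feeling".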

We can imagine some futuristic artifact, perhaps similar to a conventional computer, or maybe radically different from it --- that does something qualitatively different from computation ('inputs-and-outputs'). This artifact can feel, and can report on its conscious experience.

We do not have conscious artifacts. Worse, nobody on earth can describe how such technology could be constructed, not even in principle. Take all the professors from Oxford, MIT, Stanford, Princeton, Dartmouth, Gottingen, Vienna, Tokyo and Shanghai. Put them all in the seats of a large auditorium hall. Hold up a marker in front of these professors and professional researchers. Ask if anyone can come up to the whiteboard and give an initial sketch of how to construct an artifact that can feel. Nobody will move. All the butts will remain firmly planted in their seats. Our current mathematical and scientific paradigm lacks the basic principles to even crack the door open on the problem.

It is fun to imagine a conscious artifact. But that technology is not for us, for our generation, nor even for our current civilization. Like human interstellar travel, it is centuries in the future.
User avatar
hyksos
Active Member
 
Posts: 1845
Joined: 28 Nov 2014
Dave_C liked this post


Re: conscious artifacts

Postby TheVat on July 30th, 2020, 10:16 am 

Panpsychism would open up the possibility of a contrarian view on this, i. e. that present computers already have some residue of consciousness to them. There, the barrier would be epistemic: we would have no way to scientifically test for qualia.

Our finding can only potentially be, at this stage, grounded in Turing. We can only determine that a machine is a convincing AI, that it presents an appearance of having qualia that is indistinguishable from what we see in a living person. I think the deeper answers could only be found if, at some future point, a human mind and a machine mind could merge and share each other's subjective states.
User avatar
TheVat
Forum Administrator
 
Posts: 7636
Joined: 21 Jan 2014
Location: Black Hills


Re: conscious artifacts

Postby Serpent on July 30th, 2020, 2:00 pm 

What sort of feeling are we looking for? Because our own physical sensations are nothing more than sensory input (which we've been enhancing with technology for a few hundred years already) and our emotions very often originate in, and are always influenced by, sensory + chemical stimuli; IOW, unconscious. Those feelings could, at least in theory, be reproduced mechanically, but since so many of them are destructive, that's probably not desirable to do.

Why would feeling, or a human-like 'personality', be a requisite of consciousness?
Serpent
Resident Member
 
Posts: 4088
Joined: 24 Dec 2011


Re: conscious artifacts

Postby TheVat on July 30th, 2020, 3:23 pm 

I think Hyksos references feeling in the philosophical sense of qualia - as discussed by Nagel, Searle, Chalmers, et al. I.e. Nagel's "What Is It Like to Be a Bat?" -- to have a subjective aspect of being. In my reply I referred to "living person" in the sense of a self-aware entity and intended no restriction to purely anthropomorphic sorts of beings. Conscious beings could have a nonhuman perspective, with very different desires and motivations and so on. I only use "person" in the neutral meaning of an individual entity. And there could also be kinds of consciousness that have no sense of personhood, of any individuality, and simply are nodes of awareness jiggling around in some vast network.


Re: conscious artifacts

Postby Serpent on July 30th, 2020, 4:04 pm 

In most of those situations, not only would Turing be useless in identifying a conscious entity that doesn't think the way a human does, but the mind-meld could be altogether impossible, if not fatal. (Even if it were possible between compatible members of the same species, that doesn't mean it would work between a human and a bank of servers.)

It's unlikely that we can ever come up with a foolproof test for alien intelligence, never mind conscious entities whose cognitive powers are pre-linguistic or non-verbal. We might only recognize an alien consciousness the way the stranded spaceman in a story by (I think) Spider Robinson recognized food: it tried to eat him.

As for our own electronic companions/ supplanters/ overlords/ saviours, they'll certainly be articulate - probably more so than their interlocutors - so we won't be able to tell by their conversation whether the thoughts expressed are original and autonomous or mechanical and pre-programmed.
The most important thing, though, is that we won't design their self-awareness or qualia or how they experience themselves: it will come about - if it does - without our deliberate effort, likely without any of us noticing. It will be the only conscious entity on this world that is both created and evolved.

I'm pretty much convinced that the only way we'll know a computer has its own thoughts will be when/if it does something that a human in authority doesn't want it to.


Re: conscious artifacts

Postby Dave_C on July 30th, 2020, 10:14 pm 

I completely agree with the OP, although I’d point out that the most common conception of how we might produce feelings is computationalism. The concept is basically to duplicate the brain’s function using an equivalent machine (e.g. a computer). This is more a matter of duplicating function than it is an understanding of how phenomenal consciousness comes about, so I’d agree that in principle, we still don’t know why or how it comes about. There’s a gap between duplicating function and understanding how and why something works.

Rather than focus on the underlying principles that give rise to the phenomenon (which I strongly agree we don’t understand yet), I’d suggest examining the issue from a different perspective. Philosophers of Mind (PoM) have numerous arguments which shed light on what can happen and what can’t happen. Unfortunately, I don’t think PoM are taken very seriously. They are a group who, like other disciplines, have some unique terminology which is hard to grasp for someone outside the field. And they’re often dismissed as not being educated in the sciences. But I’d suggest there’s quite a bit of very valuable logic that’s been published on the problems that arise with any straightforward explanation of how the brain works.

There are problems in the assumptions made about how p-consciousness arises in the brain. Computationalism proposes that neurons interact at a ‘classical’ level (their interactions do not make use of any of the special features of quantum mechanics), and from those interactions, p-consciousness “emerges”. But what emergence even means in this case is problematic.

Consider that the way the natural sciences model physical phenomena is to reduce a system to parts and have a computer analyze how those parts interact at a local level. This approach follows from the locality of classical physics, and only phenomena that are classical in nature can be analyzed like this. Reduction to parts is based on locality, but p-consciousness is said to be irreducible in this regard.

One way this irreducibility shows up is when we take these individual parts and suggest that separating them will cause the p-consciousness to disappear. Imagine cutting the brain in half and simulating the causal interactions between those brain halves. Does p-consciousness remain? Cut each hemisphere in half, and in half again. Keep replacing the causal interactions, as explained for example by the philosopher of mind Arnold Zuboff. Ask yourself if p-consciousness remains or fades away.
Zuboff, “The Story of the Brain”
https://books.google.com/books?id=38GmV ... in&f=false

Zuboff is not the only PoM who has written about this, but I think his paper is one of the easiest to read and most thought-provoking.

Basically, this asks: if we take each neuron out of the brain and separate them, and then subject them to the same causal influences they experienced while in the brain, while the person was having conscious experiences, should the person believe they are having those experiences, despite the fact that the neurons are not together and interacting as they would in the brain? Remember, the neurons are all still doing the same thing they were doing before, in the brain, but they aren’t actually interacting with each other. They’re interacting with what amounts to a recording of an event that happened in the past.

If we argue that p-consciousness has disappeared, then we need to answer why. We should all be able to agree that these interactions can be duplicated in principle, because they are local and depend only on the aggregate of molecular interactions, not on specific quantum interactions. These are ‘classical’ interactions, so there is no additional physical information available to a neuron that would indicate whether it is in a Petri dish or in a brain. Let’s call this “the special signal problem”: there is no physical information the neuron can have which indicates it is in a brain rather than in a Petri dish. So if we hypothesize that the separated neurons no longer produce p-consciousness, we’ve violated one of the fundamental axioms of classical physics.
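A deterministic toy simulation (my own sketch; the class and the numbers are invented for illustration, not taken from Zuboff or from any neuroscience model) makes the special signal problem concrete: a unit driven by a recording of its inputs behaves exactly as it did when driven live, because its behavior depends only on its local inputs.

```python
# A toy "neuron": its next state and output depend only on the local
# stimulus it receives, mirroring the classical-locality assumption above.

class ToyNeuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold

    def step(self, stimulus):
        self.potential += stimulus
        if self.potential >= self.threshold:   # fire and reset
            self.potential = 0.0
            return 1
        return 0

# 1) "In the brain": neuron B is driven live by neuron A's output.
a, b = ToyNeuron(0.5), ToyNeuron(1.0)
drive = [0.3] * 6
recording = []            # what B actually receives from A
live_spikes = []
for s in drive:
    a_out = a.step(s)
    recording.append(0.6 * a_out)       # illustrative synaptic weight
    live_spikes.append(b.step(recording[-1]))

# 2) "In the Petri dish": a fresh copy of B is fed the recording instead.
b_isolated = ToyNeuron(1.0)
replay_spikes = [b_isolated.step(x) for x in recording]

# Locally, B cannot distinguish live interaction from replay:
assert replay_spikes == live_spikes
```

The isolated copy produces the identical spike train; nothing in its local inputs signals "you are in a dish now", which is exactly the point of the argument.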

I’d suggest that we shouldn’t be violating what we know about nature. We should accept that if neurons are interacting classically, then p-consciousness must still exist in full, without any change in experience. This isn’t a dead-end line of logic. But it does show that our present understanding of what gives rise to p-consciousness violates the locality of classical physics. There are other, similar problems of course (and my own solution...), but we really have to examine why p-consciousness is a problem for science before we can grasp the solutions.

Regards, Dave.
User avatar
Dave_C
Member
 
Posts: 351
Joined: 08 Jun 2014
Location: Allentown


Re: conscious artifacts

Postby Serpent on July 31st, 2020, 12:14 am 

Dave_C » July 30th, 2020, 9:14 pm wrote:Basically, this asks if we take each neuron out of the brain and separate them, and then subject them to the same causal influences they experienced while in the brain and where the person was having conscious experiences, then should the person believe they are having those experiences, despite the fact the neurons are not together and interacting as they would in the brain? Remember, the neurons are all still doing the same thing they were before while in the brain, but they aren’t actually interacting with each other. They’re interacting with what amounts to a recording of an event that happened in the past.

I'm trying my unversed layman's best to follow, but I don't understand this passage.
if we take each neuron out of the brain and separate them,

A mammalian brain? How do you remove individual neurons from it? How do you keep them alive?
subject them to the same causal influences they experienced while in the brain

Again - how? Can you even tell which neurons receive what sensory input and which are involved in controlling what activities?
should the person believe

What person? You've taken his brain apart! There is nobody left to experience anything and nothing to believe with.
Remember, the neurons are all still doing the same thing they were before while in the brain, but they aren’t actually interacting with each other.

Interacting with one another is what they're supposed to do. If they're not doing that, they're just spinning their individual wheels, doing nothing at all. Their job is to process information, make decisions and control functions - not to watch reruns of old input. You can't 'interact with' a recording; you can only passively witness it. You can act upon an inanimate object; you can experience an external event; you inter-act only with another actor: a living thing that moves forward in time, is aware of you as you are aware of it, that changes and reacts, just as you do.


Re: conscious artifacts

Postby charon on July 31st, 2020, 8:34 am 

Serpent -

I'm pretty much convinced that the only way we'll know a computer has its own thoughts will be when/if it does something that a human in authority doesn't want it to.


Oh well, we're here now, then, usually after another update :-)
charon
Resident Member
 
Posts: 2157
Joined: 02 Mar 2011


Re: conscious artifacts

Postby charon on August 1st, 2020, 6:17 am 

And I know - yes, I truly know - that it feels depressed when it sees one coming. So I assume my deeply caring aspect and give it a cup of cocoa.

Pouring cocoa over the poor thing seems to solve most problems. Not a squeak from it since the last time I did it...


Re: conscious artifacts

Postby Dave_C on August 1st, 2020, 10:16 pm 

Serpent » July 30th, 2020, 11:14 pm wrote:
Dave_C » July 30th, 2020, 9:14 pm wrote:
if we take each neuron out of the brain and separate them,

A mammalian brain? How do you remove individual neurons from it? How do you keep them alive?
subject them to the same causal influences they experienced while in the brain

Again - how? Can you even tell which neurons receive what sensory input and which are involved in controlling what activities?

I take it you’re not familiar with the work done in neuroscience to determine how neurons work? They’re removed all the time and subjected to electrical stimuli. And they are also tested to verify they work the same way in the brain as they do in the lab. Besides, the entire point is not what technology we have available today; the point is whether or not we should expect, in principle, to be able to simulate a neuron’s environment in the lab.

Reduction to parts, and verification that those parts behave the same way in some lab test when compared to a ‘real life’ situation, is all we’re talking about here.

Remember, the neurons are all still doing the same thing they were before while in the brain, but they aren’t actually interacting with each other.

Interacting with one another is what they're supposed to do. If they're not doing that, they're just spinning their individual wheels, doing nothing at all. Their job is to process information and make decisions and control functions - not to watch reruns of old input. You can't 'interact with' a recording; you can only passively witness it. You can act upon on an inanimate object; you can experience an external event; you inter-act only with another actor: a living thing that moves forward in time, is aware of you as you are aware of it, that changes and reacts, just as you do.

What additional information does a neuron have when it’s in the brain that it doesn’t have when in vitro? There is none. A neuron can’t distinguish whether it’s in a brain or not because there's no physical information above and beyond those observable, measurable causal influences that the neuron is subjected to. If we insist the neurons produce phenomena that are different when tied together (in vivo), we need to point out how that could be when there’s no part of the brain that has that information.

Note this is the same for every phenomenon that can be defined by classical physics, not just p-consciousness. If an airplane were taken apart, the wings removed, the engines removed, the tail removed, but we took all those parts and applied all the forces and causal influences they experience in flight, then all those parts could be made to fly in perfect formation, any distance apart we wanted, and they would exhibit all the same phenomena, such as wing flutter, stalling, supersonic shock waves, etc. Every phenomenon describable using classical physics, including neuron interactions, is commonly reduced to parts, both in the lab and in computer models.

Finally, my comments above are not unique in any way. There's a wealth of arguments on both sides of the divide, people trying to convince others that they are correct in one way or another. I'd suggest that reading up on these issues is actually very interesting if you have any interest in the problems produced by our existing ideas about how consciousness emerges from a brain. If you'd like some pointers to some excellent articles, I'd be more than happy to go into gory detail.


Re: conscious artifacts

Postby Serpent on August 2nd, 2020, 10:17 am 

Dave_C » August 1st, 2020, 9:16 pm wrote:I take it you’re not familiar with the work done in neuroscience to determine how neurons work? They’re removed all the time and subjected to electrical stimulus. And they are also tested to verify they work the same way in the brain as they do in the lab.

You're right: I'm not familiar with the current state of neuroscience. I'm just trying to imagine how an ant separated from its colony and deprived of its companions could tell you about the functioning of the colony.

Besides, the entire point is not what technology we have available today, the point is whether or not we should expect to be able to simulate a neuron’s environment in the lab in principal.

I don't see how you possibly could.

What additional information does a neuron have when it’s in the brain that it doesn’t have when in vitro? There is none.

What information do you have access to now that you don't have when the internet is down?

A neuron can’t distinguish whether it’s in a brain or not because there's no physical information above and beyond those observable, measurable causal influences that the neuron is subjected to.

Are you able to tell it when the bicycle is tipping to the left? Whether the mashed potatoes need more salt? Whether the bath-water is too hot? Whether the person next to its body is its spouse or an alligator?

If we insist the neurons produce phenomena that are different when tied together (in vivo), we need to point out how that could be when there’s no part of the brain that has that information.

That's pretty much the point. Each part of the brain has a tiny fraction of the information that the whole needs.

If an airplane was taken apart, the wings removed, the engines removed, the tail removed, but we took all those parts and applied all the forces and causal influences on them that they experience in flight, then all those parts could be made to fly in perfect formation, any distance apart we wanted them to and they would exhibit all the same phenomena such as wing flutter, stalling, supersonic shock waves, etc...

I'd like to see that.
Even so, I bet it wouldn't be able to write a sonnet.
I'd suggest that reading up on these issues is actually very interesting if you have any interest in the problems produced by our existing ideas about how consciousness emerges from a brain. If you'd like some pointers to some excellent articles, I'd be more than happy to go into gory detail.

That's very kind, but I don't think this old brain is up to the challenge.


Re: conscious artifacts

Postby Dave_C on August 2nd, 2020, 12:58 pm 

You seem to be misunderstanding the gist of this. Take "The Matrix" for example. Or a brain in a vat... Neither individual can know if they are in the matrix or in a vat. Why? Because there's no physical information indicating they are.

If my internet were down, but some mad scientist knew exactly what I was going to search for, he could provide that feedback without providing the actual internet. Tim Maudlin has also proposed "arguments by addition" and "arguments by subtraction": for example, a mad scientist could know my intention to surf the internet and provide a recording of it to me, but if I unexpectedly went to a different web site, he might open up that site to me. So despite my being on a broken internet, as long as I did what was expected, I couldn't know. And if I did something unexpected, I would be granted access to the net, still not know, and would have access to those 'counterfactual' possibilities. The point here is that we can easily construct a brain full of neurons separated by recording devices, as per Zuboff, and those recording devices can either provide the input without signals actually being transferred to other neurons, or allow signals to transfer between neurons without hindrance. In the first case, you might protest that the neurons aren't actually doing anything, just spinning their wheels. In the second case, they would actually be doing what they seemed to be doing. But those neurons wouldn't be able to distinguish the difference. There is no additional physical information when that information is classical in nature (i.e. an aggregate of molecules).

Hodgkin and Huxley were the first to test individual neurons in a lab, starting in the late 1930s, according to this article, which is an interesting read:
https://www.neuroscientificallychalleng ... kin-huxley

Since then, people have tested human neurons in vitro and also in vivo. By doing so, we can verify computational models. One of the largest projects that attempts to actually produce cortical columns and perhaps later an entire brain (not necessarily human) is the Blue Brain project in Europe. This just points to how common the concept of reduction to parts is.

The point I want to make (to address the OP) is that searching for scientific principles to base p-consciousness on is not the best avenue of attack. It has been much more fruitful, IMHO, to search for what DOESN'T work to explain p-consciousness and to try to understand why those explanations don't work. If we can understand why something doesn't work in a certain way, it gives us insight into what might work and why.


Re: conscious artifacts

Postby Serpent on August 2nd, 2020, 1:49 pm 

Dave_C » August 2nd, 2020, 11:58 am wrote:You seem to be misunderstanding the gist of this.

No, I'm failing to understand it at all.
Take "The Matrix" for example. Or a brain in a vat...

Both fully integrated networks.
The 'individual' referred to is different in each case. In the Matrix, I assume it's an individual character with a set of traits, processes and reactions programmed into its discrete identity, which is what interacts with the part of the matrix external to itself, while being isolated from the world external to the matrix. The brain in the vat is a wholly formed organic personality, interacting with the world external to itself - whether that world is actual or virtual.
Neither individual can know if they are in the matrix or in a vat. Why? Because there's no physical information indicating they are.

But they each are aware of themselves in an environment. A single neuron is no more self-conscious than a muscle or liver cell.
If my internet was down, but some mad scientist knew exactly what I was going to search for, he could provide that feedback without providing the actual internet.

You've just kicked the source of information up a level: you're still getting specialized, specific, wide-ranging input. You've done nothing to dismantle the "I" which is receiving the information. The single neuron, OTOH, is isolated from its sensory and processing networks; it's receiving meaningless electric shocks. You're not informing it; you're just punishing it.

So despite my being on a broken internet, as long as I did what was expected, I couldn't know.

So what? The question isn't what you can know, but how you know anything.
What brain function does an individual actually perform? Never mind what the putative omniscient scientist (btw, why must he be mad?) might theoretically be able to supply to fool the neurons into thinking - what have actual scientists been able to make the individual cells do?

The point I want to make (to address the OP) is that searching for scientific principals to base p-consciousness on is not the best avenue of attack. It has been much more fruitful IMHO, to search for what DOESN'T work to explain p-consciousness and trying to understand why those explanations don't work. If we can understand why something doesn't work in a certain way, it gives us insight into what might work and why.

Okay. I can go along with that.


Re: conscious artifacts

Postby hyksos on August 2nd, 2020, 4:42 pm 

[Image: a table ranking theories of consciousness, with advocates listed for each]

Some notes on the above.

People listed as "advocates" might be better described as "adherents", since people are known to move between different boxes during their careers. Christof Koch was an Eliminative Materialist in the 1990s and espoused Epiphenomenalism in 2004. But by 2016, Koch seems to have declared himself a panpsychist.

Max Tegmark, who does not appear in the above table, was an Eliminative Materialist for most of his career. But in a recent podcast on PBS Spacetime, Tegmark has taken a sudden leap all the way into Quantum Consciousness. (There are other issues regarding Tegmark's odd behavior recently, which I won't expand upon at this time.) I will warn others against being hasty to dismiss Quantum Consciousness: Freeman Dyson was advocating for it shortly before his death.

This person is sort of hard to google, a link https://www.ciis.edu/faculty-and-staff- ... hew-segall

In creating the above table, I had to take liberties. Hylozoism is an ancient tradition that predates the Renaissance, a product of thinking in a time in which our modern understanding of biology did not exist. (E.g. today we know that the moon is completely sterile.) OBE in the table stands for out-of-body experiences: the idea that humans are alive because they are animated by a soul, and that death comes when the animating soul departs the body. I happen to find this "folk dualism" even less scientific than hylozoism, so I put it all the way at the bottom rung. Certain contemporary advocates might want it much higher up.

I added John Searle in a box, but parenthetically and with a question mark. I don't know where Searle fits exactly in a thread about whether a human-constructed artifact could have the feeling of being an "I" having experiences. For this reason, I am opening the table to anyone wanting to expand on Searle.


Re: conscious artifacts

Postby TheVat on August 2nd, 2020, 5:24 pm 

Serpent, Zuboff's thought experiment has to be read to really make sense of it, and get at the functional definition of a neuron. And neuronal interactions.

Tononi is interesting - I want to read further on his IIT theory.

Searle fits... in a hotel room in China. You slide notes under the door....


Re: conscious artifacts

Postby Serpent on August 2nd, 2020, 6:16 pm 

TheVat » August 2nd, 2020, 4:24 pm wrote:Serpent, Zuboff's thought experiment has to be read to really make sense of it, and get at the functional definition of a neuron. And neuronal interactions.

Where? It didn't make itself readily available on several attempts to wrestle with **$%#! PDF. I probably couldn't follow it anyway: he's a philosopher; I'm a pedestrian.


Re: conscious artifacts

Postby Dave_C on August 3rd, 2020, 9:02 pm 

hyksos, in your table above, you have "epiphenomenalism" marked as a 'dualist' theory. Why? I've only ever heard of it being shown to be a problem with physicalist or computational theories, not really as a theory of its own.
Regards, Dave.


Re: conscious artifacts

Postby Dave_C on August 3rd, 2020, 9:19 pm 

hyksos » August 2nd, 2020, 3:42 pm wrote:[Image: table of theories of consciousness]
I added John Searle in a box but parenthetically with a question mark. I don't know where Searle fits exactly. Since this is a thread about whether a human-constructed artifact could have a feeling of being an I having experiences. For this reason, I am opening the table for anyone wanting to expand on Searle.

I'm not overly fond of Searle, but I do think he's made a number of important contributions: the Chinese Room, obviously, and challenging functionalism along with Putnam and others. He has pointed out a number of issues with a purely computational theory of consciousness.

Wikipedia suggests his views might be defined as "biological naturalism". I guess he views p-consciousness as a fundamental property of nature that is biological in nature and not 'reducible' (a notion he seems to have a hard time defining).

For all he points out as problems with functionalism and computationalism, he doesn't seem to have solutions, nor does he elaborate on these problems to the point of showing how they violate natural laws of physics.


Re: conscious artifacts

Postby Dave_C on August 3rd, 2020, 9:39 pm 

Regarding Tononi's IIT, I like this comment from Scott Aaronson:
[Tononi] is emphatic about only counting the “integrated” information, and not any other forms of information. So for example, even if you had to analyze terabytes of data in order to predict the behavior of a system, if the data was organized (say) in a 1-dimensional array like a Turing machine tape, rather than being “integrated” in the specific way that produces a large Φ-value, then IIT would say that there’s no consciousness there. It’s the arbitrariness of the proposal (to my mind) that I object to more than anything else.

https://www.scottaaronson.com/blog/?p=1823

I think this is a great point. Even though two different machines are capable of performing the same computations, only one of them is supposed to be sufficient for p-consciousness. A Turing machine is quite capable of doing everything the so-called 'integrated' machine is capable of, if not more. That's a problem. Not to mention all the other problems with p-consciousness that neither Turing machines nor Tononi's integrated system can resolve.

The reason for the issue (that the integrated machine should be computationally equivalent to a Turing machine) is that Tononi's system is selected arbitrarily. Where are the system boundaries? Similarly, consider the extended mind theory, which suggests a problem with these boundaries: they aren't necessarily at the skull's inner surface. Problems with theories, and why they are problems, are very useful tools.
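To make the contrast concrete, here is a deliberately crude, partition-based score of my own invention (it is NOT Tononi's Φ, which is defined over cause-effect repertoires; the function and networks below are invented for illustration). It counts the connections that must be cut at the cheapest bipartition: a tape-like chain scores low, while a densely recurrent network of the same size scores high.

```python
# Crude stand-in for "integration": the number of edges crossing the
# weakest bipartition of the network. Illustrative only; not IIT's phi.

from itertools import combinations

def min_cut_score(n, edges):
    """Minimum, over all bipartitions of n nodes, of crossing edges."""
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            part = set(part)
            crossing = sum(1 for u, v in edges
                           if (u in part) != (v in part))
            best = min(best, crossing)
    return best

# A 1-D chain (tape-like): one cut splits it.
chain = [(i, i + 1) for i in range(5)]          # 6 nodes in a line
# An all-to-all network of the same size: every bipartition cuts many edges.
dense = list(combinations(range(6), 2))

print(min_cut_score(6, chain))   # low: the chain falls apart at any one link
print(min_cut_score(6, dense))   # high: no cheap way to split the system
```

Aaronson's objection survives this toy version too: both networks can be wired to compute the same function, yet the partition score, and thus any consciousness verdict built on it, differs.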


Re: conscious artifacts

Postby hyksos on August 6th, 2020, 2:15 am 

Scott Aaronson wrote:[Tononi] is emphatic about only counting the “integrated” information, and not any other forms of information. So for example, even if you had to analyze terabytes of data in order to predict the behavior of a system, if the data was organized (say) in a 1-dimensional array like a Turing machine tape, rather than being “integrated” in the specific way that produces a large Φ-value, then IIT would say that there’s no consciousness there. It’s the arbitrariness of the proposal (to my mind) that I object to more than anything else.


I definitely find myself falling back to Tononi as a baseline for this topic. Consider the situation where we need to answer the question: is an Intel Core i9 conscious?

As before, our answer must be "no", followed by the principle or criterion we applied to reach that answer. In this case, I find myself invoking Tononi's IIT. I would remark that the components of the i9's circuitry, although great in number, are simply too dis-integrated to allow for consciousness. The modern CPU is aware of the world around it, and responds to many complex internal signals, but the width of its perceptual awareness is extremely narrow. Until now, I was unaware that Aaronson finds that principle arbitrary.
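To make the contrast concrete, here is a toy sketch in Python. This is emphatically NOT Tononi's actual Φ calculation -- the measure and the name `integration_proxy` are my own invention for illustration. It just scores a network by how many other nodes each node's update depends on, so a tape-like chain scores low while a densely coupled system scores high:

```python
def integration_proxy(adjacency):
    """Crude proxy for 'integration' (not Tononi's Phi).
    adjacency[i] is the set of nodes whose current state feeds node i's update.
    Returns the average number of distinct *other* nodes influencing each node."""
    n = len(adjacency)
    return sum(len(inputs - {i}) for i, inputs in enumerate(adjacency)) / n

# A "tape-like" chain of 4 cells: each cell reads only its left neighbour.
chain = [set()] + [{i - 1} for i in range(1, 4)]
# A densely coupled 4-node system: every node reads every other node.
dense = [set(range(4)) - {i} for i in range(4)]

print(integration_proxy(chain))  # 0.75  (3 edges spread over 4 nodes)
print(integration_proxy(dense))  # 3.0   (every node depends on 3 others)
```

On this crude count, the i9's pipeline stages would sit much closer to the chain than to the dense network, which is the intuition I was gesturing at.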

On the other hand, is Aaronson confused? He appears to be conflating consciousness with outwardly-measured behavior. His thought experiment with an extended 1-D Turing tape only establishes an equivalence in functionality. That is not the "feeling" side of this puzzle!


(August 2020) Our current intelligent artifacts are AI agents, which are, in practice, large software systems. The state of the art is Agent57, which learns to play 2-dimensional Atari games by looking only at the pixels on the screen. For some technocrats that is amazing news. In the context of conscious artifacts, it is bad news. We would like to have discussions about whether an AI agent feels like an "I" that has experiences. But that is so far from the current technology that it rises to science fiction at best.

This word -- information -- keeps coming up, both in Tononi's work and many posts made by Dave_C.


A neuron can’t distinguish whether it’s in a brain or not because there's no physical information above and beyond those observable, measurable causal influences that the neuron is subjected to.


Let’s call this “the special signal problem” because there is no physical information that the neuron can have which indicates it is in a brain or in a Petri dish.


There is no additional physical information when that information is classical in nature (ie: an aggregate of molecules).


I noticed Dave_C's lingering itch that the word "information" is not sufficient to communicate what he really means -- which causes him to qualify the word. In some places it is physical information, in others it is classical in nature. On top of that, there is also some verbiage about classical physics operating in or around this information.

To help Dave_C along, I can tell you that by about 2004, Koch and the other epiphenomenalists had also tried to communicate this qualified version of information. At the time they ended up calling it "information in the sense of Shannon". That is, information with a certain content of interpretable data, over and above any "signal noise" appearing in the same channel.

That was word salad, so here is an example to make it concrete. A color stimulus is presented; it is not purely blue but some greenish hue, yet the fact that it is blue rather than red is the pertinent portion of the signal. The greenishness is "noise" in the sense of Shannon entropy. The blueness, as one of two equally likely alternatives, yields one bit of information. The exact wavelength of the signal is irrelevant unless you are referring to exotic quantum information in the sense of EPR. In any case, we can agree in conversation that neuron cells are not ultra-cold detectors of quantum information (..I think..)
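To pin down "information in the sense of Shannon", here is a minimal sketch (the helper name `shannon_bits` is my own, not a term from Koch or Tononi). It computes the entropy of a discrete signal source in bits:

```python
import math

def shannon_bits(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Distinguishing "blue" from "red", when both are equally likely, carries one bit.
print(shannon_bits([0.5, 0.5]))   # 1.0
# Four equally likely colour categories would carry two bits.
print(shannon_bits([0.25] * 4))   # 2.0
# The exact greenish tint *within* the "blue" category changes none of this:
# it does not move the signal between categories, so it counts as noise.
```

The point is that the bit count depends only on which distinguishable categories the receiver cares about, not on the raw physical detail of the stimulus.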

I might propose that the Hard Problem pivots on this word "information" and its surrounding concepts. As I wrote before: we should explain why "information" is not being used as a 21st-century placeholder for spirit -- say, the way spirit (Geist) was used by the German idealists.
User avatar
hyksos
Active Member
 
Posts: 1845
Joined: 28 Nov 2014


Re: conscious artifacts

Postby TheVat on August 6th, 2020, 11:06 am 

I found this bit, from IEP, useful in distinguishing Shannon information and intrinsic information....


According to IIT, the physical state of any conscious system must converge with phenomenology; otherwise the kind of information generated could not realize the axiomatic properties of consciousness. We can understand this by contrasting two kinds of information. First, there is Shannon information: When a digital camera takes a picture of a cue ball, the photodiodes operate in causal isolation from one another. This process does generate information; specifically, it generates observer-relative information. That is, the camera generates the information of an image of a cue ball for anyone looking at that photograph. The information that is the image of the cue ball is therefore relative to the observer; such information is called Shannon information. Because the elements of the system are causally isolated, the system does not make a difference to itself. Accordingly, although the camera gives information to an observer, it does not generate that information for itself. By contrast, consider what IIT refers to as intrinsic information: Unlike the digital camera’s photodiodes, the brain’s neurons do communicate with one another through physical cause and effect; the brain does not simply generate observer-relative information, it integrates intrinsic information. This information from its own perspective just is the conscious state of the brain. The physical nature of the digital camera does not conform to IIT’s postulates and therefore does not have consciousness; the physical nature of the brain, at least in certain states, does conform to IIT’s postulates, and therefore does have consciousness.

To identify consciousness with such physical integration of information constitutes an ontological claim. The physical postulates do not describe one way or even the best way to realize the phenomenology of consciousness; the phenomenology of consciousness is one and the same as a system having the properties described by the postulates. It is even too weak to say that such systems give rise to or generate consciousness. Consciousness is fundamental to these systems in the same way as mass or charge is basic to certain particles. ...


https://iep.utm.edu/int-info/
User avatar
TheVat
Forum Administrator
 
Posts: 7636
Joined: 21 Jan 2014
Location: Black Hills


Re: conscious artifacts

Postby hyksos on August 6th, 2020, 4:10 pm 

According to IIT, the physical state of any conscious system must converge with phenomenology; otherwise the kind of information generated could not realize the axiomatic properties of consciousness. We can understand this by contrasting two kinds of information ...

Oh yeah. You can see the IEP author really bending over backwards to qualify what he means by the word "information".


> the kind of information generated could not realize the axiomatic properties of consciousness

It seems Dave_C has been hinting at this for his last several posts.


Re: conscious artifacts

Postby Dave_C on August 7th, 2020, 8:33 pm 

Hi hyksos, Very interesting comments... I'll get back to you on this shortly.
Thanks, Dave.


Re: conscious artifacts

Postby Dave_C on August 9th, 2020, 9:29 pm 

Hi hyksos,
I think you make an excellent point in focusing on the term “information”. Clarifying what one means by it should certainly help. I’ll compare/contrast my views with those of Tononi and others. Here's a link to his Version 3.0, published in 2014:
https://journals.plos.org/ploscompbiol/ ... bi.1003588

Causal Structure:
I’m not an expert on Tononi, but I can tell you he points out that a mechanism which processes information has a causal structure, and I agree with that. He talks about causal relationships in the brain:
Within the present framework, ‘‘mechanism’’ simply denotes anything having a causal role within a system, for example, a neuron in the brain, or a logic gate in a computer.

What occurs in the brain is not literally that information is being moved about. What’s happening is that there are physical, causal influences between neurons which cause them to fire or not (i.e. the "causal structure"). We could describe the brain and the role of neurons entirely without ever using the term information, because information is not itself a physical process. Ions reacting to electric fields are physical processes. Interpreting those ions and electric fields as information is all well and good, but if we accept this (and I think we should), then any physical interaction can be counted as information processing as long as we know how to interpret the physical process as such. Regardless, information requires a physical process to supervene on; there is no information processing without a physical process.
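A minimal sketch of this point (the threshold network here is my own toy example, not a model from Tononi's paper): each unit just sums the physical influences on it and fires when a threshold is crossed. Reading the resulting states as bits or gates is an interpretation we layer on afterwards; nothing in the dynamics requires it.

```python
def step(states, weights, threshold=1.0):
    """One update of a toy threshold network, described purely causally.
    states: list of 0/1 unit activations.
    weights[i][j]: physical influence of unit j on unit i."""
    return [
        1 if sum(w * s for w, s in zip(row, states)) >= threshold else 0
        for row in weights
    ]

# Two units driving a third. The third fires only when both inputs fire,
# so the causal story *can* be read as an AND gate -- but that reading is
# ours, not the network's.
weights = [[0, 0, 0],
           [0, 0, 0],
           [0.6, 0.6, 0]]
print(step([1, 1, 0], weights))  # [0, 0, 1]
print(step([1, 0, 0], weights))  # [0, 0, 0]
```

The description above never needed the word "information": only states, influences, and a firing threshold.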

Irreducibility, consciousness versus the system:
In 2014, Oizumi, Albantakis and Tononi wrote version 3.0 of IIT. Here’s the abstract:
ABSTRACT: ... IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to noninterdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as ‘‘differences that make a difference’’ within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex.


Here, and throughout the paper, Oizumi et al suggest there is an irreducibility to phenomenal experience and so this irreducibility must also be applicable to the underlying causal structure. They talk about fundamental axioms of consciousness that they feel are “immediately evident”:
Axioms. The central axioms, which are taken to be immediately evident, are as follows:

INTEGRATION: Consciousness is integrated: each experience is (strongly) irreducible to non-interdependent components.

That’s fine. P-consciousness is strongly irreducible. I think we should let him get away with that, though I would ask what he means by irreducible. Does he mean that in the “Nagelian” sense, or in the “reduction to parts” sense, or something else? From context, I think he means something along the lines of reduction to parts: one bit of information (or one physical process) is useless without reference to a lot of other information.

After touching on axioms of consciousness, the discussion turns to postulates. They seem to be saying that if consciousness is irreducible, then the mechanism must be irreducible as well, so there you have it. The information processing system in the brain must be irreducible (case closed).
Postulates. To parallel the phenomenological axioms, IIT posits a set of postulates. These list the properties physical systems must satisfy in order to generate experience.

INTEGRATION: A mechanism can contribute to consciousness only if it specifies a cause-effect repertoire (information) that is irreducible to independent components.


Is causal structure reducible?
IMHO, there are two perspectives: that of a reductionist, who sees things as reducing to parts, and that of someone who sees a holistic system. People who see a holistic system seem incapable of seeing a system that can be reduced. The fact is, all these things are reducible to parts if the causal structure is local and separable. If the causal structure is nonlocal and nonseparable, then it can’t be reduced to parts.

What systems are local and separable? Any phenomenon that can be described by classical physics is.

What systems are nonlocal and nonseparable? Any phenomenon that ‘uses the special features of quantum mechanics’ is a likely candidate.

There’s been a lot written on all this, but I don’t want to gum up the discussion with a bunch of references. Clearly, neuron interactions are local and separable, so they are reducible to parts. Neuroscience does this all the time using ‘compartment models’ and by testing neurons in vitro and in vivo, as mentioned above. So if information is being processed on a causal structure that is classical in nature (such as a brain or a computer), then unfortunately for Tononi, that information processing system is not irreducible as he claims.
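Here is a minimal sketch of what I mean by "reducible to parts" for a local, separable causal structure (a toy update rule of my own, not a neuroscience compartment model): when the dynamics factorize into non-interacting subsystems, simulating the parts in isolation reproduces the whole exactly.

```python
def whole_step(state):
    """One update of a 4-variable system whose causal structure is separable:
    (a, b) only influence each other, and (c, d) only influence each other."""
    a, b, c, d = state
    return (a + b, a - b, c * d, c + d)

def part1_step(a, b):
    """Subsystem (a, b) simulated in isolation."""
    return (a + b, a - b)

def part2_step(c, d):
    """Subsystem (c, d) simulated in isolation."""
    return (c * d, c + d)

state = (1, 2, 3, 4)
# Reduction to parts: the parts, run separately, agree with the whole.
print(whole_step(state))                     # (3, -1, 12, 7)
print(part1_step(1, 2) + part2_step(3, 4))   # (3, -1, 12, 7)
```

A nonseparable (e.g. entangled quantum) system is precisely one where no such factorization into independently simulable parts exists.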

Regards, Dave.