(December 30, 2015 at 11:21 am)Rhythm Wrote:
(December 29, 2015 at 10:58 pm)emjay Wrote: I wonder if it is the case that the colour qualia we 'see' are the only way of representing the data that meets all the constraints of the system... that the palette we see emerges because it is the only way to differentiate, in the right ways, between the different states that are represented in the underlying neural hardware. That somehow an inverted colour world fails somewhere to meet the constraints of the actual brain-in-state and therefore does not, and cannot, appear. That therefore all perception, whatever type it is, 'presents' the data in the only way it can to fulfil its objectives.
Any thoughts are welcome on any aspect of this.
Could I get some clarification? Are you asking whether we perceive things the way we do because this is the only way that we, as a system, -can- perceive them... or because this is the only manner in which they -could be- perceived? That any system would have to arrive at this solution in order -to- perceive? The former is likely, to a degree... the latter would be difficult to substantiate.
Hi Rhythm. The former, I think - if I understand you correctly. I'm not suggesting that colours are the only thing 'out there' and that it falls to a system to tap into them in order to perceive. Indeed, what I meant by talking about different animals with different senses and different perceptions was that the possibilities for perception 'design' look endless. Not even our imagination is a limit, because we can't imagine perceptions we don't share - for instance I can't imagine what it would be like to have no sight... not a black visual field but no visual field at all... I can't imagine it. Endless, that is, in terms of design space, or evolution's imagination if you want to put it that way.

Rather, what I meant was that a system with a given set of constraints - in this case the human brain - could only satisfy those constraints by using colour as we know it. It's not just colour that contributes to the visual scene, but lines, shapes, the actual topography etc, and I think it's possible that all those constraints together restrict the possible expressions of the information. That our multi-modal perception is so rich and 'distinguished' precisely because it has to be: so much information is vying for expression and constraining the output at the same time. But it's just a thought. If it's not something like that, then you have to ask how the system specifies the form of the perception, and that looks a much harder question.
Quote: It certainly appears to me to be the case that colours are produced in the brain on the fly, as it were - that we have not necessarily seen every colour it is possible for us to see, and if each one were neurally encoded individually the brain would be a lot bigger than it actually is, because there would be so many possibilities.
Quote: There is a point at which you need to make the architecture bigger to handle more variables... but consider how many individual colors you can designate in each bit place of a relatively simple computational device. I don't know that the brain would have to be immensely large to hold a designating variable for many more colors than we can currently see. After all, we have names for colors that we can't see. We have already defined them -as variables-. We're not entirely sure that we retain every specific instance or memory of color, btw... we have reason to suspect precisely the opposite. If the data is largely dumped after any relevant computation is performed, then no amount of new colors, however large, would overload the capacity of the system. Basically just discussing a theoretical problem that can be solved by a "biological register"... yeah?
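Before replying, here's a toy sketch in Python of that 'register' point as I understand it - purely illustrative, nothing to do with actual neural coding:

```python
# Toy sketch of the "register" idea (illustrative only, not a brain model):
# a fixed-width register can index far more colour values than we will ever
# encounter, so capacity need not grow with the number of colours seen.

def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channel values into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value: int) -> tuple:
    """Recover the three channel values from the packed integer."""
    return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)

# 24 bits already name 2**24 = 16,777,216 distinct colours, including ones
# nobody has ever looked at. The register is reused for each new input, so
# nothing has to be stored per colour - the "data is dumped after the
# computation" point above.
print(2 ** 24)                             # 16777216
print(unpack_rgb(pack_rgb(255, 0, 128)))   # (255, 0, 128)
```

The register never grows when a new colour arrives; only the value held in it changes.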
Yeah, I'm sorry about that - I think that was a mistake on my part, so forget I said that bit; I didn't think it through enough. I'm not a neuroscientist and have no training whatsoever in it, but I am interested in it, so I may have misunderstood or oversimplified.

But as I understand it, cones in the retina are used for distinguishing colour, and they are tuned to detect red, green, or blue light. That information is transferred into the brain, via the optic nerve, on an almost one-to-one basis (albeit with some extra information added at the retina end by other cells), such that the topographical structure of the retinal map is largely preserved inside the brain. Cones can also respond more rapidly to changes than other cells. So what appears to me to be the case is that there is a map of neurons in the brain that is updated in real time with 'pixel' information from the retina - each cone is represented individually (or near enough) on this map. But that information arrives in the form of RGB values.

So the question for me became: what happens to it next - how can it be further transformed/abstracted by a neural network? Thanks to wallym's post and looking up colour perception in bees, I came across a site which said that, in comparison to bees, humans can distinguish about 60 different colours based on RGB values, and that kind of answers it for me. If it works that way, each 'pixel', as represented by its R, G, and B values, need only trigger one of about 60 different detectors, and the variety of the colours we see would come from adjusting their brightness, which would come in from another source (there's a toy sketch of this at the end of this post). So if that sort of thing is the case, there would be no exponential increase in the number of neurons required and the brain could stay small ;-) In other words, the topographical map, where the colour of pixels is concerned, could consist (somehow) of 60 colour neurons (where one neuron is just a simplification), and their rate of firing could indicate brightness. So I don't see any problem any more, because that much is conceivable.

Not to say that it would be that simple, because where visual processing is concerned (and everywhere else) the brain is very modular, with neural detectors for a great many aspects of the visual scene... detectors for lines in one orientation or another, for contrast, for depth, for movement etc, which then get transformed into greater and greater levels of abstraction, like shapes. And all these networks are interconnected with each other in multiple ways, with feedforward, feedback, and lateral connections, so how and where colour discrimination fits into such a complicated system is anybody's guess - but at least now I can conceive of how it could fit in, roughly :-)

And no, I don't believe either that every instance of colour is retained in memory... imo the real-time topographical map of the cones is transient and just reflects the current, ever-changing state. To store it all longer term would require an enormous increase in the number of neurons, and thus not be feasible - which was part of my worry in the first place.
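And here's the toy sketch of that 60-detector picture I promised above - everything in it is made up for illustration (the detector tunings, the brightness measure), so it's just to show that a small fixed set of detectors plus a separate brightness signal could cover the palette without a neuron per colour:

```python
# Toy sketch of the "60 colour detectors + brightness" idea described above.
# The detector tunings and brightness measure are invented for illustration;
# this is not a model of real neurons.

import math
import random

random.seed(0)

# Hypothetical palette: 60 detectors, each with an (arbitrary) preferred
# point in normalised-colour space.
DETECTORS = [(random.random(), random.random(), random.random())
             for _ in range(60)]

def classify(r: float, g: float, b: float):
    """Return (detector index, brightness) for one 'pixel'.

    The colour channel is just the nearest detector by chromaticity (RGB
    with the overall intensity divided out); brightness is carried
    separately, standing in for a firing rate.
    """
    total = r + g + b
    if total == 0:
        return 0, 0.0
    brightness = total / 3.0
    chroma = (r / total, g / total, b / total)
    nearest = min(range(len(DETECTORS)),
                  key=lambda i: math.dist(chroma, DETECTORS[i]))
    return nearest, brightness

# A dim and a bright version of the same colour drive the same detector;
# only the brightness value differs. So ~60 detectors plus one brightness
# signal span the palette without a neuron per colour.
print(classify(0.8, 0.1, 0.1))    # (some index k, brightness 0.333...)
print(classify(0.4, 0.05, 0.05))  # (same index k, brightness 0.166...)
```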