I've started reading the link but will need to spend a bit of time on it. The opening point about sleep normalising synaptic strength sounds very interesting, though.
I don't see consciousness as a hard problem at all. I think it's actually very simple when understood in terms of neural networks.
Here's a thought experiment. Take a simple neural network in which each input corresponds to a certain environmental state. As the inputs change, the network learns which output to fire. At its most basic, what you have is a stimulus-response system. The environment might also include the body, for example whether a robot's battery is charging or draining.
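A minimal sketch of that stimulus-response idea, assuming some illustrative state and action names (the weight-update rule here is just a toy stand-in for whatever learning rule the network uses):

```python
# A minimal stimulus-response learner: each input (an environmental
# state such as "battery_draining") maps to an output (an action), and
# the mapping is strengthened or weakened by simple reward feedback.
# All state/action names are illustrative assumptions, not a fixed design.

STATES = ["battery_rising", "battery_draining", "obstacle_ahead"]
ACTIONS = ["explore", "seek_charger", "turn"]

class StimulusResponse:
    def __init__(self):
        # one weight per (state, action) pair, all equal to start
        self.weights = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def respond(self, state):
        # fire the output with the strongest learned association
        return max(ACTIONS, key=lambda a: self.weights[(state, a)])

    def learn(self, state, action, reward, rate=0.5):
        # strengthen or weaken the stimulus-response link
        self.weights[(state, action)] += rate * reward

agent = StimulusResponse()
# teach it that a draining battery should trigger charger-seeking
for _ in range(5):
    a = agent.respond("battery_draining")
    agent.learn("battery_draining", a, 1.0 if a == "seek_charger" else -1.0)

print(agent.respond("battery_draining"))  # settles on "seek_charger"
```

Nothing here is conscious, of course; it is purely the stimulus-response layer the paragraph describes.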
Now imagine what happens if we add another system that examines the state of the first network over time. It could look at things like whether the network is going through a phase change: are the same outputs being fired repeatedly? Is it struggling to settle into a stable state? Now feed the outputs of that second system into the inputs of the first, so that it can react to its own state. Surely from the two systems combined emerges a very basic form of consciousness, because the system is reacting to its own state? If you implemented such a system you could create something that had the functionality of boredom, or which could try something else when it recognised that it was not able to achieve something.
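The two-system loop could be sketched like this, assuming a made-up "bored" signal and threshold (the primary system here is a trivial rule rather than a trained network, just to keep the feedback loop visible):

```python
import collections
import random

# Sketch of the loop described above: a primary stimulus-response
# system picks actions, while a second system watches its recent
# outputs and feeds a "bored" signal back in as an extra input,
# nudging the first system to try something else. Names, the window
# size, and the boredom rule are all illustrative assumptions.

ACTIONS = ["explore", "rest", "turn"]

def primary(stimulus, bored):
    # ordinary stimulus-response mapping, except that the fed-back
    # boredom signal overrides the habitual response with a new try
    if bored:
        return random.choice(ACTIONS)
    return "rest" if stimulus == "quiet" else "explore"

class Monitor:
    """Second system: examines the first system's outputs over time."""
    def __init__(self, window=5):
        self.history = collections.deque(maxlen=window)

    def observe(self, output):
        self.history.append(output)

    def bored(self):
        # "the same outputs being fired repeatedly" -> boredom signal
        return (len(self.history) == self.history.maxlen
                and len(set(self.history)) == 1)

monitor = Monitor()
random.seed(0)
trace = []
for step in range(12):
    action = primary("quiet", monitor.bored())  # feedback into inputs
    monitor.observe(action)
    trace.append(action)

# the first five actions are the habitual "rest"; after that the
# boredom signal can kick in and the behaviour starts to vary
print(trace)
```

The point of the sketch is only the wiring: the monitor's output becomes part of the primary system's input, so the combined system reacts to its own state.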
Humans and animals can be more or less conscious of their own brains. Some people react in certain ways and don't know why; others are well aware of how their lives have conditioned them into certain responses. We can also slowly lose consciousness. It's not some binary magical property that we either have or don't.
Of course there are many different aspects to consciousness, but that's because there are many different functions of the brain. The problem is that we're wrapping it all into a single ball and labelling it as 'consciousness'. This is why it seems so difficult to define.