
Poll (closed): Artificial Intelligence: Good or Bad? Good: 4 votes (50.00%). Bad: 4 votes (50.00%). Total: 8 votes.

Artificial Intelligence
#51
RE: Artificial Intelligence
I was genuinely trying to explain rather than debate, but it seems you have been ascribing to me a position I don't actually hold, which is probably why what I said did not make sense to you.


(July 23, 2015 at 9:41 am)Rhythm Wrote: -You're aiming for a number "x", a measure of processing power whereby you feel that strong AI would be possible.  

No I am not. I never said that. I was trying to demonstrate that we don't have enough processing power, and that the more generalisable the adaptive solution, the more complex it will be and the more processing power it will require. I don't know how much processing power is required, but it's more than most people think. Not only that, but we will need to evolve our solutions (something I tried to demonstrate above). And the more complex our adaptive solutions are, the more processing is required to configure them.

Not only that, but we also need to understand what's going on; that might be the greatest limiting factor. That takes time, effort and the ability to measure at finer detail, and no amount of processing power will do that for us.

I am not saying that strong AI requires a certain amount of processing power or that it needs to model what we see in the brain. I do personally believe that it needs to be wholly self-organising though (I won't explain why). I was railing against a top-down approach where people point at something that they have simulated and claim that they have reproduced something. It's like drawing a picture of a house and pretending that you have built a shelter. The function of each is completely different.

This all started with us disagreeing about whether the brain is inefficient and slow, and maybe this just comes down to semantics. My position all this time has been that we don't actually know that for sure, and it's a difficult statement to actually qualify. We may suspect it to be the case because of what we understand about evolution. We know that the brain is not optimal precisely because we are able to generalise and adapt, but that's not the same as it being inefficient and slow. Nor am I denying that there is some scope for redundancy. This is getting into the realms of complexity theory though.
Reply
#52
RE: Artificial Intelligence
Are you not conflating intelligence with an ability to learn (or maybe I've missed your point and that's your entire premise)? I can't help feeling that your examples are akin to giving a child a chess board and expecting them to play without first teaching them the rules (the equivalent of programming). I don't think there's any escaping the fact that, once the rules are understood, a computer and a person end up performing the same task (traversing a tree of potential moves for the one with the best outcome). I totally accept that until AI systems can adapt themselves they're not really intelligent, but I do feel you're overestimating the power of the human mind. We are obviously capable of adapting, but not as readily as it may first seem. For instance, I wonder how many humans (or our ancestors) died before realising that you could kill an animal and take its fur? How many died when motor cars first arrived on the streets? Or, in more modern terms, how many people will get run over by electric cars before we (as a group) learn that cars don't necessarily make noise any more? Shared experience and knowledge is almost certainly more valuable than the adaptive power of a single mind.
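The task described here, traversing a tree of potential moves for the one with the best outcome, is classically implemented as minimax. Here is a minimal sketch over a made-up toy tree (the scores are arbitrary, for illustration only, not from any real game):

```python
# Minimax over a toy game tree: each node is either a number (a terminal
# score for the maximising player) or a list of child nodes. The two
# players alternate, one maximising the score and one minimising it.

def minimax(node, maximising=True):
    if isinstance(node, (int, float)):      # leaf: a terminal score
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A depth-2 tree: the maximiser picks a branch, then the minimiser
# picks the worst (for the maximiser) leaf within it.
tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree))  # -> 3
```

The rules of the game live entirely in the tree and in who moves when; the search itself is the same mechanical traversal whether a person or a machine performs it, which is the point being made above.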

Finally, I'd like to suggest watching this TED Talk about complex behaviours arising from very simple rules. What appear to be intelligent actions, cooperation, etc., could actually be little more than one of nature's clever "tricks".

https://www.youtube.com/watch?v=0Y8-IzP01lw
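The "complex behaviour from very simple rules" point can also be demonstrated directly. An elementary cellular automaton updates each cell from nothing but its own state and its two neighbours, yet a rule like Rule 30 produces famously intricate, hard-to-predict patterns. (The talk may use different examples; Rule 30 is just my illustration.)

```python
# Elementary cellular automaton: a cell's next state depends only on
# itself and its two neighbours (8 possible patterns). The "rule" number
# is just an 8-bit lookup table; Rule 30 is complex enough that it has
# been used as a source of pseudo-randomness.

def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # 0..7
        out.append((rule >> pattern) & 1)               # look up rule bit
    return out

cells = [0] * 31
cells[15] = 1                       # single live cell in the middle
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

The entire "physics" of this world is one three-cell lookup table, yet the printed triangle of activity is chaotic on one side, which is the kind of gap between rule complexity and behavioural complexity being referred to.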

Of course, having said all that, you are totally correct that we're not yet at the point where even rudimentary systems can adapt well in constrained problem domains, but I don't see that as a necessarily impossible hurdle to jump.
Reply
#53
RE: Artificial Intelligence
(July 23, 2015 at 11:07 am)davidMC1982 Wrote: Are you not conflating intelligence with an ability to learn (or maybe I've missed your point and that's your entire premise)? I can't help feel that your examples are akin to giving a child a chess board and expecting them to play, without first teaching them the rules (equivalent to programming).

If you're talking about the narrow range of human intelligence concerning culture, then yes, you need to program in rules. But most human intelligence doesn't involve that. The visual cortex has to self-organise so that we can learn to recognise objects regardless of their orientation, size, scale, distance, colour, movement and whether they are partially obscured, for example. But we're not aware of this happening in the brain; we just see an apple when someone holds it up and throws it at us. Babies are not given rules for how their cerebellum should adapt, except perhaps not to pick their noses. Even language will self-organise in small groups of people abandoned by their own culture.
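The kind of unsupervised self-organisation being described can be sketched with a Kohonen self-organising map: no labels or rules for the final arrangement are programmed in, only a local "move toward the input, dragging neighbours along" update, yet the map arranges itself to mirror the structure of its inputs. This is a toy illustration of the principle, not a model of the visual cortex:

```python
import random

# A tiny 1-D Kohonen self-organising map. Each unit is a point on [0, 1].
# For every input, the closest unit (the "best-matching unit") and its
# neighbours move toward that input. Nothing about the desired final
# layout is specified anywhere; it emerges from the local updates.

def train(units, samples, epochs=50, lr=0.3, radius=2):
    for _ in range(epochs):
        for x in samples:
            # best-matching unit: the one closest to the input
            bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
            for i in range(len(units)):
                if abs(i - bmu) <= radius:   # drag the neighbourhood along
                    units[i] += lr * (x - units[i])
    return units

random.seed(0)
units = [random.random() for _ in range(10)]       # random initial map
samples = [random.random() for _ in range(100)]    # unlabelled inputs
train(units, samples)
print([round(u, 2) for u in units])
```

Real SOMs are usually 2-D grids over high-dimensional inputs, with the learning rate and radius decaying over time, but the principle is the same: structure arises from exposure to data, not from programmed-in rules.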

And for the most part, intelligence occurs in animals other than humans. Maybe you could argue that packs have rules, but these are also partly instinctual.

If we're going to start off simple and build up then it's too early to consider programming in rules.
Reply
#54
RE: Artificial Intelligence
Agreed, but it's hard to separate the shared body of knowledge from innate intelligence. Once you take away prior experience, you are left with an entity that has been "programmed" or "hard-wired" to fulfil a role.

Take your apple example. Our brains have had an extraordinary amount of input and feedback that allows them to recognise apples of all shapes and sizes. Once a certain threshold of prior knowledge has been reached, recognising apples becomes easy; but let's not forget that without that external feedback (primarily from other humans), we're still left with children categorising grapefruits as oranges, pears as apples, dogs and cats as the male/female of the same species, etc. The corrective input we've had since first emerging from the womb is vast, but it pales into insignificance when compared to the retained knowledge of the whole of humanity that allows a child's parents to show it the error of its ways.

I'm of the opinion that our intelligence appears to be much greater than it really is, and that if we can solve a few hard but simple-to-define problems, intelligence will arise naturally. Once that happens, the speed of iteration found in artificial systems will soon allow AI to surpass human intelligence.
Reply
#55
RE: Artificial Intelligence
(July 23, 2015 at 12:27 pm)davidMC1982 Wrote: Agreed, but it's hard to separate the shared body of knowledge from innate intelligence. Once you take away prior experience, you are left with an entity that has been "programmed" or "hard-wired" to fulfil a role.

Take your apple example. Our brains have had an extraordinary amount of input and feedback that allows them to recognise apples of all shapes and sizes. Once a certain threshold of prior knowledge has been reached, recognising apples becomes easy; but let's not forget that without that external feedback (primarily from other humans), we're still left with children categorising grapefruits as oranges, pears as apples, dogs and cats as the male/female of the same species, etc. The corrective input we've had since first emerging from the womb is vast, but it pales into insignificance when compared to the retained knowledge of the whole of humanity that allows a child's parents to show it the error of its ways.

I'm of the opinion that our intelligence appears to be much greater than it really is, and that if we can solve a few hard but simple-to-define problems, intelligence will arise naturally. Once that happens, the speed of iteration found in artificial systems will soon allow AI to surpass human intelligence.

That's quite possible, but we won't really know until we manage it. It's true, though, that there are some hard but simple-to-define problems that will lead on to a whole wealth of intelligent behaviour if we can solve them. And they are extremely difficult.
Reply
#56
RE: Artificial Intelligence
(July 23, 2015 at 11:04 am)I_am_not_mafia Wrote: I was genuinely trying to explain rather than debate, but it seems you have been ascribing to me a position I don't actually hold, which is probably why what I said did not make sense to you.


(July 23, 2015 at 9:41 am)Rhythm Wrote: -You're aiming for a number "x", a measure of processing power whereby you feel that strong AI would be possible.  

No I am not. I never said that. I was trying to demonstrate that we don't have enough processing power, and that the more generalisable the adaptive solution, the more complex it will be and the more processing power it will require. I don't know how much processing power is required, but it's more than most people think. Not only that, but we will need to evolve our solutions (something I tried to demonstrate above). And the more complex our adaptive solutions are, the more processing is required to configure them.
 
But you -cannot- know this unless you have a range for "x" to begin with. By stating that we don't have the processing power, you have explicitly made a statement regarding the processing power required, even if that wasn't your intention, and even if it weren't implicit in your position, which it is. You may think it's more than most people think, but you -cannot- know that either. You can't even begin to establish what you hope to establish without at least a conceptual range for the variable "x" - which was, abbreviated, "we don't have it, and Moore's law may not hold long enough to get it".

Both of us think that humans possess that number "x", but what portion of what we possess is enough to satisfy "x" is still an undefined variable. You (we, the royal we) have failed to create strong AI, in your estimation...and you've decided that the processing power available is responsible for that, at least in part - but you cannot demonstrate that this is cause rather than correlation, or even misattribution. Perhaps we are currently capable of building a machine with -many times- the processing power required, and have simply failed to leverage that processing power. It is impossible for you to know, or even hold an informed position, let alone provide compelling evidence or valid argument...without some idea of what would satisfy you as "x".

If it were a debate, perhaps I'd impeach your variable "x". I'm not looking to do that; I'm just looking to see what you would accept as "x" - so we can determine whether or not we have that ability, assuming your framework entirely. I'm looking to -agree- with you and then give a more solid number for "x" based upon your assumptions, regardless of whether or not they are true. You won't be able to explain to me what it is you're trying to communicate until we can pin that variable down conceptually, even if we can't pin it down practically (build an AI), factually (have knowledge of the exact number required) or accurately (demonstrate -by- building an AI -to- that number required).

I do, btw, agree with you that an entirely top-down approach is unlikely to yield the effect - but like many, I think that top-down and bottom-up meet somewhere, and they certainly seem to do so in our case, as an example of strong intelligence. We are both evolved from the bottom and programmed from the top. It takes both, seemingly, to yield human intelligence (or -any- intelligence, using those examples we have to work with). The structure has to be capable, but it also has to be able to accept (and act on) instruction from "the outside". No sensible definition of intelligence, of any kind, omits either avenue.

Quote:Not only that but we also need to understand what's going on, that might be the greatest limiting factor. That takes time, effort, the ability to measure at finer detail and no amount of processing power will do that for us.

I am not saying that strong AI requires a certain amount of processing power or that it needs to model what we see in the brain. I do personally believe that it needs to be wholly self organising though (I won't explain why). I was railing against a top down approach where people point at something that they have simulated and claim that they have reproduced something. It's like drawing a picture of a house and pretending that you have built shelter. The function of both is completely different.
You absolutely are, and continue to say precisely that. You've decided that it requires an amount "x", x being: greater than we can build with current architectures, but presumably equal to or less than what we, in ourselves, possess.

Quote:This all started with us disagreeing about whether the brain is inefficient and slow, and maybe this just comes down to semantics. My position all this time has been that we don't actually know that for sure, and it's a difficult statement to actually qualify. We may suspect it to be the case because of what we understand about evolution. We know that the brain is not optimal precisely because we are able to generalise and adapt, but that's not the same as it being inefficient and slow. Nor am I denying that there is some scope for redundancy. This is getting into the realms of complexity theory though.
Perhaps it does, in the case of efficiency - as I suggested the very moment I made those comments - but in the case of processing power required, it is most definitely not an issue of semantics (though the fact that only a small portion of what we have is required to present the effect is the strongest evidence, to me, that our brains are inefficiently constructed). You brought our brains into consideration; you brought Moore's law into consideration. My objections, under the framework you supplied, -need- to be addressed before I can assign any truth value to that statement, before I can understand what it is you mean to say.
I am the Infantry. I am my country’s strength in war, her deterrent in peace. I am the heart of the fight… wherever, whenever. I carry America’s faith and honor against her enemies. I am the Queen of Battle. I am what my country expects me to be, the best trained Soldier in the world. In the race for victory, I am swift, determined, and courageous, armed with a fierce will to win. Never will I fail my country’s trust. Always I fight on…through the foe, to the objective, to triumph overall. If necessary, I will fight to my death. By my steadfast courage, I have won more than 200 years of freedom. I yield not to weakness, to hunger, to cowardice, to fatigue, to superior odds, For I am mentally tough, physically strong, and morally straight. I forsake not, my country, my mission, my comrades, my sacred duty. I am relentless. I am always there, now and forever. I AM THE INFANTRY! FOLLOW ME!
Reply
#57
RE: Artificial Intelligence
(July 23, 2015 at 1:11 pm)Rhythm Wrote:
(July 23, 2015 at 11:04 am)I_am_not_mafia Wrote: I was trying to demonstrate that we don't have enough processing power, and that the more generalisable the adaptive solution, the more complex it will be and the more processing power it will require. I don't know how much processing power is required, but it's more than most people think. Not only that, but we will need to evolve our solutions (something I tried to demonstrate above). And the more complex our adaptive solutions are, the more processing is required to configure them.
 
But you -cannot- know this, unless you have a range of "x" to begin with.

So practical experience counts for nothing in your book? My own practical experience, and that of the entire field over several decades?


(July 23, 2015 at 1:11 pm)Rhythm Wrote:   You may think it's more than most people think, but you -cannot- know that either.

Again, real-world experience. Just look at this thread, for example, at people extrapolating from non-adaptive smart programming and thinking that we're almost there. The field has been "almost there" for the last five decades, which is why there have been AI winters. Real-world experience of how R&D sells "AI" that is essentially just statistical techniques for very specialised purposes.



(July 23, 2015 at 1:11 pm)Rhythm Wrote:  You (we, the royal we) have failed to create strong ai, in your estimation...

It's not a binary thing. It's not like you suddenly get AI working or consciousness working. In your deleted example, the brain itself degrades gracefully as it is increasingly damaged. Some people are more or less conscious or intelligent than others.


I have created strong AI functions. They don't do much but they are very good at what they do do, and they most certainly generalise and adapt well.



(July 23, 2015 at 1:11 pm)Rhythm Wrote: and you've decided that the processing power available is responsible for that at least in part-

Again, you are mistaking what I am saying. I am saying that people do not appreciate how much processing power is required. I tried to explain why, but you have fought me at every step, even when what I say agrees exactly with what you are saying.


(July 23, 2015 at 1:11 pm)Rhythm Wrote:  Perhaps we are currently capable of building a machine with -many times- the processing power required, we have simply failed to leverage that processing power.  It is impossible for you to know, or even hold an informed position, let alone provide compelling evidence or valid argument

But that is not a falsifiable statement and therefore not scientific. It's like saying that we cannot know that God does not exist. In practice there is no evidence to suggest that God does exist; likewise, there is no evidence to suggest that we are capable of building a machine with many times the processing power required for artificial general intelligence, but there is plenty of evidence to suggest that we aren't.

All we can do is go by the evidence and there is a hell of a lot of it. This is how the scientific method works. This is why we still talk about the theory of evolution even though for all intents and purposes it is fact.

Until we come up with a technique as efficient as you theorise is possible, we need to carry on working with what we have.


(July 23, 2015 at 1:11 pm)Rhythm Wrote:
(July 23, 2015 at 11:04 am)I_am_not_mafia Wrote: I am not saying that strong AI requires a certain amount of processing power or that it needs to model what we see in the brain. I do personally believe that it needs to be wholly self organising though (I won't explain why). I was railing against a top down approach where people point at something that they have simulated and claim that they have reproduced something. It's like drawing a picture of a house and pretending that you have built shelter. The function of both is completely different.
You absolutely are and continue to say precisely that.  You've decided that it requires an amount "x",  x being: greater than we can build with current architectures, but presumably equal to or less than what we, in ourselves, possess.   

It's pointless to carry on with this if I say one thing and you tell me that I am saying something completely different. This is not a one-off; you have been doing it throughout the entire thread. If you want to convince yourself that I am saying something completely different, then whatever makes you happy; I'm not going to waste my time. It's also pointless to continue if you don't take real-world evidence into account because it does not fit in with your theory.

I was trying to enlighten people and I went to great efforts to do so.  But if you're going to dismiss everything that I say that does not fit into your personal theory then I am wasting my time.
Reply
#58
RE: Artificial Intelligence
(July 13, 2015 at 6:10 pm)I_am_not_mafia Wrote: First, Moore's law won't last long enough to give us the processing power that we need. The brain has far more connectivity than we can ever hope to achieve with our current architectures in silicon.
I won't be told that I'm trying to force a position on you that I can quote you as having graced us with. No, your practical experience doesn't count for what you think it does; you may be a brilliant AI researcher, but you've made a statement which doesn't follow, based on a clear lack of knowledge regarding the thing you invoked as a means of explanation...and I fail to see how you might enlighten us in doing so. I think that there was -a lot- wrong with the post that this is sourced from...but after having tried to discuss -all- of it and failing, I decided that this and this alone was enough meat for both of us to chew on. Precisely because it -is- a falsifiable statement, and I've been attempting to provide you with the means to do so.

When you make the claim above, there is a variable "x" implied in the statement.  That is unavoidable.  The "processing power we need", presumably in order to create Strong AI, is "x" -whatever that number is, neither of us knows.

When(if) you reference the human brain as an explanation you have to accept that the human brain has much more "x" than it needs.  
https://en.wikipedia.org/wiki/John_Lorber
http://www.telegraph.co.uk/culture/books...brain.html
(just for starters)

You also might find it prudent to allow for some other candidate system as a means of comparison...or, as I've stated many times, you aren't talking about creating strong AI; you're talking about creating a simulated human being. So "x", a variable that you created in making that statement, can be no greater than Simon Lewis's "x"...but possibly much less. That number may still be huge, but it's getting smaller with every reference you've made thus far, and can only get -smaller still- with the reference you've refused to make: some non-human candidate for comparison using the -same- framework.

Moore's law has to get us to that undefined minimum - assuming there's a relationship to begin with - but it certainly doesn't need to take us any farther than Simon Lewis's "x", and we're not certain how much of that "x" is responsible for his presenting strong intelligence, relative to how much is devoted to presenting specifically human strong intelligence. We could be certain, between us, on that last count, but doing so would take lines of discussion that have not thus far been fruitful between us.
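For a sense of the timescale being argued over, here is a back-of-envelope sketch. The figures (~10^15 synapses in a human brain, ~10^10 transistors on a large mid-2010s chip, one doubling every two years) are rough, commonly cited ballpark numbers, not claims made by either poster, and a transistor is in no way equivalent to a synapse; the point is only how the doubling arithmetic works:

```python
import math

# Rough arithmetic: how many Moore's-law doublings would take transistor
# counts from a mid-2010s chip to synapse-scale connectivity, and how
# long that would take at one doubling every two years. All inputs are
# ballpark assumptions for illustration.

synapses = 1e15          # rough upper estimate for the human brain
transistors = 1e10       # large chip, circa 2015
years_per_doubling = 2

doublings = math.log2(synapses / transistors)
years = doublings * years_per_doubling
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

Note how sensitive the answer is to the chosen "x": shrinking the target by a factor of ten (as the Lorber-style examples might suggest) removes about 3.3 doublings, i.e. several years, which is why pinning down the variable matters to the argument.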

In short, on the upper end, "x" is a value that is impressive, immense even, but much smaller than your statement would imply - and that much -is- demonstrable, and falsifiable. Have at it. Good luck: you'll need to demonstrate that people such as the ones in my examples -don't exist-, or that they don't present strong intelligence. The low end you have failed to define, despite my repeated asking, so I can't say much there that would hold for you...but only because you've refused to participate in that regard.

Are you entirely sure that you've been having the same conversation with me....that I've been trying to have with you?
Reply


